Mariah Williams

A photo of Mariah Williams

Name: Mariah Williams 

Age: Old enough to know better, but still young enough to do it anyway! 

Origin: I hail from the lands of Norway and Germany. 

Profession: Medical Assembler by day, amateur crochet artist by night. 

Killer Style: I’m a pro at rocking the clean room attire and looking fabulous in a hairnet. 

Freaky Flaw: Sometimes I get a little too attached to my creations, but don’t worry, none of them have come to life… yet! 

Pet: Spike, the lovable dog; Indie, the adventurous hamster; and Margo, the surprisingly friendly jumping spider. They’re my own little monster squad, and I wouldn’t trade them for anything. 

Favorite Activity: Crocheting – it’s like stitching together monsters, but with yarn and love. 

Biggest Pet Peeve: When people say they’re “not animal people.” I mean, come on – have they even met Margo the jumping spider? 

Favorite Food: Anything that doesn’t require assembly (I get enough of that at work!) 

Favorite School Subject: Art – because who doesn’t love expressing themselves through colorful creations? 

Least Favorite School Subject: Math – but hey, at least it helps me count all those tiny medical device parts! 

Favorite Color: Lab coat white and safety goggle blue 

BFF’s: Spike, Indie, Margo, and my amazing coworkers who make every day at the lab a blast.


Name: Mariah Williams
Contact Information: Phone: (763) 614-8007

Experienced medical assembler with a proven track record of success in the medical device industry. Skilled in assembly, testing and inspection of various medical devices. Strong attention to detail and ability to work in a fast-paced environment.

Medical Assembler, Medtronic (2019-Present)

  • Assemble, test and inspect various medical devices according to established procedures
  • Maintain a clean and organized work area to ensure a safe and efficient work environment
  • Collaborate with team members to meet production goals and improve assembly processes

Cashier, Petco (2018-2019)

  • Provided excellent customer service and assisted customers with their pet-related needs
  • Maintained a clean and organized store environment
  • Assisted with the training of new employees

High School Diploma, Northwest Passage High School (2018)


  • Medical device assembly
  • Testing and inspection
  • Attention to detail
  • Teamwork
  • Ability to work in a fast-paced environment

Certified Medical Assembler, Medtronic (2022)

The Future of AI is not Behind an API, an Essay by Mariah Williams

April 23, 2023

ChatGPT took the world by storm in November 2022, becoming the fastest-growing website in history. It was a new type of software that offered its users exciting and bizarre new capabilities, like autonomously editing essays, or implementing Python merge sort with pirate-themed variable names. If there is one downside to the LLM revolution, however, it’s that the most powerful language models developed by OpenAI are hidden behind an API. Everyone is free to use and experiment with the models, but we can’t see the weights, we can’t apply our own RLHF to steer the model, and we can’t fine-tune the model on our own datasets. This seemed to be the new status quo, since outside of multibillion-dollar companies, no one had the resources to train their own LLM.
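To give a taste of that kind of request, here’s my own sketch (not actual ChatGPT output) of what a pirate-themed merge sort might look like:

```python
# A toy illustration of the kind of request ChatGPT could fulfill:
# merge sort, but every variable name is pirate-themed.

def plunder_sort(booty):
    """Merge sort: split the booty, sort each half, merge the loot."""
    if len(booty) <= 1:
        return booty
    mast = len(booty) // 2                      # split at the mast
    port = plunder_sort(booty[:mast])           # sort the port side
    starboard = plunder_sort(booty[mast:])      # sort the starboard side
    return merge_loot(port, starboard)

def merge_loot(port, starboard):
    """Merge two sorted halves into one sorted treasure pile."""
    treasure = []
    while port and starboard:
        if port[0] <= starboard[0]:
            treasure.append(port.pop(0))
        else:
            treasure.append(starboard.pop(0))
    return treasure + port + starboard

print(plunder_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```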

Facebook’s LLaMA

This all changed in March of 2023, when Facebook’s LLaMA model leaked to the public. Like GPT-3, LLaMA is a large language model with seemingly limitless potential, and the same emergent abilities of GPT-3 are also present in LLaMA: it can write code, edit essays, and write poems. Not only that, the LLaMA model comes in four sizes, the smallest of which can run on commodity hardware like a MacBook Air.

Since March 2023, countless individuals and organizations have been experimenting with LLaMA and adding capabilities. Stanford announced their take on LLaMA with a new model called Alpaca, which fine-tunes LLaMA on instruction-following demonstrations to better align the model with human intent. This version of the model was much more useful than the raw output of LLaMA, and could hold long conversations without getting sidetracked. As a follow-up to Alpaca, UC Berkeley released their own open-source fine-tuned LLaMA variant called Vicuna, trained on user-shared ChatGPT conversations, whose responses achieve 90% of the quality of ChatGPT 3.5.

The final, and possibly the most exciting, LLaMA variant released this month is a fork of Vicuna developed by the University of Wisconsin-Madison called LLaVA. This model takes a page out of GPT-4’s playbook, incorporating multimodal vision capabilities. While a typical LLM takes text as input and outputs text, a multimodal LLM like GPT-4 (and now LLaVA) takes both images and text as input. This exciting new development opens the door to a range of capabilities not present in any of the currently public versions of ChatGPT.
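To make the interface difference concrete, here is a minimal sketch. The function names are hypothetical and the “models” are stubs; the point is that multimodality only changes the contract by letting an image accompany the text prompt:

```python
# Minimal sketch of the text-only vs. multimodal LLM interface.
# Function names are hypothetical and the models are stubbed out;
# a real LLaVA or GPT-4 call would replace the bodies.

def text_only_llm(prompt: str) -> str:
    # The classic LLM contract: text in, text out.
    return f"[text response to: {prompt!r}]"

def multimodal_llm(prompt: str, image: bytes = b"") -> str:
    # Same contract, except the prompt may be paired with an image.
    if not image:
        return text_only_llm(prompt)
    return f"[response to: {prompt!r}, conditioned on a {len(image)}-byte image]"
```

With a stub like this, `multimodal_llm("Describe this image in great detail:", image=photo_bytes)` mirrors the kind of prompt-plus-photo queries shown in the examples below.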

LLaVA Output Examples

As a demonstration of LLaVA’s capabilities, I spun up an instance of LLaVA on my local machine and will now present the output I got after feeding in several images. The model is not only able to see the images and describe them, but it can also answer questions about the image and form opinions about what it’s seeing.

Dog photo by Mariah Williams

Describe this image in great detail:

“The image features a happy dog sitting in the passenger seat of a car. The dog is wearing a pair of sunglasses, giving it a cool and amusing appearance. A person is also in the car, sitting close to the dog, with their arm around it, creating a warm and friendly atmosphere.

In the background, there is a cell phone placed on the car’s dashboard, and the car’s interior is visible, including the passenger seat and the area around the dog. The scene is a fun and lighthearted moment between the person and their canine companion.”

Butterfly photo by Mariah Williams

Describe this image in great detail:

“The image showcases a beautiful butterfly sitting on top of a purple flower, creating a vibrant scene. The butterfly has landed on the flower, perhaps to feed on the nectar or simply enjoy the colorful surroundings. The field where the flower and the butterfly are located is filled with various plants and flowers, creating a rich and diverse environment for the butterfly to explore.”

Koi pond photo by Mariah Williams

Describe this image in great detail:

“The image features a large group of colorful fish swimming together in a big pond. The fish are stirred up, possibly due to their active behavior or the water’s movement.
There are numerous fish in the scene, each with different colors and sizes. The fish are spread throughout the pond, some near the top, others at the bottom, and many in the middle area of the water. The scene captures a lively atmosphere, showcasing the beauty and diversity of the fish in their natural habitat.”

Do the fish look hungry?

“It’s hard to determine the fish’s emotions, but given that they are in a lively and active state, it is more likely that they are curious, engaged in their surroundings, or perhaps responding to potential threats or the presence of other fish. The fish are not visibly displaying signs of hunger.”

Local LLM Impact

The introduction of multimodal capabilities in LLaVA, which allows the model to analyze and respond to both images and text, expands the range of applications for these AI models. From assisting in image recognition and analysis tasks to providing creative inspiration for artists, the possibilities are virtually endless. Furthermore, as LLaVA and its variants continue to be refined and adapted, we can expect to see even more impressive advancements in the AI field, including greater accuracy, stronger alignment with human values, and the ability to tackle increasingly complex tasks.

The emergence of LLaVA and its open-source variants marks a significant shift in the landscape of AI research and development. No longer restricted to the domain of multibillion-dollar corporations, the power of large language models is now available to the broader public, enabling researchers, developers, and hobbyists alike to tinker with and refine these models according to their own needs and desires. This democratization of AI technology has the potential to fuel innovation and collaboration on a global scale, with universities and independent researchers contributing to the development of even more advanced and versatile models.

The future of AI is not behind an API – it is in the hands of the people who can now experiment, build, and share their discoveries with the world. This new era of accessible AI technology is poised to revolutionize industries, reshape the way we interact with machines, and ultimately, redefine the boundaries of human potential.

My Secret Hobby Photo Gallery

A trip I would love to go on.