ABOUT
Eric Decker
is a Sr. Engineering Manager at Squarespace for the Websites organization and previously the VP of Technology for the digital agency Firstborn. He lives in New Jersey with his wife, daughter, and too many cats. When he’s not wasting time generating silly images, he enjoys cooking, running, and watching the opossums that visit the backyard at night. His favorite animal is the humble pigeon. (You’ll see pigeons & opossums a lot here…)
Common Tools
-
MidJourney
Currently, MidJourney is my go-to tool. It’s versatile, quick, and relatively inexpensive. New models and features are released frequently, and it’s easy to use casually from Discord.
-
OpenAI
Previously DALL•E and now accessed via ChatGPT, OpenAI’s image generation quality seems to fluctuate, and it’s typically a lot slower than MidJourney. I’ll use it for specific purposes, one being that so far it’s been the best at generating opossums, my daughter’s favorite animal.
-
NightCafe Studio
My first foray into image generation, starting with style transfer. Today, NightCafe offers a slew of models to choose from and an active creator community.
-
VQGAN
In the fall of 2021, I started playing around with Google Colab to run VQGAN+CLIP notebooks to generate images. While far lower quality and resolution than what we’re used to today, they had a pleasant ethereal quality to them.
-
Runway
In early 2020 I experimented with training my own models on RunwayML – generating craft beer descriptions and imitating my own style of drawing butts.
-
SnowPixel
A web-based image generation service I used a lot in 2021; at the time, SnowPixel provided some of the highest-resolution images possible.
-
Stable Diffusion
Stability.ai’s open-source Stable Diffusion model made high-quality image generation available to the public.
-
Ideogram
Ideogram is another web-based image generation tool which, at least at its launch in 2023, was superior at text generation.
-
Artbreeder
Artbreeder is a very specific AI tool that lets you blend faces together to create new ones – sometimes leading to some very interesting creations…
-
ControlNet
In September of ‘23, ControlNet was all the rage, allowing for the generation of hidden image optical illusions.
Stuff
Whenever possible, I like the idea of bringing the digital into the physical.
Labels for homemade hot sauces in 2024
A Halloween opossum shirt for my daughter made in 2023
A pin made for a work summit in 2025 (and yes, the misspelling is deliberate)
A note on the speed of evolution…
Generative image technology has evolved and improved faster than any technology I’ve seen before. Writing this in 2025, it’s amazing to look back at older images and realize that “back in the day” was something like four years ago.
In June of 2022, one of the first images I made with MidJourney was from the prompt “A pigeon with a cigarette in its beak trending on artstation” (remember those early “hacks” of adding things like “ultra HD” or “trending on artstation” to get better results?). This must have been V2, which made an image that looked like a pigeon, but had a cigarette coming from its belly rather than in its beak. Today, you get much better results, and it’s only been three years.
Another fun experiment has been taking very early image generations that were naturally more abstract due to the state of generation at the time, and asking MidJourney to interpret the image (using describe) and then making new iterations based on the inferred prompt. Above is an image generated to look like a John Harris space painting from September 2021, and then reimagined by MidJourney in September of 2025.
When OpenAI first teased DALL•E, the images were the most advanced examples to date. I remember in May 2022, some random person online graciously offered to generate images for me when the tool was still in a heavily restricted beta. I remember getting the images back and being blown away by how “real” they looked – in this example, a pigeon on the streets of NYC wearing a gold chain and sunglasses (to be fair, I warned you about the pigeons.) Results from today in September of 2025 look much more photorealistic. (Also, I had forgotten about the required colored squares watermark in those earlier days.)
In early 2022 I made this… thing… in SnowPixel. I think the prompt was “a man named Randy Salmon,” and honestly this bizarre old fishman crystal ham-head fella holds a special place in my heart. MidJourney interpreted it as more cyberpunk, and also missed the salmon aspect.
I do miss some of the “wild west” of generation in the early days – which, again, is a hilarious way to describe a technology from only a couple of years ago. One of my first prompts on NightCafe was “a sexy muffin dripping with butter,” and it produced a horrifying result. But it had me hooked (and, for a time, this quote was actually used on the NightCafe homepage…). Today, it’s hard to replicate this with modern tools due to more aggressive content filters.
It was also a lot of fun to watch how content was made – some tools today still offer the option to export a generation time-lapse video.
Inspiration
In 2007, while in college at RIT for New Media Design & Imaging, I saw a presentation at FITC Toronto by Mario Klingemann titled “The Blink Sketchmaker.” In this presentation, he talked about first teaching a computer to “see” and understand art, based on basic principles like composition, symmetry, etc. After teaching it to see, he then taught it to generate random images based on those same principles, and then had it evaluate the generations. Generations that scored “better” would evolve. He ended up with some really cool-looking art that predated the current generative tools by well over a decade.
The collection of images from Sketchmaker is still up on his Flickr page here.
This image has remained in my head since 2007 and is my personal favorite from his collection.
