How I Learned to Stop Worrying and Love AI — Experiment 3

The One Where I Became an Accidental AI Art Curator

Like most catastrophically disorganised filing systems, this one began innocently enough. I'd discovered a particularly striking AI-generated artwork, complete with its corresponding style code, and needed somewhere to save it. So I opened a WhatsApp chat with myself and sent the image there. "Perfect," I thought, "I'll just keep them all in one place."

Several months and hundreds of images later, I found myself frantically scrolling through an endless WhatsApp conversation with myself, searching for that one particular artwork with that one specific style code that would be perfect for... well, you get the idea. My "filing system" was no better than a shoebox under the bed.

MidJourney — arguably the most sophisticated AI image generation tool available today — has a feature called "Style References", or SREFs. These are essentially numerical codes that you can add to your prompt (something like --sref 1937593590), and suddenly the variations generated from your prompt "photograph of a corgi wearing a spacesuit" maintain a consistent, predictable aesthetic rather than MidJourney's usual creative randomness.

AI creators share these SREF codes freely across the internet, but collecting and organising them quickly became impossible with my basic WhatsApp-based archival method.

The Gallery

So, like I find myself doing a lot these days, I began to wonder if AI could help me create a better system of image curation. I turned to my AI coding companion, Claude, and we sketched plans for what would become the MJ Gallery.

The requirements were deceptively simple:

  1. Categorise images by type (illustration, photograph, 3D render, or photomanipulation)

  2. Generate detailed descriptions of each image's style and possible inspirations

  3. Extract dominant colour palettes

  4. Create searchable tags

  5. Most importantly (because I'm fundamentally lazy), automate as much of this as possible

What started as a weekend project quickly snowballed into a three-week adventure in backend development — a first for me. Each evening after work, I'd sit down with Claude and tackle another piece of the puzzle.

First, we built the basic image upload functionality (which, I learned, is significantly more complex than just moving files from point A to point B). Then came the AI integration, and finally, the search functionality that would make this actually useful.

The public-facing gallery was far more than just a grid of pretty pictures. It became a proper curated space with nuanced search functionality, commonly searched terms for easy access, and even a page dedicated to my opinion on the ongoing debate about AI in creative industries. TL;DR: I'm cautiously optimistic, but I absolutely understand the concerns.
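The post doesn't describe how the search works under the hood, but a simple version of tag-and-description search could look like the sketch below. The data model (an image as a dict with "title", "description", and "tags") is my own assumption for illustration:

```python
def search_images(images, query):
    """Case-insensitive match of the query against each image's
    description and tags (a hypothetical record shape)."""
    q = query.lower()
    return [img for img in images
            if q in img["description"].lower()
            or any(q in tag.lower() for tag in img["tags"])]

# A tiny made-up gallery to demonstrate the filter
gallery = [
    {"title": "Corgi astronaut",
     "description": "Playful retro sci-fi photograph",
     "tags": ["photograph", "retro", "space"]},
    {"title": "Neon alley",
     "description": "Moody cyberpunk illustration",
     "tags": ["illustration", "neon"]},
]

print([img["title"] for img in search_images(gallery, "retro")])
# → ['Corgi astronaut']
```

A real implementation would likely push this filtering into the database (Supabase/Postgres full-text search), but the idea is the same: match the query against every searchable field of each image.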

Is AI the Enemy of Real Creativity?

Like every creative professional over these past months, I’ve had the same knot in the pit of my stomach about AI. I felt it the first time I opened MidJourney. Or Claude. Or Perplexity. As a 42-year-old creative director who has built a career on human connection and craftsmanship, I've grappled with the same questions: How do we preserve the soul of our craft in this rapidly evolving landscape? What happens to the countless hours we've poured into mastering our skills?

I don’t have the answers — no one does. All we can do is muddle our way through. The gallery, therefore, needed to be more than just another repository of style codes for people to copy and paste. It needed to acknowledge both reactions these tools typically inspire: the "WTF, this is an abomination and I will never use AI!" crowd and the "WTF, this could be helpful to my creative workflow!" optimists.

In their current form (as of January 2025), I've come to view these tools as sophisticated sketches: brilliant for ideation and exploration, but never the final artwork. Using them feels like having an infinitely patient creative companion, ready to chase down every wild concept that crosses your mind. The real delight lies in how these tools accelerate our creative journey — letting us fail faster, learn quicker, and explore more boldly.

But — and this is crucial — when it comes to commissioned work or client projects, there's simply no substitute for human collaboration. Talented designers, illustrators, and photographers bring something machines can't: the ability to authentically understand and translate unique visions. Always hire human creatives for final work!

Of Dashboards and AI Automations

Now, to the technical side of things... The backend dashboard (which only I have access to) is divided into four main sections:

  • Dashboard: A bird's eye view of the gallery's stats — total images, storage used, recent uploads, and AI analysis metrics. It's pretty satisfying watching these numbers grow, I will admit.

  • Upload: This is my favourite bit. What looks like a simple image upload form is actually an AI-powered analysis platform. Drop in an image (or a batch of them), click "Analyse with AI", and GPT-4o-mini gets to work. It automatically generates detailed descriptions that capture not just what's in the image, but its artistic style, potential inspirations, and even subtle nuances in composition. It identifies the type of image (illustration, photograph, 3D render, or photomanipulation), suggests relevant tags, and even extracts the four dominant colours from the image.

  • Library: Essentially the same as the public gallery but with edit and delete functions. This is where I can tweak the AI-generated descriptions or add additional tags if needed.

  • Analytics: While fairly basic, it offers some interesting insights. I can see which images are most popular, which tags are most searched, and even get a tag cloud based on usage. It's fascinating to see which styles resonate most with visitors.
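The Upload section above is where the AI does its heavy lifting. The post doesn't show the actual request, so here is a minimal sketch of what such an analysis payload could look like, assuming the OpenAI Chat Completions API with image input; the prompt wording, field names, and helper name are my own guesses, not the gallery's real code:

```python
import base64

def build_analysis_request(image_bytes: bytes, model: str = "gpt-4o-mini") -> dict:
    """Build a Chat Completions payload asking the model to analyse an artwork.

    Requesting JSON output means the response can be stored in the
    database directly (hypothetical field names below).
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    prompt = (
        "Analyse this artwork. Respond with JSON containing: "
        "'type' (illustration | photograph | 3d_render | photomanipulation), "
        "'description' (style and possible inspirations), "
        "'tags' (a list of strings), and "
        "'palette' (four dominant colours as hex strings)."
    )
    return {
        "model": model,
        "response_format": {"type": "json_object"},
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                # Images can be passed inline as a base64 data URL
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# With the official openai SDK, the call would then be roughly:
#   resp = client.chat.completions.create(**build_analysis_request(data))
#   analysis = json.loads(resp.choices[0].message.content)
```

One request per image keeps things simple; the structured-JSON response format is what makes the "automate as much as possible" requirement practical.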

For my fellow nerds, the backend is built with Next.js and uses Supabase for the database and authentication. The image analysis pipeline was more complex than I'd anticipated — I tried multiple AI models before settling on GPT-4o-mini for the analysis. The colour extraction also went through multiple iterations before I was satisfied.
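The post doesn't say which colour-extraction approach won out, but one simple technique — assuming the pixels are available as RGB tuples (e.g. via Pillow's getdata()) — is to coarsely quantise each channel and count the most common buckets:

```python
from collections import Counter

def dominant_colours(pixels, n=4, bucket=32):
    """Return the n most common colours, coarsely quantised.

    Snapping each channel to the centre of a `bucket`-sized step merges
    near-identical shades, so the result reflects the image's broad
    palette rather than pixel-level noise.
    """
    def quantise(rgb):
        return tuple((c // bucket) * bucket + bucket // 2 for c in rgb)
    counts = Counter(quantise(p) for p in pixels)
    return [rgb for rgb, _ in counts.most_common(n)]

# A synthetic "image" dominated by reds, with some blue:
pixels = ([(250, 10, 10)] * 60 + [(245, 15, 5)] * 40
          + [(10, 10, 250)] * 30)
print(dominant_colours(pixels, n=2))
# → [(240, 16, 16), (16, 16, 240)]
```

More sophisticated alternatives (k-means clustering, median cut) give smoother palettes, which may be why the real implementation took several iterations.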

One particularly tricky challenge was handling batch uploads. I learned more about rate limiting and queue management than I ever thought I would! But the end result is a system that can process dozens of images in one go, with each image getting a thorough AI analysis that would take me hours to do manually.
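The article doesn't show the queue code, but a common pattern for the rate-limiting problem it describes is an asyncio semaphore capping concurrent API calls. The helper names below are hypothetical stand-ins:

```python
import asyncio

async def analyse_image(name: str) -> str:
    """Stand-in for the real AI call; sleeps briefly to simulate latency."""
    await asyncio.sleep(0.01)
    return f"{name}: analysed"

async def process_batch(names, max_concurrent=3):
    """Process a batch while never running more than max_concurrent
    analyses at once -- a simple way to stay under an API rate limit."""
    sem = asyncio.Semaphore(max_concurrent)

    async def worker(name):
        async with sem:  # blocks until a slot frees up
            return await analyse_image(name)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(worker(n) for n in names))

results = asyncio.run(process_batch([f"img_{i}.png" for i in range(10)]))
print(results[0])
# → img_0.png: analysed
```

Production systems often add retry-with-backoff on top of this for when the API still returns a rate-limit error, but the semaphore alone covers the "dozens of images in one go" case.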

The gallery is now live and open to anyone interested in exploring AI-generated artwork or finding inspiration for their own creative projects. And while it's far from perfect (there's always room for improvement), it's rather satisfying to see how a simple WhatsApp chat (with myself) evolved into a proper digital archive.

Check it out at this link!

Stay tuned for the next instalment in this series, where I'll share how I built a tool that signs PDFs without all the usual nonsense — because sometimes the best projects come from being thoroughly frustrated with existing solutions.

Watch this space!
