Why I Built Prompt Lab: From Copy-Pasting to Understanding

I am an indie developer. My days are usually spent digging for keywords, writing code, and trying to get people to look at my products.

Recently, with the explosion of Nano Banana (Google Gemini), I decided to go down the rabbit hole of AI image generation.

The "Expert" Trap

I thought I was ready. I had plenty of experience integrating LLM APIs and I knew text prompts inside out. I had studied frameworks like BROKE and LangGPT. I thought, "How hard can image prompting be?"

It turns out, very hard.

I would see stunning images on X (Twitter) created by AI artists. They looked magical. Many creators generously shared their prompts. I would copy them, paste them into my tools, and hit generate.

The Goal

I wanted to create intricate etching styles like the ones I admired.

The Reality

But my results often felt flat or disjointed.

The Confusion

Copying prompts didn't teach me *why* they worked.

The result? Disappointment.

My images lacked the "soul" of the originals. They were flat, boring, or just weird. I realized that copying a prompt isn't the same as understanding it.

The "Library" Dead End

My first instinct was to build a Prompt Library. I wanted to collect thousands of high-quality prompts, on the theory that practice makes perfect.

But I quickly hit a wall.

There are already too many libraries out there.

Finding a prompt is easy; modifying it is hard.

If I found a great prompt for a "cyberpunk cat" but I wanted a "steampunk dog," I didn't know which words to change. Did I need to keep the lighting keywords? What does "octane render" actually do? I was lost in a sea of words.

The --sref Moment

The turning point came when I saw an amazing image on X. The prompt included a mysterious code: --sref 3694285212.

I pasted this into Gemini (Nano Banana) and waited for magic. Instead, the AI broke.

I realized then that AI models speak different languages.

  • --sref is specific to Midjourney.
  • Weighting syntax like (word:1.2) works for Stable Diffusion.
  • Natural language is best for DALL-E 3 or Gemini.

You can't just take a prompt from one model and force it into another.
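To make the difference concrete, here is a minimal sketch of one visual idea phrased in each model's dialect. The prompt strings are examples I invented for illustration, not tested recipes; only the --sref ID comes from the post above.

```python
# One visual idea, phrased three ways. The prompts below are
# illustrative examples, not tested recipes.

idea = "a cat in a neon-lit alley, etching style"

prompts = {
    # Midjourney: terse keywords plus proprietary flags. --sref points
    # to a style reference that only Midjourney understands.
    "midjourney": f"{idea}, intricate linework --sref 3694285212 --ar 3:2",

    # Stable Diffusion: (word:1.2) weighting boosts a token's influence.
    "stable_diffusion": f"{idea}, (intricate linework:1.2), (cross-hatching:1.1)",

    # DALL-E 3 / Gemini: plain sentences work best; flags and weighting
    # syntax get ignored, or worse, drawn into the image as literal text.
    "gemini": (
        "An etching-style illustration of a cat in a neon-lit alley, "
        "drawn with intricate cross-hatched linework."
    ),
}

for model, prompt in prompts.items():
    print(f"{model}: {prompt}")
```

Same idea, three incompatible dialects. Paste any one of these into the wrong model and you get noise.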

A New Approach: Deconstruct, Don't Collect

That is when the idea for Prompt Lab was born.

I realized I didn't need more prompts. I needed a way to reverse engineer the images I liked. I needed a tool that could:

  1. Analyze an image to find its "visual DNA."
  2. Structure the prompt into layers: Subject, Style, Lighting, and Parameters.
  3. Optimize the syntax for the specific model I was using (sketched below).
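To show what I mean by layers, here is a minimal sketch of that structure in Python. The class and field names are my own illustration, not Prompt Lab's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class PromptLayers:
    """One deconstructed image, split into independent layers."""
    subject: str    # what is depicted
    style: str      # medium and rendering style
    lighting: str   # light sources and mood
    parameters: dict = field(default_factory=dict)  # model-specific flags

    def render(self, model: str) -> str:
        """Assemble the layers into one model's dialect."""
        base = f"{self.subject}, {self.style}, {self.lighting}"
        if model == "midjourney":
            flags = " ".join(f"--{k} {v}" for k, v in self.parameters.items())
            return f"{base} {flags}".strip()
        # Natural-language models: drop the flags, keep the description.
        return f"{base}, as a single cohesive illustration."

# Swapping "cyberpunk cat" for "steampunk dog" now only touches one layer.
layers = PromptLayers(
    subject="a steampunk dog in a clockwork workshop",
    style="intricate copper-plate etching",
    lighting="warm lamplight with deep shadows",
    parameters={"ar": "3:2"},
)
print(layers.render("midjourney"))
print(layers.render("gemini"))
```

The point of the structure is that changing the subject no longer forces me to guess which of the other words are load-bearing.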

Moving Forward

Prompt Lab is my attempt to solve this problem. It is currently an MVP (Minimum Viable Product), and I know it's not perfect yet. But it is a small step towards mastering AI art instead of just guessing at it.

I built this tool to scratch my own itch, and I hope it helps you too. I would love to hear your feedback—whether it’s a feature request or just a bug report.

Let’s build better prompts, together.

Jack