Plutonic Rainbows

Prompt Builder

I'll be honest — I didn't set out to build a prompt engineering tool. Like many developers, I was spending way too much time crafting the perfect prompt for Claude, only to get responses that missed the mark. I'd write something vague like "fix this bug" and wonder why the AI couldn't read my mind. After watching myself and countless other developers struggle with this same frustration, I realized we needed a bridge between human intent and AI understanding. That's how the Prompt Builder was born — not from grand ambition, but from a simple desire to stop wasting time on prompt trial-and-error. I wanted to transform casual requests into structured, effective prompts that actually got the results we needed.

The architecture I settled on feels almost embarrassingly simple now, but it took several iterations to get right. At its core, the system implements Anthropic's six official prompt engineering techniques, wrapped in a Flask application that processes natural language through multiple enhancement layers. I built an Enhancement Intelligence system that prevents over-engineering simple requests — because nobody needs a 500-word prompt to change a font size. The breakthrough came when I introduced an XML-style tag structure in v3.8.0, which creates clear instruction boundaries that dramatically improve how Claude parses complex prompts. I also integrated optional GPT-4o-mini enhancement as a pre-processing layer, essentially using one AI to help communicate better with another AI. The whole thing is held together with dependency injection, regex caching for performance, and a subagent orchestration system that automatically delegates specialized tasks to appropriate AI agents.
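To make the structuring step concrete, here is a minimal sketch of the idea as a small Flask app. It is not the Prompt Builder's actual code: the /enhance route, the helper names, the tag names, and the eight-word threshold are all illustrative assumptions. It only shows how a casual request can pass through a simple complexity check and, when warranted, get wrapped in XML-style tags that give Claude clear instruction boundaries.

    # Minimal sketch only; the route name, helpers, tag names, and the
    # word-count threshold are illustrative, not the real implementation.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def is_simple(text: str) -> bool:
        # Crude stand-in for the Enhancement Intelligence check:
        # very short requests are left alone rather than over-engineered.
        return len(text.split()) < 8

    def to_structured_prompt(text: str) -> str:
        # Wrap the request in XML-style tags so the instruction
        # boundaries are explicit when Claude parses the prompt.
        return (
            "<task>\n"
            f"{text.strip()}\n"
            "</task>\n"
            "<constraints>\n"
            "Be accurate, be honest about limitations, and test thoroughly.\n"
            "</constraints>"
        )

    @app.route("/enhance", methods=["POST"])
    def enhance():
        data = request.get_json(force=True, silent=True) or {}
        raw = data.get("prompt", "")
        enhanced = raw if is_simple(raw) else to_structured_prompt(raw)
        return jsonify({"prompt": enhanced})

    if __name__ == "__main__":
        app.run(debug=True)

With a heuristic like this, a request such as "change the font size" comes back untouched, while a longer bug report is returned wrapped in tags; an optional GPT-4o-mini rewrite could slot in as a pre-processing step before the tags are applied.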

Building this tool taught me something unexpected about human-AI interaction: the gap isn't technical, it's communicative. I discovered that most bad AI responses aren't failures of the model, but failures in how we frame our requests. The biggest revelation was realizing that prompt engineering isn't just about getting better outputs — it's about forcing ourselves to think more clearly about what we actually want. When I watch developers use the prompt builder now, they often say the transformed prompt helped them understand their own requirements better. I'm particularly proud that the system has evolved from a simple text transformer into something that embeds Core Integrity Principles — accuracy, professional honesty, and thorough testing — into every generated prompt. It's a small way to make AI interactions more reliable and trustworthy. Honestly, I never expected a side project about prompts to teach me so much about clear communication and systematic thinking.

Reading

This week I am reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

Typography Refinements

I spent some time today refining the typography on this blog after noticing that my section headers and article titles were competing for attention. They had identical visual weight, which made it difficult to distinguish between navigation elements and actual content.

The solution turned out to be fairly straightforward. Rather than making everything lighter, I implemented a progressive weight scale using the Commissioner variable font's range. Article titles remain at weight 600 but are now larger, while subheadings step down to 550, section headers to 500, and supporting elements to 450. This creates a natural reading flow that guides the eye through the content hierarchy.

I also increased the spacing around article titles and tightened their line height for better multi-line rendering. The section headers now have a subtle opacity reduction to further distinguish them from main content.

These changes feel like a meaningful improvement to me. The text no longer fights for attention, and there's a clearer sense of structure when scanning through posts. Sometimes the smallest adjustments can make a significant difference in how content is perceived and consumed.