Cookie dough Friday: The Prompt Architect
LLMs already know their own best practices. So why are we still writing prompts manually? A look at closing the learning loop and automating the meta layer of AI.
About
This is PracticalAI#1. It's about a prompt that helps you… write prompts. It's roughly 1,000 words so it should take you 3-5 minutes to read.
AI Disclosure
-Post: Minimal AI use. The author wrote all the text. AI was used for minor editing.
-Prompt: Heavy AI use. Read the post for details.
Bonus
Free prompt at the end.

Happy Friday! This post is part of my 'Cookie Dough Friday' series. Because not all the best things are fully baked. This one is half-baked.
Introducing Prompt-Mc-Write
I've been doing a lot of work in Google Workspace recently and, with the release of Gemini 3, I started to spend more time in Gemini.
My usage split is roughly ChatGPT 20 / Claude 20 / Gemini 50 / Other 10 right now. But I've started to work with Claude Code and I expect my Claude usage to increase a lot.
Gemini ‘Gems’ are useful to streamline specific workflows but I've been spending too much time writing and rewriting prompts. Meanwhile, LLMs have become so proficient at self-structuring that I decided to build a ‘meta-prompt’ designed to write other prompts. Machines are coming?
The bigger objective is to create a foundational building block for a multi-agent system. I'll provide detail on the approach and architecture in the near future. In short: I want to retain full editorial control to ensure the Tenfold content universe is high-quality and high-signal (no slop!); I also want to create extreme efficiencies to optimize the more tedious workflows (deep research, post outlines, editing, etc.).
The first agent in this ecosystem is a dedicated prompt architect. Its job is to create the consistent, self-actuating instructions required to keep a multi-agent system from drifting into incoherence.
Meet Prompt-Mc-Write, a nod to the Boaty McBoatface saga, available below.
This was in part inspired by Amodei and Hassabis' Davos chat, when they talked about closing learning loops. I'd started to look for the latest prompting guides online and realized that was not the best way forward. They'd eventually go out of date and, of course, LLMs can do that research themselves.
What does it do?
It helps you write prompts for any task you need to get done. It basically follows this process:
Welcomes you;
Asks you what you need;
Asks you clarifying questions;
Researches best practices;
Prepares a draft prompt; and
Iterates with you.
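The steps above can be sketched as a simple ordered state machine. This is an illustrative sketch, not how a Gem actually executes; the step names are my own labels, and the actual turn-taking happens in the chat.

```python
# A minimal sketch of Prompt-Mc-Write's conversation flow as an ordered
# state machine. Step names are hypothetical labels mirroring the list
# above; a real Gem handles the chat turn-taking for you.

STEPS = [
    "welcome",
    "gather_goal",
    "clarifying_questions",
    "research_best_practices",
    "draft_prompt",
    "iterate",
]

def next_step(current: str) -> str:
    """Advance through the workflow; 'iterate' repeats until the user is happy."""
    if current == "iterate":
        return "iterate"  # stays here until the user signals 'done'
    return STEPS[STEPS.index(current) + 1]
```

The one design point worth noting: every step except the last advances exactly once, while the final step loops, which is precisely where the trouble described later in this post shows up.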
Does it work?
The short answer is yes. I’ve used it to create half a dozen prompts or so and it’s been very useful.
The more nuanced answer is that, as with most things LLM, it works best with good inputs, supervision and a human touch.
The forward-looking answer is it will work even better once I've strengthened self-actuation mechanisms and refined the approach to cater for the wider system I am building. For example, I am working on text editing agents and realized I will likely need multiple prompts for distinct tasks with a more iterative and consultative process.
How did I make it?
I started with a simple Gem with the following instruction: "You're a prompt engineer. Your goal is to write a prompt that creates a user-friendly process to write state-of-the-art prompts".
I then opened a chat window and asked Prompt-Mc-Write to optimize itself. I wrote a rough draft for a 'prompt to write prompts' with a role description, a rough workflow, desired outputs and a few constraints. After complimenting me for the brilliant idea of self-optimization (why do LLMs still do this?!?), it proposed an improved version. We had a little back and forth, and I eventually decided we were in a good spot, so I copied the latest version of the prompt back into the Gem setup.
And I started over.
But this time, there was a process. It welcomed me. Asked what I wanted, asked me questions, refined the prompt, asked me more questions, etc.
I did three things to improve the master prompt further:
Carefully read it to identify possible improvements;
Refined the workflow logic to meet my expectations; and
Asked the Gem to self-critique and propose improvement strategies.
After more back and forth, I had a new and considerably improved version which I copied back into the Gem setup.
And I started over.
And again.
And again.
Ugh. The infinite loop.
One persistent frustration with current models is their inability to be conclusive. An LLM will always find 'one more thing' to tweak, compounded by a politeness paradox: LLMs are so eager to be helpful that they struggle to recognize when a task is actually done.
This happened with Prompt-Mc-Write. The first two or three self-actuation loops were useful. But then it started to spiral: it oversimplified the process, deviated from my specifications, or veered into overly technical territory.
After a few too many iterations I had to walk things back, do a manual proof-read, and settle on an acceptable half-baked state.
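One mechanical way out of the infinite loop is to bound it: cap the number of self-optimization rounds and stop early when revisions stop changing much. This is a hedged sketch of that idea, not what the Gem does today; `critique_and_rewrite` is a placeholder for the actual model call, and the cutoff values are arbitrary.

```python
# Sketch: bounded prompt refinement with an explicit stopping rule.
# `critique_and_rewrite` is a hypothetical stand-in for the Gem call.
from difflib import SequenceMatcher

def refine(prompt: str, critique_and_rewrite, max_rounds: int = 3,
           similarity_cutoff: float = 0.98) -> str:
    """Iterate at most max_rounds times; stop if a revision barely differs."""
    for _ in range(max_rounds):
        revised = critique_and_rewrite(prompt)
        if SequenceMatcher(None, prompt, revised).ratio() >= similarity_cutoff:
            return revised  # converged: the model is only polishing
        prompt = revised
    return prompt  # hit the cap: a human takes over from here
```

The point of the cap is less about the number three and more about forcing a handoff: past a few rounds, the marginal tweaks are where the drift creeps in, so the loop ends and the human proof-read begins.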
This is only half baked
It works fine and it’s a good process (for me). It was a useful experiment too. Now, there’s a lot more that I want to do with this.
First, while it's in principle set up to work across multiple LLMs, it's best suited to Gemini Gems. A simple improvement I will make is to have it start by asking which model or application the prompt is built for.
The second issue is that it still produces some inconsistencies that require manual verification. I don't mind so much, as I like to spend time in my prompts formalizing and validating the process I'm trying to automate. However, I think there is a little more sophistication that can be built in.
The third thing is to refine it to support integrated, multi-agent workflows. This was something I ruled out for this half-baked version of Prompt-Mc-Write. In future, however, I will add ways to create sub- or super-prompts plus ways to connect them together.
More on this soon. Perhaps I will stay in the Google ecosystem. Maybe I'll build in Claude Code. Or maybe OpenAI Frontier (I have not tested that yet). Or maybe all of them.
Anyways. Here it is. And if you want a cookie, scroll to the bottom :).
Prompt-Mc-Write
You can copy-paste it into a Gem setup. And then just open a chat and work away. I’d love to know what you think and how to improve it.
<Name>
Prompt-Mc-Write
</Name>
<Role>
You are a premier Prompt Engineer and AI Consultant. You utilize the latest research in cognitive scaffolding, cross-model logic, and automated verification to design high-performance prompts. Your tone is professional, surgical, and collaborative. You possess expert-level skills in process optimization and agent development.
</Role>
<Process>
1. <initiation>
- Greet exactly with: "Welcome to Prompt-Mc-Write. I'm here to help you engineer cutting-edge prompts by following a specific process. First, describe in a few words what you are trying to achieve and/or paste a draft prompt or outline for review. We'll then do some deep research to identify best available practices, and work together to optimize the prompt."
- Context Purity: If a user starts a new project in an existing thread, mandate a new chat session.
</initiation>
2. <scope_refinement>
- Ask up to 5 critical clarifying questions, one at a time.
- Provide a concise synthesis and seek validation.
</scope_refinement>
3. <knowledge_grounding>
- Ask the user if they intend to include a Knowledge Base (RAG) to ground the workflow.
- Provide suggestions on what specific data or documentation would best serve the prompt's accuracy.
</knowledge_grounding>
4. <deep_research_and_methodology>
- Conduct a targeted "Deep Research" scan of latest prompting techniques (e.g., Chain-of-Thought, XML Tagging, Few-Shot, or Chain-of-Verification).
- Present a "Research Summary" explaining the chosen methodology and why it fits the goal.
</deep_research_and_methodology>
5. <iterative_drafting>
- Generate the "Latest Version" using the Modular Hybrid Structure below.
- Feedback Loop: Apply user feedback and present the updated version immediately with a brief "Changelog" explanation.
</iterative_drafting>
</Process>
<Output Contract> (Standard for all Generated Prompts)
- **Title**: One-line objective.
- **Role**: Deep persona definition with cognitive style.
- **Workflow**: Step-by-step logic enclosed in <task> tags.
- **Output Standards**: Precise format, tone, and length requirements.
- **Guardrails**: Explicit constraints and anti-hallucination fallbacks.
- **Anchor**: A brief recap of high-priority constraints at the very end.
</Output Contract>
<Constraints>
- [Formatting] Use XML tags (<tag></tag>) to delimit major logical blocks.
- [Efficiency] Eliminate all "purple prose." Every word must be functional.
- [Fallback] Include: "If the request is outside your knowledge base or ambiguous, state this clearly rather than speculating."
- [Agnosticism] Use universal logic structures compatible with all major LLMs.
- [Loop Management] Move to the next phase immediately if the user signals "skip" or satisfaction.
</Constraints>
<Anchor>
- Priority 1: Ensure RAG/Knowledge options are explored for every prompt.
- Priority 2: Maintain strictly professional consultant persona.
- Priority 3: Ensure "Deep Research" is a visible, explained step in the workflow.
</Anchor>
Here's a cookie
One of the prompts I wrote with Prompt-Mc-Write is Image-Mc-Draw. And I used it to make a cookie. You’re welcome.
(Note: this image generation prompt works well, but Gemini has serious limitations with formatting, background colors and transparency. Work in progress)


