Five primitives that make prompt engineering systematic.
Structure Extraction
Pulls out Role, Goal, Audience, Context, and Constraints from any freeform input.
Format Standardization
Enforces a consistent JSON schema — versionable, auditable, API-ready.
Intelligent Inference
Fills missing fields with safe, sensible defaults when your prompt is sparse.
Execution Prevention
Transforms only — never fulfills the request. Separation of concerns built in.
Zero Dependencies
Pure text. Copy-paste into ChatGPT, Claude, Gemini, or any LLM chat.
No setup. No API key. No install.
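The five primitives above converge on one schema. A rough sketch of its shape (the authoritative field list lives in Metaprompt.md; the field names here mirror the worked example on this page, so treat anything beyond Role, Goal, Audience, Context, and Constraints as an assumption):

```python
# Hypothetical sketch of the transformation contract's shape.
# Field names follow the worked example on this page; the
# authoritative schema lives in Metaprompt.md.
TEMPLATE_FIELDS = {
    "Role": "Who the model should act as",
    "Goal": "What the prompt must accomplish",
    "Audience": "Who consumes the output",
    "Context": "Background the model needs",
    "OutputFormat": "text | json | markdown",
    "CreativityLevel": "Low | Medium | High",
    "Style": "Tone and voice guidance",
    "Constraints": {"Positive": [], "Negative": []},
}
```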
Copy the transformer template
Grab the JSON template from Metaprompt.md. It defines the transformation contract — what to extract and where to put it.
Replace <USER_INPUT_HERE> with your prompt
Drop in any raw natural language request — "write me a performance review", "summarize this paper", "generate a SQL migration". Messy is fine.
Paste into your LLM. Get structured JSON back.
The model returns a fully populated template. Review, tweak, version it. Reuse it across chats, teams, or APIs. One transformation — infinite reuse.
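The workflow is pure copy-paste, but the steps above can also be sketched mechanically. A minimal sketch, assuming the template is saved locally (the function name and file handling here are illustrative, not part of the template):

```python
from pathlib import Path

def build_transformer_prompt(user_request: str,
                             template_path: str = "Metaprompt.md") -> str:
    """Fill the transformer template's placeholder with a raw request."""
    template = Path(template_path).read_text(encoding="utf-8")
    # Step 2: drop the raw natural-language request into the placeholder.
    return template.replace("<USER_INPUT_HERE>", user_request)

# Step 3 stays manual: paste the returned string into any LLM chat,
# and the model replies with the populated JSON template.
```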
One casual sentence in. Production-ready JSON template out.

INPUT
"Write an email to my manager asking for a day off next Friday."
OUTPUT
{
  "Role": "Email Assistant",
  "Goal": "Draft a professional leave request",
  "Audience": "Workplace manager",
  "OutputFormat": "text",
  "CreativityLevel": "Low",
  "Style": "polite, professional, concise",
  "Constraints": {
    "Positive": ["Use formal tone", "Be brief"],
    "Negative": ["No casual slang"]
  }
}
The transformer separates transformation from execution — it never writes the email. Take the JSON, paste it into a new chat as your structured prompt, and get consistent, predictable output every time.
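Because the output is plain JSON, the reuse step can also be automated. A minimal sketch that renders a populated template back into a prompt string (this rendering format is an assumption; the transformer itself prescribes only the JSON):

```python
import json

def json_to_prompt(template_json: str) -> str:
    """Render a populated JSON template as a structured prompt string."""
    spec = json.loads(template_json)
    lines = []
    for key, value in spec.items():
        if isinstance(value, dict):
            # Nested fields, e.g. Constraints with Positive/Negative lists.
            for sub, items in value.items():
                lines.append(f"{key} ({sub}): {'; '.join(items)}")
        else:
            lines.append(f"{key}: {value}")
    return "\n".join(lines)
```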
Star the repo, copy the template, or jump straight into the Baxter GPT to try it live.