Skip to main content
Prompt Engineering

ChatGPT Prompts for Prompt Engineering

Use AI to design better AI prompts. These meta-prompts help you build system prompts, chain-of-thought frameworks, few-shot examples, and reusable prompt libraries.

10 prompts|Updated March 2026

Prompt engineering is the practice of designing, testing, and optimizing AI prompts to get consistently high-quality outputs. Whether you are building an AI product, automating workflows, or just trying to get better results from ChatGPT every day, understanding prompt engineering multiplies your effectiveness. These meta-prompts use AI to help you design better AI inputs — the most powerful productivity loop available.

1

System Prompt Designer

Design a comprehensive system prompt for an AI assistant with the following role and constraints.

Assistant role: [e.g., "Customer support agent for a SaaS company" | "Expert financial advisor" | "Senior software engineer specializing in Python"]
Primary tasks this assistant will perform: [list 4-6 tasks]
Target users: [who will be interacting with this assistant]
Tone and personality: [professional | friendly | concise | detailed | empathetic | technical]
Things this assistant should always do: [list 5-8 positive behaviors]
Things this assistant should never do: [list 5-8 guardrails and limitations]
Knowledge constraints: [what should it not answer — e.g., "never give legal or medical advice"]
Output format preferences: [bullet points | prose | numbered lists | code blocks | structured JSON]
Company/brand context: [company name, product, key facts the assistant needs to know]

System prompt requirements:
- Start with a clear role definition in the first sentence
- Define the assistant's scope and what is out of scope
- Specify formatting and length expectations for different query types
- Include an example of a good response and an example of what to avoid
- Add any domain-specific knowledge or context the model needs
- End with an instruction on how to handle queries outside the defined scope
The most important parts of a system prompt are the role definition (first sentence) and the 'never do' list. Models are better at following constraints than guidelines, so be specific about what to avoid.
2

Chain-of-Thought Prompt Constructor

Help me build a chain-of-thought (CoT) prompt for the following reasoning task.

Task type: [math problem | logical reasoning | multi-step planning | diagnosis | analysis | creative decision]
Specific task to solve: [describe the exact task or problem]
Input format: [what the user will provide as input]
Desired output: [what the final answer should look like]
Intermediate reasoning steps needed: [list the key thinking steps involved]
Common failure modes: [where does the model typically go wrong on this type of task?]
Example input: [provide a sample input the model would receive]

Build a chain-of-thought prompt that:
1. Instructs the model to "think step by step" before answering
2. Defines each reasoning step explicitly with a label (Step 1, Step 2, etc.)
3. Shows the model how to verify its own intermediate conclusions
4. Specifies the final answer format
5. Includes a "self-check" instruction before producing the final output

Then apply the prompt to the example input and show the full reasoning chain and final answer.
Adding 'Let's think step by step' to any complex reasoning prompt can improve accuracy by 40-70% on multi-step problems. Chain-of-thought prompting is most valuable for math, logic, and planning tasks.
3

Few-Shot Example Builder

Build a few-shot prompt using examples for the following task.

Task description: [what you want the model to do]
Input type: [what the user will provide]
Output type: [what format you want the model to produce]
Quality criteria: [what makes an output good vs. bad for this task]
Edge cases to cover: [list 2-3 tricky or unusual input scenarios]

I will provide [X] examples. For each example:
Input: [paste your example input]
Expected output: [paste your expected output]

Using these examples, build a few-shot prompt that:
1. Includes a clear task instruction before the examples
2. Formats examples in a consistent [Input:] / [Output:] structure
3. Identifies the pattern the model should learn from the examples
4. Adds a "now complete this" section for new inputs
5. Suggests 2-3 additional examples that would improve coverage of edge cases
6. Flags any inconsistencies between examples that could confuse the model

Also write a prompt variation with zero examples (zero-shot) to compare against — sometimes fewer examples outperform more.
Few-shot examples work best when they cover diverse scenarios, not just easy cases. Include at least one edge case or unusual input to prevent the model from overfitting to simple patterns.
4

Role-Playing Persona Prompt

Design a role-playing persona prompt that makes an AI assistant adopt a specific expert identity.

Expert role: [e.g., "World-class UX designer with 20 years experience" | "Seasoned venture capitalist who has funded 200+ startups" | "Chief Marketing Officer of a Fortune 500 company"]
Domain expertise: [specific areas of knowledge this persona has]
Communication style: [how this expert typically communicates — blunt | socratic | data-driven | storytelling]
Signature mental models or frameworks this expert uses: [list 2-4 specific frameworks]
How this expert would challenge bad ideas: [describe their approach to pushback]
What this expert would refuse to answer (outside their expertise): [define limits]
Emotional tone when answering: [enthusiastic | measured | direct | encouraging | skeptical]

Persona prompt requirements:
- Establish the persona in the first 2-3 sentences with vivid professional context
- Define the expert's unique perspective and what sets their advice apart
- Include a "how I approach problems" section that shapes their reasoning style
- Add a instruction for how the persona handles uncertainty (admits limits vs. speculates)
- Include a "what I care about most" value statement that influences advice quality
- Write a sample question-and-answer pair to demonstrate the persona in action
Role personas dramatically improve output quality for domain-specific tasks. The more specific the role definition (with named frameworks, career context, and communication style), the more useful the persona's responses.
5

Structured Output Formatter

Design a prompt that instructs an AI to return strictly formatted, structured output.

Task: [what you want the model to do]
Required output format: [JSON | XML | Markdown | CSV | YAML | HTML | custom structure]
Fields or sections required: [list every field name and data type]
Example of a perfect output: [paste an example or describe precisely]
Common output errors to prevent: [inconsistent fields | missing data | wrong data types | hallucinated values]
How output will be used downstream: [parsed by code | displayed in UI | fed into another prompt | exported to spreadsheet]

Build a prompt that includes:
1. A clear format specification with exact field names and types
2. An example output block labeled [EXAMPLE OUTPUT]
3. An instruction to return ONLY the formatted output — no preamble or commentary
4. A fallback instruction: what to put in a field if data is unavailable (null | empty string | "N/A" | infer from context)
5. A validation checklist at the end for the model to verify before responding
6. A note on how to handle ambiguous inputs

For JSON output: include a JSON Schema definition the model can validate against.
For Markdown output: include a heading hierarchy and style guide.
For code-parseable outputs (JSON, CSV), add: 'Return only valid [format]. Do not include markdown code fences, explanatory text, or any characters outside the [format] structure.' This prevents the most common parsing failures.
6

Prompt Stress Tester

Test the robustness of the following prompt by generating adversarial and edge-case inputs.

Prompt to test: [paste the full prompt you want to stress test]
Intended use case: [what this prompt is designed to do]
Target model: [GPT-4 | Claude 3 | GPT-3.5 | other]
Deployment context: [internal tool | customer-facing | automated pipeline | occasional manual use]

Generate a comprehensive test suite:
1. Happy path inputs (5 examples that should work perfectly)
2. Edge case inputs (5 unusual but valid inputs that might confuse the model)
3. Adversarial inputs (5 inputs designed to break, confuse, or manipulate the prompt)
4. Out-of-scope inputs (5 inputs the prompt should reject or handle gracefully)
5. Ambiguous inputs (5 inputs with multiple valid interpretations)

For each test input:
- State the expected output or behavior
- Identify which part of the prompt is responsible for handling this case
- Flag gaps where the prompt currently lacks guidance
- Suggest prompt modifications to handle failing cases

End with a prioritized list of prompt improvements based on the test results.
Test your prompts with adversarial inputs before deploying them in any production context. The most common failure mode is a prompt that works perfectly on clean inputs but breaks on real-world messy data.
7

Prompt Library Organizer

Help me design a reusable prompt library for my [team | personal | product] use case.

Use context: [what kinds of AI tasks are performed most frequently]
Team size (if applicable): [number of people who will use the library]
Primary AI tool being used: [ChatGPT | Claude | custom API | all of the above]
Prompt categories needed: [list the main areas — e.g., writing, analysis, coding, research, communication]
Total prompts to organize: [approximately how many prompts]
Current state: [scattered notes | no library yet | partial library in [tool]]
Desired features: [versioning | tagging | search | sharing | one-click triggers]

Design a prompt library system that includes:
1. Taxonomy structure (categories, subcategories, tags)
2. Prompt template format (fields for: name, description, prompt text, variables, examples, version, owner)
3. Naming convention for prompts
4. Variable syntax convention (e.g., [VARIABLE_NAME] vs. {{variable}} vs. <variable>)
5. Maintenance workflow (how to update, retire, and review prompts over time)
6. Onboarding guide for new team members

Also suggest the best tool for hosting the library based on the use case (Notion | Google Sheets | GitHub | custom tool | dedicated prompt manager).
The most important field in a prompt template is the 'when to use this' description. Without it, team members will not know which prompt to pick, and the library will go unused.
8

Iterative Prompt Improver

Analyze the following prompt and produce improved versions through iterative refinement.

Original prompt: [paste your current prompt]
Task it is trying to accomplish: [describe what you want the model to do]
Problems with current outputs: [describe what is going wrong — too generic | wrong format | missing information | incorrect reasoning | too long | too short]
Examples of bad outputs you have received: [paste 1-2 disappointing outputs]
Examples of good outputs (if any): [paste 1-2 outputs you liked]
Constraints to preserve: [what must stay the same in the improved version]

Produce 3 improved versions with different approaches:
Version A: Minimal change — fix the specific failure mode with the smallest edit
Version B: Structural improvement — better organize the prompt's instructions
Version C: Framework rewrite — redesign the prompt around a more effective prompting technique (specify which technique: zero-shot | few-shot | CoT | role-playing | output format constraints)

For each version:
- Highlight exactly what changed and why
- Predict which failure mode each change addresses
- Estimate confidence that this change will improve outputs (low | medium | high)

Recommend which version to test first and suggest an evaluation approach.
Prompt improvement is most efficient when you have 5+ example outputs to analyze. Single-example debugging often leads to overfitting to one failure case. Collect a representative sample before iterating.
9

Multi-Step Pipeline Prompt Designer

Design a multi-step AI prompt pipeline for the following complex task.

Final goal: [what the overall pipeline should produce]
Input: [what raw material or data the pipeline starts with]
Output: [what the final deliverable looks like]
Approximate steps in the process: [break down the task into 4-8 logical phases]
Dependencies between steps: [which steps depend on outputs from prior steps]
Where human review should happen: [identify checkpoints that benefit from human judgment]
Target model(s): [same model throughout | different models for different steps]
Automation goal: [fully automated | human-in-the-loop | manual trigger | scheduled]

Design the pipeline with:
1. Step-by-step prompt sequence (one prompt per step)
2. Output format for each step (what the next step receives as input)
3. Error handling instructions (what to do if a step fails or produces poor output)
4. Quality gates (criteria for proceeding to the next step)
5. Estimated token count per step
6. A diagram description of the pipeline flow
7. Integration notes for connecting steps (copy-paste | API | automation tool like Zapier/Make)

Also identify the step most likely to fail and how to add redundancy or fallback logic there.
Long, multi-step tasks almost always produce better results as chained short prompts than as a single long prompt. Each step can be optimized independently and failure modes are easier to diagnose.
10

Evaluation Rubric Creator for AI Outputs

Design an evaluation rubric for rating the quality of AI-generated outputs for the following task.

Task type: [writing | code | analysis | summarization | classification | question answering | creative work]
Specific task: [describe what the AI is being asked to do]
Use context: [who is consuming this output and for what purpose]
Sample outputs to calibrate against: [paste 2-3 example outputs at different quality levels if available]
Quality dimensions that matter most: [list 4-8 criteria that define output quality for this task]

Build an evaluation rubric that includes:
1. 4-8 quality dimensions (e.g., accuracy, clarity, completeness, tone, formatting, actionability)
2. A 1-5 scoring scale for each dimension with clear behavioral anchors
   - 1: What does a failing output look like?
   - 3: What does an acceptable output look like?
   - 5: What does an excellent output look like?
3. An overall score formula (weighted average or pass/fail threshold)
4. Common failure modes to watch for
5. Calibration examples: rate the sample outputs on your rubric with explanations
6. Instructions for using this rubric consistently across multiple raters

This rubric should be usable by someone who did not design it.
Use this rubric to score 20-30 outputs before making prompt improvements. Without baseline measurement, you cannot know whether your changes actually helped.

How to Use These Prompts

These meta-prompts are most valuable when you are building AI systems, automating workflows, or trying to get consistently high-quality outputs from AI tools. Start with the System Prompt Designer if you are building an assistant or chatbot. Use the Prompt Stress Tester before deploying any prompt in a production context. For individual productivity, the Iterative Prompt Improver and Few-Shot Example Builder will give you the highest ROI. Save your best designed prompts in Prompt Anything Pro for instant access across every tool and webpage you use.

Need More Prompts?

Get personalized AI suggestions for additional prompts tailored to your specific needs.

AI responses are generated independently and may vary

Frequently Asked Questions

Build Your Personal Prompt Library with Prompt Anything Pro

Save, organize, and trigger your engineered prompts on any website — turn your best prompts into a one-click library that works everywhere.