DevAI Resources
Reference Guide · Community Maintained · Techniques · Beginner Friendly

Prompt Engineering Guide


URL: https://www.promptingguide.ai

- 10+ prompting techniques covered
- Free: completely free resource
- CoT: chain-of-thought prompting
- Open: community maintained

What is this resource?

The Prompt Engineering Guide (promptingguide.ai) is a comprehensive, community-maintained reference on writing effective prompts for large language models. It spans everything from the basic principles of phrasing instructions clearly to advanced, research-level techniques used in production AI systems and academic papers. What distinguishes it from the official documentation provided by OpenAI or Anthropic is its focus: the official API docs explain how to call the model; this guide explains how to communicate with it effectively once the call works. It is one of the most thoroughly organized free resources on prompt engineering available.

Prompt engineering is the practice of designing inputs to a language model to reliably produce the outputs your application needs. It is not about magic phrases or hacks — it is a systematic discipline with documented techniques, each with specific use cases, tradeoffs, and limitations. As AI becomes integrated into more software products, the ability to write system prompts that produce consistent, well-structured, reliable outputs is becoming a core engineering skill alongside database design or API integration.

What's in it?

The guide is organized as a progressive curriculum from fundamentals to advanced techniques. The introductory sections establish the basic anatomy of a prompt: instruction (what you want the model to do), context (background information that helps the model), input data (the actual content to process), and output format (how you want the response structured). Many poorly-written prompts fail simply because they're missing one or more of these elements.
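As a sketch, those four elements can be assembled into one clearly delimited prompt string. The `build_prompt` helper and its exact layout are illustrative, not something the guide prescribes:

```python
# Minimal sketch of assembling a prompt from the four elements named above:
# instruction, context, input data, and output format.

def build_prompt(instruction, context, input_data, output_format):
    """Combine the four prompt elements into one clearly delimited string."""
    return (
        f"Instruction: {instruction}\n\n"
        f"Context: {context}\n\n"
        f"Input:\n{input_data}\n\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    instruction="Summarize the customer feedback below in one sentence.",
    context="The feedback comes from a beta test of a mobile banking app.",
    input_data="Love the new design, but transfers take forever to confirm.",
    output_format="A single plain-text sentence, no bullet points.",
)
print(prompt)
```

Making each element explicit like this also makes it obvious when one is missing, which is the most common failure mode described above.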

The technique sections cover the three most practically important methods for application developers.

Zero-shot prompting gives the model a task with no examples, relying entirely on its training to understand the request; it is useful for general-purpose instructions. Few-shot prompting provides 2–5 examples of the desired input-output format before the actual task, and is the single most effective technique for getting structured, consistent outputs such as JSON, formatted reports, or specific classifications. Chain-of-thought (CoT) prompting asks the model to show its reasoning step by step before giving a final answer, which dramatically improves accuracy on tasks involving math, logic, or multi-step reasoning.

Important 2025 caveat: CoT prompting behaves differently on reasoning models (OpenAI o3/o4-mini, Claude with Extended Thinking enabled), which already perform extended internal reasoning before answering. Adding "think step by step" to prompts for these models is redundant or counterproductive; instead, focus your prompt on clearly specifying the problem and the desired output format.
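As a minimal illustration of zero-shot versus zero-shot CoT on a standard model, the same question can be sent plainly or with a reasoning trigger appended. The trigger phrasing below is one common variant among several the guide discusses:

```python
# The same question, sent zero-shot and with a chain-of-thought trigger.
# On standard models the second form tends to improve multi-step accuracy.

question = (
    "A cafe sells coffee for $4 and muffins for $3. "
    "If a customer buys 2 coffees and 3 muffins, what is the total?"
)

zero_shot_prompt = question
cot_prompt = question + "\n\nLet's think step by step, then state the final answer."

print(cot_prompt)
```

Note that per the caveat above, you would send `zero_shot_prompt`, not `cot_prompt`, to a reasoning model.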

The advanced sections cover techniques like self-consistency (generating multiple reasoning paths and taking the most common answer — still highly effective for non-reasoning models), tree-of-thoughts (branching reasoning trees for complex problems), and ReAct (combining reasoning with tool-use in agentic workflows). The guide also covers Structured Outputs — a 2025 development where model providers (OpenAI, Anthropic) guarantee JSON schema compliance, reducing the need for complex output-parsing prompts. When Structured Outputs are available, you rely on the schema enforcement rather than few-shot examples to control format.
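Self-consistency can be sketched without any provider-specific code: sample several answers independently, then majority-vote. Here `sample_answer` is a stand-in for a real model call made at a nonzero temperature:

```python
from collections import Counter
from itertools import cycle

# Sketch of self-consistency: sample several reasoning paths, extract each
# path's final answer, and keep the most common one. sample_answer is a
# stand-in for a real (stochastic) model call.

def self_consistent_answer(sample_answer, n_samples=5):
    """Majority-vote over n independently sampled final answers."""
    answers = [sample_answer() for _ in range(n_samples)]
    most_common, count = Counter(answers).most_common(1)[0]
    return most_common

# Stub: pretend the model answered "42" three times and "41" twice.
fake = cycle(["42", "41", "42", "41", "42"])
print(self_consistent_answer(lambda: next(fake)))  # "42" wins 3-2
```

The tradeoff is cost: n samples means n model calls, which is why the guide frames this as worthwhile mainly for high-stakes answers on non-reasoning models.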

How is it relevant to your purpose?

Getting the API technically working is only half the challenge when building an AI-powered application. The other half is getting the model to produce outputs that are consistently useful, correctly formatted, and appropriately scoped. Poorly written prompts lead to outputs that are unpredictable, verbose, off-topic, or in the wrong format — and these failures manifest as application bugs that are hard to diagnose precisely because the model's behavior is not deterministic.

Consider a concrete example: you're building a feature that extracts action items from a meeting transcript and returns them as a JSON array. Without proper prompt engineering, the model might return plain text bullet points, include unnecessary explanation, or add items that weren't mentioned. Adding a few-shot example to your system prompt — showing the model exactly what input maps to what JSON output — often eliminates all three problems immediately. In 2026, you can also use Structured Outputs (supported across the GPT-5.4 family) to guarantee schema-compliant JSON, which complements but doesn't fully replace good prompt engineering — you still need clear instructions, good context, and the right tone for your use case. The Prompt Engineering Guide gives you the vocabulary and the specific techniques to handle these situations systematically rather than by trial and error. For any developer building AI features that need to produce structured, reliable output, this guide is essential reading.
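Here is a sketch of that action-item extractor using Structured Outputs. The request is shown as a plain dict so the example stays self-contained; the `response_format` shape follows OpenAI's JSON-schema mode, the model name follows this article, and the schema's field names (`task`, `owner`) are illustrative:

```python
# Sketch of the action-item extractor: instead of relying on few-shot
# examples to control format, a JSON Schema describes the output and the
# provider enforces compliance.

action_items_schema = {
    "type": "object",
    "properties": {
        "action_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "task": {"type": "string"},
                    "owner": {"type": "string"},
                },
                "required": ["task", "owner"],
                "additionalProperties": False,
            },
        }
    },
    "required": ["action_items"],
    "additionalProperties": False,
}

request = {
    "model": "gpt-5.4",  # model name as used in this article
    "messages": [
        {"role": "system", "content": "Extract action items from the meeting transcript."},
        {"role": "user", "content": "Dana will draft the Q3 budget by Friday."},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "action_items",
            "strict": True,
            "schema": action_items_schema,
        },
    },
}
# client.chat.completions.create(**request) would then return JSON that is
# guaranteed to match action_items_schema.
```

Even with the schema enforced, the system message still does the work of scoping what counts as an action item; that part remains prompt engineering.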

2025: Know which model type you're prompting

The single most important shift in prompt engineering in 2025 is understanding the difference between standard models and reasoning models. For standard models (GPT-5.4, Claude Sonnet 4.6, Gemini Flash), chain-of-thought prompting ("think step by step") dramatically improves results on complex tasks. For reasoning models (o3, o4-mini, Claude with Extended Thinking), the model already does extensive internal reasoning — adding CoT instructions is unnecessary and can hurt performance by interrupting the model's natural thinking process. Instead, give reasoning models clear problem descriptions and precise output requirements, and let them figure out the reasoning path themselves.
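That rule can be captured in a small helper. The model lists below are illustrative snapshots of the names used in this article and will drift over time:

```python
# Sketch of model-type-aware prompting: append a chain-of-thought trigger
# only for standard models; give reasoning models just the problem and the
# output spec. Model names follow this article and are not exhaustive.

REASONING_MODELS = {"o3", "o4-mini"}

def make_user_prompt(model, problem, output_spec):
    prompt = f"{problem}\n\nOutput: {output_spec}"
    if model not in REASONING_MODELS:
        # Standard models benefit from an explicit reasoning trigger.
        prompt += "\n\nThink step by step before answering."
    return prompt

print(make_user_prompt("o3", "What is 17 * 24?", "a single integer"))
print(make_user_prompt("gpt-5.4", "What is 17 * 24?", "a single integer"))
```

A lookup like this belongs in one place in your codebase, so that swapping the backing model does not silently leave a counterproductive CoT instruction in your prompts.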

Recommended Watch

Prompt Engineering Tutorial – Master ChatGPT & LLM Responses

A comprehensive video guide to prompt engineering techniques — covers zero-shot, few-shot, chain-of-thought, ReAct, and more with practical examples. Pairs directly with the content in the Prompt Engineering Guide website.

Few-Shot Prompting: Before & After

Few-shot prompting is the most impactful technique for structured outputs. This Python example shows how adding just two examples to a system prompt transforms a flaky response into a reliable, parseable one.

```python
from openai import OpenAI

client = OpenAI()

# ❌ BEFORE: vague instruction produces inconsistent output
bad_system = "Extract the sentiment from product reviews."

# ✅ AFTER: few-shot examples lock in the exact output format
good_system = """Extract the sentiment from product reviews.
Return ONLY a JSON object with this exact structure.

Example 1:
Review: "Works great, exactly as described!"
Output: {"sentiment": "positive", "confidence": "high"}

Example 2:
Review: "Stopped working after two days, very disappointed."
Output: {"sentiment": "negative", "confidence": "high"}

Example 3:
Review: "It's okay I guess, nothing special."
Output: {"sentiment": "neutral", "confidence": "medium"}

Now classify the review the user sends."""

def classify_review(review_text):
    response = client.chat.completions.create(
        model="gpt-5.4-mini",
        messages=[
            {"role": "system", "content": good_system},
            {"role": "user", "content": review_text},
        ],
        temperature=0,  # temperature=0 maximizes consistency for structured tasks
    )
    return response.choices[0].message.content

print(classify_review("Battery lasts forever, super happy with this purchase!"))
# Output: {"sentiment": "positive", "confidence": "high"}
```