You're using GPT-5 wrong! Here’s how to be ahead of 99% of users
Here's how to make the most out of GPT-5
Most users are not fully tapping into GPT-5’s potential.
Why? Because making the most out of GPT-5 requires familiarity with prompting techniques and parameter settings to maximize the quality of its outputs.
I’ve spent many hours studying OpenAI’s GPT-5 prompting guide to better understand how to make the most out of the model through both the ChatGPT web app and OpenAI Playground. In this guide, I’ll explain OpenAI’s prompting recommendations in plain English, avoiding most of the technical jargon found in the official guide.
To get weekly articles like this, subscribe 👇 After subscribing, check my welcome email to download my Python, ChatGPT, and more cheat sheets :)
#1 Optimize instruction following
GPT-5 follows instructions with great precision. That's advantageous in most cases, but if your prompt is unclear or contains contradictory instructions, the model may get confused or waste reasoning time trying to reconcile the conflicts.
For example, don’t say “Give a brief summary” and later in the same prompt say “Include all the details.”
Why is that wrong? The conflicting instructions will degrade the response, as the model won't know which one to prioritize. The prompting tip here is to always double-check your prompt for mixed messages. Remove or clarify anything that could be interpreted in multiple ways.
According to OpenAI, cleaning up ambiguities and contradictions in prompts drastically improves GPT-5’s performance.
You can use ChatGPT to find conflicts in your prompt:
Review my instructions. Point out any conflicts. Suggest the smallest edits to make the instructions consistent
Or you can use OpenAI’s prompt optimizer to review your prompt. We’ll see that in detail in the next point.
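If you'd rather run this consistency check from code instead of the ChatGPT web app, here's a minimal sketch using the OpenAI Python SDK. The model name and request shape follow OpenAI's documentation at the time of writing; treat them as assumptions and check the current docs. The API call only fires if an `OPENAI_API_KEY` is configured.

```python
# Sketch: asking GPT-5 to audit a prompt for contradictions via the API.
# Assumes the openai Python SDK; model name and parameters are assumptions
# based on OpenAI's docs at the time of writing.
import os

REVIEW_TEMPLATE = (
    "Review my instructions. Point out any conflicts. "
    "Suggest the smallest edits to make the instructions consistent.\n\n"
    "Instructions:\n{prompt}"
)

def build_review_request(prompt: str) -> dict:
    """Build the request payload for a prompt-consistency check."""
    return {
        "model": "gpt-5",
        "input": REVIEW_TEMPLATE.format(prompt=prompt),
    }

# The contradictory example from tip #1.
request = build_review_request("Give a brief summary. Include all the details.")

# Only call the API if a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.create(**request)
    print(response.output_text)
```

Separating the payload from the call like this also makes it easy to reuse the same review template across prompts.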
#2 Use OpenAI’s prompt optimizer
OpenAI has a GPT-5 prompt optimizer available in Playground. Playground is a platform designed for advanced users to choose different models, tweak model parameters, and more (in fact, we’ll use Playground from tip #3).
One of the cool things we can find in Playground is the prompt optimizer.
To use the prompt optimizer, click here and log in with your ChatGPT account. Then type or paste your prompt and click on “Optimize” to get feedback. Once your prompt is optimized, the tool will highlight the changes in blue, and the notes icons on the right will provide the reasoning behind those changes.
The tool is useful, but it’s still necessary to learn the additional prompting tips in this guide to understand the changes it makes and to recognize when those changes are necessary.
The prompting tips from this point forward can be applied through the API or the OpenAI Playground. Playground requires no coding knowledge, and I highly recommend using it whenever the ChatGPT web app falls short and you need more control over GPT-5.
#3 Control the reasoning effort
GPT-5 has a reasoning_effort parameter to control how hard the model thinks and how willingly it calls tools. The default is medium, but you should scale up or down depending on the difficulty of your task.
To control the reasoning effort, click on the setting adjustment icon in Playground.
There are 4 levels of reasoning effort:
minimal: It was introduced in GPT-5 and tells the model to do the least amount of thinking possible to get you an answer. It’s designed to be quick, and it’s ideal for deterministic, lightweight tasks (extraction, formatting, short rewrites, simple classification)
low: It engages in a bit more thought, but still heavily prioritizes efficiency. Reliable for things that require a bit of understanding but not deep, creative problem-solving. Good for standard customer support, content summarization, etc
medium: It’s the default setting. Offers a balance between performance and speed. It’s where the AI really starts to "think." The answers are more comprehensive, creative, and well-structured. Good for content creation, code generation, analysis, and complex instruction following
high: It tells GPT-5 to take all the time it needs and use as many reasoning tokens as necessary before giving you an answer. Great for tasks where accuracy is critical, such as scientific and academic research, strategic planning, and debugging difficult code. It can be slow and expensive
Keep in mind that performance at minimal reasoning effort is more sensitive to how the prompt is written than performance at higher reasoning levels.
OpenAI recommends having GPT-5, when set to minimal reasoning, outline the approach first. For example, you can say: “First, list the steps you will take to solve the problem.” Even a one-sentence plan or a few bullet points of strategy at the start of the answer can improve performance on tasks that need higher intelligence.
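If you're calling GPT-5 from code rather than Playground, reasoning effort is set per request. Here's a minimal sketch, assuming the openai Python SDK and the `reasoning.effort` parameter as described in OpenAI's GPT-5 documentation; the tasks are illustrative. The call only fires if an `OPENAI_API_KEY` is configured.

```python
# Sketch: setting reasoning_effort through the Responses API instead of
# Playground. Parameter names are assumptions based on OpenAI's GPT-5 docs.
import os

def build_request(task: str, effort: str) -> dict:
    """Build a request with an explicit reasoning effort level."""
    assert effort in {"minimal", "low", "medium", "high"}
    return {
        "model": "gpt-5",
        "input": task,
        "reasoning": {"effort": effort},
    }

# Lightweight extraction task: minimal effort keeps it fast and cheap.
extract = build_request("List every email address in the text below.", "minimal")

# Hard debugging task: high effort lets the model think as long as it needs.
debug = build_request("Find the race condition in the code below.", "high")

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    print(client.responses.create(**extract).output_text)
```

Matching the effort level to the task, as in the two requests above, is usually cheaper and faster than leaving everything on the default medium.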
#4 Control the agentic eagerness
By controlling the reasoning_effort, we can also calibrate GPT-5's agentic eagerness (how proactive the model is).
More eagerness: Encourages model autonomy and reduces how often the model asks clarifying questions or hands control back to the user.
To get more eagerness, you need to increase the reasoning_effort and use a prompt that encourages persistence and thorough task completion. Here’s a good prompt example to get more eagerness (extracted from the OpenAI guide):
- You are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user.
- Only terminate your turn when you are sure that the problem is solved.
- Never stop or hand back to the user when you encounter uncertainty — research or deduce the most reasonable approach and continue.
- Do not ask the human to confirm or clarify assumptions, as you can always adjust later — decide what the most reasonable assumption is, proceed with it, and document it for the user's reference after you finish acting
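In Playground or the API, the persistence prompt above pairs naturally with a higher reasoning_effort. Here's a minimal sketch, assuming the openai Python SDK; the instructions text is condensed from OpenAI's guide, and everything else is illustrative. The call only fires if an `OPENAI_API_KEY` is configured.

```python
# Sketch: maximizing agentic eagerness by combining persistence
# instructions with high reasoning effort. Parameter names are
# assumptions based on OpenAI's GPT-5 docs.
import os

PERSISTENCE_INSTRUCTIONS = """\
You are an agent - please keep going until the user's query is completely
resolved, before ending your turn and yielding back to the user.
Only terminate your turn when you are sure that the problem is solved.
Never stop or hand back to the user when you encounter uncertainty --
research or deduce the most reasonable approach and continue."""

request = {
    "model": "gpt-5",
    "instructions": PERSISTENCE_INSTRUCTIONS,  # system-level guidance
    "reasoning": {"effort": "high"},           # higher effort = more eagerness
    "input": "Migrate this project from Python 3.8 to 3.12.",
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    print(client.responses.create(**request).output_text)
```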
Less eagerness: By default, GPT-5 is quite thorough: it tries to gather a lot of context to ensure correct answers. You can reduce this context-gathering behavior to get narrower, faster responses by switching to a lower reasoning_effort and defining clear criteria in your prompt for how you want the model to explore the problem space.
Below are some instructions that you can add to your prompt to reduce the context gathering behavior (see complete instructions here):
- Avoid over-searching for context
- If you think that you need more time to investigate, update the user with your latest findings and open questions. You can proceed if the user confirms
- Bias strongly towards providing a correct answer as quickly as possible, even if it might not be fully correct
Note that the phrase “even if it might not be fully correct” gives the model an explicit escape hatch, making it acceptable to end the context-gathering step sooner.
#5 Control verbosity
OpenAI introduced a new API parameter in GPT-5 called verbosity, which influences the length of the model’s final response. You can now ask the model to engage in more or less reasoning using reasoning_effort, while independently adjusting the length of its final answer with verbosity.
How is that helpful?
Remember the contradictory instructions we talked about in point #1? Well, on Playground, you won’t need brittle prompt hacks like “be concise” because you can control response length directly with the verbosity parameter.
That reduces contradictory instructions and improves adherence to the actual task instructions.
There are 3 levels of verbosity: low, medium, and high. If you set verbosity to “low,” the model’s responses will be short, direct, and efficient, while “high” will give longer and more detailed answers.
Here’s a simple example for the 3 verbosity levels:
User: What’s the capital of France?
Model (low verbosity): Paris.
Model (medium verbosity): The capital of France is Paris.
Model (high verbosity): The capital of France is Paris. It’s the largest city in the country and serves as its political, cultural, and economic center.
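To reproduce the comparison above via the API, verbosity is set per request, independently of reasoning effort. A minimal sketch, assuming the openai Python SDK and the `text.verbosity` parameter as described in OpenAI's GPT-5 documentation; the call only fires if an `OPENAI_API_KEY` is configured.

```python
# Sketch: the same question at three verbosity levels. Parameter names
# are assumptions based on OpenAI's GPT-5 docs.
import os

def build_request(question: str, verbosity: str) -> dict:
    """Build a request that controls answer length via verbosity."""
    assert verbosity in {"low", "medium", "high"}
    return {
        "model": "gpt-5",
        "input": question,
        "text": {"verbosity": verbosity},
    }

requests = {
    level: build_request("What's the capital of France?", level)
    for level in ("low", "medium", "high")
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    for level, req in requests.items():
        print(f"{level}: {client.responses.create(**req).output_text}")
```

Because verbosity lives outside the prompt text, you can keep one prompt and vary only this parameter, which avoids the contradictory-instruction problem from tip #1.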
That’s it! If you found this guide useful, share it with others and consider becoming a paid subscriber to make more articles like this possible.