Prompt Engineering Is Slowly Dying
Every time OpenAI upgrades its models, learning prompt engineering becomes less relevant.
OpenAI just released a new model, and it gives me even more reason to believe that learning prompt engineering is becoming less relevant.
I’m not saying prompt engineering is already dead (I explain why later in this article), but things have changed since OpenAI released ChatGPT in 2022. Back then, learning prompting techniques was a must to get better responses. Today, ChatGPT doesn’t need as many instructions as it used to.
Here’s how OpenAI models are doing prompt engineering for you.
OpenAI’s o1 does Chain of Thought Prompting for you
OpenAI’s o1 stands out for its reasoning capabilities. It “thinks before it answers.”
How is this possible? How can an LLM think before responding to users?
Thanks to a prompting technique called chain of thought, which induces the model to produce intermediate reasoning steps before giving the final answer to a problem. Here’s an example.
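To make it concrete, here’s a minimal sketch of a chain-of-thought prompt using the OpenAI Python SDK (the model name and the questions are placeholders I picked for illustration): the prompt includes one worked example whose answer spells out its intermediate steps, so the model imitates that style for the new question.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Chain-of-thought prompting: the example answer spells out the intermediate
# reasoning steps, so the model imitates that style for the new question.
prompt = """Q: A cafe sold 23 coffees in the morning and twice as many in the afternoon. How many coffees did it sell in total?
A: In the afternoon it sold 2 x 23 = 46 coffees. In total it sold 23 + 46 = 69 coffees. The answer is 69.

Q: A jacket costs $120 after a 25% discount. What was the original price?
A:"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works for this illustration
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```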
In the past, ChatGPT users would type “Think step by step” to nudge the model into producing this reasoning on its own.
With OpenAI’s o1, you don’t need to write out the chain of thought yourself or type “Think step by step.” The new model does it for you.
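For comparison, here’s a sketch of the same question sent to an o1-class model (assuming you have API access to one, such as o1-preview): you send only the question, with no worked examples and no reasoning instructions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# With an o1-class model the chain of thought happens internally,
# so the prompt is just the question itself.
response = client.chat.completions.create(
    model="o1-preview",  # assumption: you have access to an o1-class model
    messages=[
        {"role": "user", "content": "A jacket costs $120 after a 25% discount. What was the original price?"}
    ],
)
print(response.choices[0].message.content)
```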
Why is this important? This often leads to more accurate results.
Here’s how OpenAI describes how o1 uses chain of thought:
Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working. This process dramatically improves the model’s ability to reason.
OpenAI’s o1 isn’t the only model with a prompting technique built in. Here are other examples.
DALL-E 3 does Image Prompting for you
Midjourney and DALL-E are two of the most widely used AI art tools for image generation. But there’s something DALL-E 3 can do for you that Midjourney can’t.
Image prompting.
Creating images in any AI art tool can be as simple as typing “image of two dogs playing in the park,” but when it comes to more complex images, text-to-image systems tend to ignore parts of the description, forcing users to learn prompt engineering.
This changed when OpenAI upgraded DALL-E. With DALL-E 3, we can create images using plain English, and when you use it inside ChatGPT, it rewrites your prompts to generate better images.
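You can see this rewriting outside ChatGPT too. Here’s a minimal sketch against the Images API using the OpenAI Python SDK (assuming an API key is configured); for dall-e-3 the response should include a revised_prompt field with the expanded prompt the model actually used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# DALL-E 3 expands short prompts into richer descriptions before generating.
response = client.images.generate(
    model="dall-e-3",
    prompt="image of two dogs playing in the park",
    size="1024x1024",
    n=1,
)

print(response.data[0].revised_prompt)  # the expanded prompt DALL-E 3 actually used
print(response.data[0].url)             # URL of the generated image
```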
Here’s the prompt and image I got for “image of two dogs playing in the park.”
I have to admit that whenever Midjourney can’t understand my prompts, I switch to DALL-E 3 because you don’t need to create complex prompts to get the image you want.
GPT Builder does Role Prompting for you
Role prompting is a technique that was widely used when ChatGPT launched. It involves assigning a role to the model so it acts accordingly and produces better responses. Here’s an example.
Act as a job interviewer. I’ll be the candidate and you’ll ask me interview questions for the X position …
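Under the hood, the same technique boils down to a system message. Here’s a minimal sketch using the OpenAI Python SDK (the model name and the hypothetical data analyst role are placeholders standing in for the “X position” above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role prompting: the system message assigns a persona for the model to adopt.
messages = [
    {
        "role": "system",
        "content": (
            "Act as a job interviewer for a data analyst position. "  # hypothetical role
            "Ask one interview question at a time and wait for my answer."
        ),
    },
    {"role": "user", "content": "Hi, I'm ready to start the interview."},
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works for this illustration
    messages=messages,
)
print(response.choices[0].message.content)
```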
I remember there was a site with a list of ChatGPT personas for you to copy. Now, however, you don’t need to copy and paste these prompts or write one on your own: you can describe to GPT Builder what you want in plain English, and it will generate the prompt for you.
After checking the prompt GPT Builder generated, I found several prompting techniques applied, such as role prompting, setting goals, and specifying how the GPT should behave.
This is further proof that OpenAI wants to make its products as easy to use as possible, and the best way to simplify them is to stop forcing users to learn prompt engineering.
Is prompt engineering already dead?
When writing this article, I was tempted to title it “Prompt engineering is dead,” but then I remembered that all of these upgrades come from research papers published years before OpenAI incorporated them into its models.
For example, Chain of Thought prompting comes from a paper released in January 2022 (ChatGPT wasn’t even a thing back then), and it’s only now being incorporated into OpenAI’s models.
As Sahar Mor from AI Tidbits once mentioned in one of my previous articles, the research community keeps publishing better techniques that take time to make their way into a GPTs-like interface. Techniques like “Everything of Thoughts” and “Take a deep breath,” presented in papers in 2023, might be incorporated in the future.
That said, I still believe learning prompt engineering is becoming less relevant for most users. In the end, OpenAI’s goal is to make its products accessible and user-friendly for a broader audience. Every time OpenAI incorporates a prompting technique into its models, we no longer need to learn or apply that technique ourselves; we can let the model do it on its own.