OpenAI just published the authoritative guide to better prompt engineering with ChatGPT.
Called GPT best practices, the article outlines six strategies for getting better results from ChatGPT when using GPT-4. Each strategy is accompanied by a number of tactics for putting it into practice.
The strategies are as follows:
- Write clear instructions. Tactics for writing clearer instructions include: adding detail to get more relevant answers, asking the model to adopt a persona, and specifying the steps you want GPT-4 to take to do what you want (several of these tactics are combined in the first sketch after this list).
- Provide reference text. To cut down on hallucinations, it helps to instruct GPT-4 to use a reference text when answering. Says OpenAI: “In the same way that a sheet of notes can help a student do better on a test, providing reference text to GPTs can help in answering with fewer fabrications.”
- Split complex tasks into simpler, smaller tasks. You can do this in a number of ways, including: prompting GPT-4 to use intent classification to identify the most relevant instructions, summarizing or filtering earlier parts of a long conversation, and summarizing long documents piece by piece.
- Give it time to “think.” This means telling GPT-4 to work through its chain of reasoning before settling on an answer, which helps it reason more reliably.
- Use external tools. OpenAI recommends you “compensate for the weaknesses of GPTs by feeding them the outputs of other tools.” Examples include using a text retrieval system to give GPT-4 relevant documents to work from, or a code execution engine to help it do math and run code.
- Test changes systematically. OpenAI warns “In some cases a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples.” So you need to test prompt changes against a representative set of examples to know whether performance actually improved (a bare-bones version of this kind of check appears in the second sketch after this list).
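To make a few of these tactics concrete, here is a minimal Python sketch (not taken from OpenAI's guide) that combines a persona, pasted-in reference text, and a request to reason step by step in a single GPT-4 call. It assumes the OpenAI Python SDK (v1 or later) is installed and an OPENAI_API_KEY environment variable is set; the refund-policy text and the question are made-up examples.

```python
# A minimal sketch combining three tactics from OpenAI's guide: a persona in the
# system message, reference text pasted into the prompt, and an explicit request
# to reason step by step before answering.
# Assumes: openai Python SDK v1+, and OPENAI_API_KEY set in the environment.
# The reference text and question below are hypothetical examples.
from openai import OpenAI

client = OpenAI()

reference_text = """Refund policy: Purchases can be refunded in full within 30 days.
After 30 days, annual plans are refunded on a prorated basis."""

question = "A customer bought an annual plan 45 days ago. Can they get a refund?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            # Persona plus grounding rules (write clear instructions / provide reference text)
            "role": "system",
            "content": (
                "You are a customer support specialist. "
                "Answer using only the reference text provided. "
                "If the answer is not in the reference text, say you don't know."
            ),
        },
        {
            # Reference text and the question, with a request to reason first (give it time to "think")
            "role": "user",
            "content": (
                f'Reference text:\n"""\n{reference_text}\n"""\n\n'
                f"Question: {question}\n\n"
                "Work through your reasoning step by step, then state your final answer."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Swapping out the system message or deleting the “step by step” line is an easy way to see how much each tactic changes the output.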
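And to illustrate the last strategy, here is an equally rough sketch of testing a prompt change systematically: it runs two prompt variants over the same small evaluation set and counts how often each produces an answer containing an expected keyword. The evaluation questions, expected keywords, and the substring check are all stand-ins; a real evaluation needs a larger, representative example set and a more careful scoring method.

```python
# A bare-bones sketch of testing prompt changes systematically: run each prompt
# variant over the same evaluation set and compare simple pass rates.
# Assumes: openai Python SDK v1+, and OPENAI_API_KEY set in the environment.
# The eval questions, expected keywords, and substring check are hypothetical.
from openai import OpenAI

client = OpenAI()

reference_text = """Refund policy: Purchases can be refunded in full within 30 days.
After 30 days, annual plans are refunded on a prorated basis."""

# (question, keyword the answer should contain)
eval_set = [
    ("Can a customer get a full refund 10 days after purchase?", "yes"),
    ("How are annual plans refunded 45 days after purchase?", "prorated"),
]

prompt_variants = {
    "v1": "Answer the customer's question.",
    "v2": "Answer the customer's question using only the refund policy provided. Be concise.",
}

for name, system_prompt in prompt_variants.items():
    passed = 0
    for question, keyword in eval_set:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": f"Refund policy:\n{reference_text}\n\nQuestion: {question}"},
            ],
        )
        answer = response.choices[0].message.content
        if keyword.lower() in answer.lower():
            passed += 1
    print(f"{name}: {passed}/{len(eval_set)} checks passed")
```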
“These may be worth trying with other tools, too,” says Marketing AI Institute founder/CEO Paul Roetzer on Episode 50 of the Marketing AI Show. While OpenAI notes these strategies are designed for GPT-4, the same logic may also help when you use tools like Bard and Claude.
It’s imperative to try prompting multiple times, says Roetzer. If you get a bad output on your first prompt, it may not be GPT-4’s fault—you may just need to take a different approach or provide more detail.
“It’s almost like briefing an intern, like all the depth of detail of exactly what you need,” he says.
Last, but not least, think of this as a critical skill you need to move forward in your career—because it is.
“Large language models are going to be a critical part of every marketer's job, every business,” says Roetzer.
Don’t get left behind…
You can get ahead of AI-driven disruption—and fast—with Piloting AI for Marketers, a series of 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders to increase productivity and performance with artificial intelligence.
The course series contains 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a Professional Certificate upon completion.
After taking Piloting AI for Marketers, you’ll:
- Understand how to advance your career and transform your business with AI.
- Have 100+ use cases for AI in marketing—and learn how to identify and prioritize your own use cases.
- Discover 70+ AI vendors across different marketing categories that you can begin piloting today.
Mike Kaput
As Chief Content Officer, Mike Kaput uses content marketing, marketing strategy, and marketing technology to grow and scale traffic, leads, and revenue for Marketing AI Institute. Mike is the co-author of Marketing Artificial Intelligence: AI, Marketing and the Future of Business (Matt Holt Books, 2022). See Mike's full bio.