Ever since OpenAI burst onto the scene in late 2022 with its free version of ChatGPT, many of us have had a love-hate relationship with large language models (LLMs). On the one hand, we have been blown away by the capabilities of these chatty, helpful bots; on the other, we find they sometimes wildly misinterpret what we are asking, or produce results that, while they appear accurate and useful, bear no relationship to reality!
In an earlier post I explained how LLMs differ from traditional search engines like Google or Bing, and why this happens.
In this post, I’ll discuss an important way you can improve your interactions with LLMs and increase the likelihood that they will provide relevant, accurate responses. You can do this by developing thoughtful habits that govern how you ask the LLM for assistance. This is called “prompting.”
What Do We Mean by Prompting?
“Prompting” is just the term used to describe the instructions we provide to an LLM when we initiate an interaction. Understanding the anatomy of a good prompt can appreciably increase your chances of getting accurate and relevant information from the LLM.
I believe the core characteristics of a good prompt can be summarized in three words: Roles, Goals, and Limitations.
Roles: Defining Who’s Who in the Conversation
Assigning clear roles in your prompt guides the LLM to adopt the right tone, expertise, and style when responding, and it can also help the model search out the best training data to use when formulating its response. Defining the roles involves two perspectives:
• The LLM’s Role: Specify what persona or expertise you want the model to embody. For example, “Act as a financial advisor,” or “You are a high school science teacher.” This helps the model tailor its language, depth, and focus, producing responses that are contextually appropriate and that are aligned with your expectations.
• The User’s Role: Describe your own background or needs. For example, if you are looking for ideas to plan a vacation, what are the ages and interests of the folks who will be making the trip? This information can help guide the LLM to areas of training more closely aligned to your objectives.
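These two perspectives map neatly onto the way most chat-style LLM APIs structure their input: the LLM’s role goes into a “system” message, and your own background and request go into a “user” message. The sketch below is illustrative only; the helper function and example text are hypothetical, not taken from any particular vendor’s API.

```python
# Sketch: mapping the two roles discussed above onto the role-tagged
# message format used by most chat-style LLM APIs. build_messages is a
# hypothetical helper; no real API is called here.

def build_messages(llm_role: str, user_context: str, question: str) -> list[dict]:
    """Assemble a role-tagged message list for a chat-style LLM."""
    return [
        # The LLM's role: the persona or expertise the model should adopt.
        {"role": "system", "content": llm_role},
        # The user's role: your background, followed by the actual request.
        {"role": "user", "content": f"{user_context}\n\n{question}"},
    ]

messages = build_messages(
    llm_role="Act as a travel agent who specializes in family vacations.",
    user_context="We are a family of four; our kids are 8 and 12 and love hiking.",
    question="Suggest three destinations for a one-week summer trip.",
)
```

A message list like this would then be passed to whichever chat model you use; the point is simply that both roles have a natural home in the prompt.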
Goals: Clarifying the Desired Outcome
Every effective prompt is anchored in a clear goal. Before you ask, consider:
• What do you want to achieve? Are you seeking a summary, a detailed explanation, a list of pros and cons, a creative story, or something else? Articulate your objectives explicitly.
• What does success look like? Describe the ideal output format: should it be a bulleted list, a step-by-step guide, or a concise paragraph? If you want the response to include hyperlinks to websites that support its claims, ask for it! Remember, the more specific you are, the more likely the LLM will align its response to fit your needs.
Limitations: Setting the Rules and Boundaries
Outlining the acceptable limits you want to impose on the LLM is important as well, and it can help control the quality and reliability of the output. Think in terms of how you would advise a “human” assistant who was doing research to provide your desired response (your “goals”).
For example, maybe it is very important that the LLM seek out the most current information available. If the model can perform an internet search (as most can today), be sure to reinforce the need for it to do that when crafting an answer for you. If you want each assertion in the response to include links to reliable internet-based source material, you can instruct the model to provide those.
It’s also a good idea to ask your LLM not to hallucinate responses. That might sound a bit odd; after all, if you are asking for advice on the best restaurant for Indian food in Little Rock, why would the LLM ever think you wanted it to make up a dining location?
The answer is that sometimes users want a creative response, not necessarily an accurate, real-world example. Sometimes users want the LLM to hallucinate: to be “creative,” taking existing data and extrapolating and arranging it in new ways. Asking the LLM not to do this (assuming you need factual responses) does seem to reduce the model’s tendency to guess or invent sources. Positive requests such as “provide sources for any conclusion that does not involve common knowledge” or “if you have inadequate data to state a conclusion, say so; do not speculate” can help.
You might also insist that the LLM take on your task one step at a time, proceeding to the next step only after it has displayed results and received your OK to continue. This can be particularly useful in directing the model so it doesn’t waste time and energy going down a path that isn’t relevant or useful.
A Practical Example: Putting It All Together
Here’s how a novice might structure a prompt using the three characteristics outlined above:
“You are an AI language model acting as a career counselor. I am a college student with no prior work experience, seeking advice on entering the tech industry. Please provide a step-by-step guide detailing how I can determine what options are available and how I can decide whether a given path would be a good ‘fit’ for me. Consider relevant interests, character traits, course selection, and life goals. Your response should include major bullet points and subcategories. It should be written in a conversational tone. You should include reputable sources for each step and avoid speculation; stick to verifiable information only. Limit your response to approximately 750 words.”
- Roles: AI as career counselor; user as college student.
- Goals: Step-by-step guide, bullet points, reputable sources.
- Limitations: No speculation, only verifiable information, citations required, length limited to roughly 750 words.
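The three-part structure above lends itself to a simple template you can reuse for any request. The sketch below is a minimal, hypothetical helper, assuming you keep the three parts as separate pieces of text and join them into one prompt; the function name and example wording are my own, not from any library.

```python
# Sketch: assembling a prompt from the three characteristics discussed
# above. build_prompt is a hypothetical helper for illustration only.

def build_prompt(roles: str, goals: str, limitations: str) -> str:
    """Combine the three parts of a good prompt into one instruction block."""
    # Blank lines between the parts keep each characteristic distinct.
    return "\n\n".join([roles, goals, limitations])

prompt = build_prompt(
    roles=("You are a career counselor. I am a college student with no prior "
           "work experience, seeking advice on entering the tech industry."),
    goals=("Provide a step-by-step guide, with major bullet points and "
           "subcategories, written in a conversational tone."),
    limitations=("Include reputable sources for each step, avoid speculation, "
                 "and limit your response to approximately 750 words."),
)
```

Keeping the parts separate also makes it easy to swap in a different goal or a stricter set of limitations while reusing the same roles.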
Experiment, Practice, and Don’t Expect Perfection
We tend to forget that we are at the very earliest stages of learning how best to interact with LLMs. There are many experts out there with ideas about how to work with these cheerful bots. At the same time, the companies that create LLMs are constantly working to improve them: to make them more intuitive and responsive, and less prone to hallucination.
For now, the best strategy seems to be to err on the side of giving LLMs too much guidance rather than too little. In my experience, focusing on the three characteristics of a good prompt — roles, goals, and limitations — is a simple yet effective way to get better, more accurate results from your interactions with LLMs.
However, “better” is not “perfect.” Even if you prompt an LLM not to hallucinate a response, it will still do so from time to time. Even if you ask it to provide authoritative references, you will occasionally receive broken links, or a citation to a source that may not exist. For now, this is an issue we will have to accept. When accuracy is important, as it most often is, you simply must check, and in some cases double-check, an LLM’s response to make sure it isn’t “making stuff up.”
Yet, even with these trade-offs, I find that I use LLMs more each day. While no LLM wrote this post for me, one did shorten the time I needed to spend working on it, and it offered several suggestions to improve what I wrote. The net result, I believe, is a better product, produced in less time.
For me, that is not a bad trade-off. Happy Prompting!