Prompt Engineering, Explained
Crafting the perfect prompt is the key to unlocking the true potential of language models.
Hey Product People,
What is Prompt Engineering?
Prompt engineering is the practice of crafting clear, well-structured instructions so that you can achieve better results from large language models (LLMs).
An LLM is an artificial intelligence system trained on a wide variety of datasets to respond to queries. It reacts in accordance with the cues you give it and can assist you with tasks like generating text, translating languages, and writing different kinds of creative content. However, LLMs can only generate outputs that are consistent with the prompts they are given.
We are the generation witnessing the emergence and evolution of AI, and artificial intelligence is no longer a new buzzword for anyone. Almost everyone uses chatbots like ChatGPT and Google Bard to get answers to their questions or to put their thoughts into words. Yet as skilled as these models are at answering questions, there are times when they cannot provide an accurate response to a query.
More often than not, an LLM fails to answer your query accurately because it never received a precise, well-structured prompt.
Why is prompt engineering trending?
As large language models (LLMs) become popular across a growing range of applications, demand is rising for professionals who can design effective prompts to guide them.
LLMs such as ChatGPT and Bard have demonstrated extraordinary ability and adaptability, from writing and generating text to translating languages. Well-designed prompts open these models up to a wider audience and allow people who do not know much about AI to use them for a variety of purposes.
The need for AI-powered solutions is expanding rapidly across all domains. Many small and large businesses are using artificial intelligence to enhance decision-making, customer service, and process automation.
Reports by Gartner indicate that up to 80% of organizations will incorporate AI by 2025. This surge in adoption fuels the need for prompt engineers who can develop and deploy AI-powered solutions aligned with organizational goals.
Even though this is a relatively new discipline, people are embracing it quickly. Before putting it to work, it helps to understand the fundamentals; the sections below walk through the building blocks of prompt engineering.
Elements of a Prompt
As we cover more examples and applications of prompt engineering, you will notice that certain elements make up a prompt.
A prompt contains any of the following elements:
Instruction - a specific task or instruction you want the model to perform
Context - external information or additional context that can steer the model to better responses
Input Data - the input or question that we want a response to
Output Indicator - the type or format of the output.
You do not need all four elements in a prompt, and the format depends on the task at hand. A minimal sketch of how the elements combine appears below, followed by concrete examples.
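To make the structure tangible, here is a minimal sketch in Python that assembles the four elements into a single prompt string. All of the placeholder text is hypothetical, chosen purely for illustration:

```python
# A minimal sketch of assembling the four elements into one prompt.
# All placeholder text is hypothetical, chosen purely for illustration.
instruction = "Classify the text into neutral, negative, or positive."
context = "You are reviewing short customer comments about a mobile app."
input_data = 'Text: "I think the new onboarding flow was okay."'
output_indicator = "Sentiment:"

# Not every prompt needs all four parts; drop what the task doesn't require.
prompt = "\n\n".join(
    part for part in (instruction, context, input_data, output_indicator) if part
)
print(prompt)
```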
Examples of prompts for product managers
Prompt 1:
Instruction: Prioritize a list of product features based on user needs and business goals.
Context: The product team has a backlog of potential features that could be added to the product.
Input Data: A list of product features, along with user feedback and data on feature usage.
Output Indicator: A prioritized list of product features, ranked according to their importance for users and their alignment with business objectives.
Prompt 2:
Instruction: Create a user persona for the target audience of a new product.
Context: The product is a software application designed to help small businesses manage their finances.
Input Data: None
Output Indicator: A detailed user persona that describes the target audience's demographics, goals, challenges, and behaviors.
Prompt 3:
Instruction: Develop a product roadmap for a new mobile app that will meet customer needs and can be delivered on time and within budget.
Context: The target audience for the app is busy professionals who need a way to manage their tasks and stay organized. The app should be easy to use, have a clean and intuitive interface, and be integrated with popular productivity tools.
Input Data: None
Output Indicator: A product roadmap that includes timelines, milestones, and resource allocation.
Prompt 4:
Instruction: Analyze user feedback to identify the most common pain points and areas for improvement.
Context: The product team has been collecting user feedback through surveys, interviews, and in-app feedback forms.
Input Data: A dataset of user feedback containing comments, suggestions, and bug reports.
Output Indicator: A report summarizing the key findings of the user feedback analysis, including insights into user needs, pain points, and areas for improvement.
Prompt 5:
Instruction: Develop a strategy to improve customer onboarding and reduce churn.
Context: The company has a high churn rate among new customers.
Input Data: Data on customer onboarding processes, churn rates, and customer satisfaction.
Output Indicator: A strategy to improve customer onboarding and reduce churn, including recommendations for simplifying the onboarding process, improving customer support, and increasing customer engagement.
Prompt 6:
Instruction: Develop a competitive analysis to identify the strengths and weaknesses of the company's products and services.
Context: The company is operating in a competitive market.
Input Data: Information about competitors' products, services, pricing, and marketing strategies.
Output Indicator: A competitive analysis report that summarizes the key findings, including competitor strengths, weaknesses, and opportunities for differentiation.
Prompt 7:
Instruction: Create a plan for measuring the success of a new product launch.
Context: The company is launching a new product.
Input Data: Product launch goals, marketing metrics, and customer satisfaction data.
Output Indicator: A plan for measuring the success of the product launch, including key metrics to track, data collection methods, and reporting procedures.
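Prompts like these can be sent to a model programmatically. Below is a hedged sketch of wiring Prompt 1 into an API call; it assumes the `openai` Python package (v1+), an `OPENAI_API_KEY` environment variable, and an illustrative model name, and the feature data is invented for the example. Adapt it to whichever provider and model you actually use:

```python
# A sketch of sending Prompt 1 to an LLM via an API. Assumes the `openai`
# Python package (v1+) and an OPENAI_API_KEY environment variable; the
# model name and feature data below are illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = """Prioritize a list of product features based on user needs and business goals.

Context: The product team has a backlog of potential features.

Features and feedback (hypothetical):
- Dark mode: requested by 40% of surveyed users
- CSV export: requested by 12%, blocks two enterprise deals
- Offline sync: requested by 8%, high engineering cost

Output: a prioritized list with a one-line rationale per feature."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # swap in your provider's model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```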
Prompting Techniques
There are many different types of prompt engineering techniques, but some of the most common include:
N-shot prompting: This technique involves providing the LLM with a few examples (N "shots") of the desired output before asking it to generate its own output. It can be used for a variety of tasks, such as translation, summarization, and code generation; a minimal sketch appears after this list.
Chain-of-thought (CoT) prompting: This technique involves breaking down a complex task into a series of smaller, simpler tasks. The LLM is then instructed to complete each task sequentially, with each task’s output serving as the input for the subsequent task. This can be used for tasks such as question-answering and problem-solving.
Generated knowledge prompting: This technique involves generating prompts that contain additional knowledge or information that the LLM may not have been trained on. This can be used to improve the accuracy and completeness of the LLM’s output.
Positive and negative prompting: This technique involves using both positive prompts (prompts that encourage the LLM to generate certain types of output) and negative prompts (prompts that discourage the LLM from generating certain types of output). This can be used to control the style and tone of the LLM’s output, as well as to prevent it from generating harmful or offensive content.
Interactive context-aware prompting: This technique involves iteratively refining the prompt based on the LLM’s output. This can be used to help the LLM understand and respond to complex or ambiguous prompts.
Role-playing prompting: This technique involves instructing the LLM to take on a specific role or identity when generating its output. This can be used to generate more creative and engaging outputs, such as poems, stories, and scripts.
Multiple Choice Question (MCQ) Prompting: This method consists of giving the LLM several options for a single question, with the instruction to choose the correct answer from the list.
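As promised above, here is a minimal few-shot (N-shot) prompt sketch. The classification labels and feedback snippets are hypothetical; the point is the shape: worked examples first, then the new input the model should complete:

```python
# A minimal few-shot (N-shot) prompt: two worked examples, then the new
# input the model should complete. Labels and feedback are hypothetical.
few_shot_prompt = """Classify each piece of user feedback as BUG, FEATURE_REQUEST, or PRAISE.

Feedback: "The app crashes every time I open settings."
Label: BUG

Feedback: "Love the new dashboard, super clean!"
Label: PRAISE

Feedback: "It would be great if I could export reports to PDF."
Label:"""
# Sent to an LLM, this should complete with "FEATURE_REQUEST".
```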
Prompting Principles
These five principles provide a solid foundation for crafting effective prompts that can guide LLMs to generate desired outputs across a wide range of tasks and applications.
Clarity and Specificity: Write clear, concise, and specific instructions that accurately convey the desired task or output. The prompt should be unambiguous and easy for the model to understand.
Goal Definition: Clearly define the goal of the prompt, whether it's generating creative text formats, translating languages, writing different kinds of creative content, or answering questions in an informative way.
Model Capabilities: Understand the capabilities and limitations of the LLM being used. Tailor the prompt to match the model's strengths and avoid asking for tasks that are beyond its scope.
Iteration and Testing: Experiment with different prompts and techniques to refine the output and achieve the desired results. Continuously test and iterate on the prompt to improve its effectiveness; a small testing sketch follows this list.
Feedback and Refinement: Incorporate feedback from users or experts to refine the prompt and improve its performance. Continuously refine the prompt based on feedback to ensure it consistently generates desired outputs.
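To ground the iteration-and-testing principle, here is a minimal sketch of running two prompt variants over the same small test set so you can compare outputs side by side. The `call_llm` stub, variant names, and test cases are all hypothetical; swap in a real API call:

```python
# A sketch of the "Iteration and Testing" principle: run two prompt
# variants over the same test cases and compare the outputs side by side.
# `call_llm`, the variant names, and the test cases are hypothetical.

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs as-is; replace with a real provider call.
    return f"(model output for: {prompt[:40]}...)"

PROMPT_VARIANTS = {
    "v1_terse": "Summarize this feedback in one sentence: {feedback}",
    "v2_structured": (
        "You are a product analyst. Summarize the feedback below in one "
        "sentence, naming the affected feature first.\n\nFeedback: {feedback}"
    ),
}

test_cases = [
    "Checkout keeps timing out on slow connections.",
    "I wish search could handle typos.",
]

for name, template in PROMPT_VARIANTS.items():
    for case in test_cases:
        output = call_llm(template.format(feedback=case))
        print(f"[{name}] {case!r} -> {output!r}")
```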
Top 6 Lessons Learned
Finally, what have we found in practice? Here are 6 helpful tips we’ve observed along the way that could help make your life easier — and your product better.
Show, Don't Tell the LLM: Include few-shot examples in your prompts. Include other instructions too, but pair them with examples. Research and anecdotal evidence both support it: results are better. The most advanced teams dynamically inject few-shot examples that relate to each specific completion.
Version your prompts: You'll likely iterate on your prompt over time, filling in new instructions to handle edge cases, adjusting few-shot examples, tuning the context provided, and so on. Be prepared to save different versions along the way so that you can easily revert changes you don't like or compare a few options later; a minimal sketch of versioning plus output logging follows these lessons.
Save all the interesting outputs: You’ll need both good and bad examples as test cases to make sure you’re keeping your quality bar high & addressing failure cases, and you’ll need to know the version of the prompt that generated them. Lots of PMs discover too late that they’ll need these test cases later.
Pick the model for your prompt: The common progression we see is from OpenAI to multiple providers, especially Anthropic, Google & Cohere. There are cost, latency & redundancy benefits from multiple providers, but we frequently hear that some models are just better than others for each prompt. It’s common to find teams using 2-3 providers in production, and some use more.
Keep an eye on production: LLMs aren't good for "set it and forget it" features. Edge cases emerge for customers, models change (aka "drift"), latencies can spike, and result quality can change drastically as a result. Figure out how to monitor for quality, latency, cost, and errors so you can catch regressions before your customers do.
Design for a feedback loop: If this is your first time building with ML tech, you’ll want to think about feedback loops you can create in your customer experience. These might be active/explicit (e.g. thumbs up/down with a comment, like ChatGPT offers), or passive/implicit (e.g. tracking an event for when a customer makes use of an LLM-generated output, or when they request edits). Review a sample from each bucket of data on a regular basis to spot ways to improve.
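Here is the sketch mentioned under "Version your prompts": a tiny pattern for keying prompts by version and logging every interesting output, good or bad, with the version that produced it. The IDs, file name, and fields are hypothetical conventions, not a standard:

```python
import datetime
import json

# Prompt templates keyed by "name@version" so any output can be traced
# back to the exact prompt that produced it. IDs are hypothetical.
PROMPTS = {
    "summarize_feedback@v1": "Summarize this feedback: {feedback}",
    "summarize_feedback@v2": (
        "Summarize this feedback in one sentence, "
        "naming the affected feature first: {feedback}"
    ),
}

def log_completion(prompt_id: str, input_text: str, output_text: str, verdict: str) -> None:
    """Append one labeled example to a JSONL log for future test cases."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_id": prompt_id,  # ties the output to an exact prompt version
        "input": input_text,
        "output": output_text,
        "verdict": verdict,      # e.g. "good" or "bad"
    }
    with open("llm_outputs.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical input and output, logged as a "good" test case.
log_completion(
    "summarize_feedback@v2",
    "Checkout keeps timing out on slow connections.",
    "Checkout: intermittent timeouts during payment on slow connections.",
    "good",
)
```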