Prompt Engineering in OpenQuestion: A Beginner’s Guide

In the conversational AI ecosystem Large Language Models (LLMs) are rising as powerful stochastic language calculators which can also be used to fulfill many relevant tasks in natural language processing. Though highly capable, LLMs are not autonomous and rely on prompts (which in essence are well-thought-out instructions) to make meaningful responses.

Prompt Engineering can be defined as the processes and strategies taken for instructing LLMs to generate the best possible output. In this article we’ll go through essential tips that everyone writing prompts should consider making special focus on how prompts are used in OpenQuestion.

Why should you care about Prompt Engineering?

You might be thinking: I already give instructions to a Large Language Model, do I really need to think on the best way to do this? A big risk when using LLMs is the lack of control on the generated output. This is particularly relevant in a business context where costs and customers experience may be negatively affected by a lack of a controlled response.

Enter Prompt Engineering: By carefully thinking on words, audience, context, and phrase order in the instruction to the LLM, developers can exert influence on the generated output and guarantee a safe and responsible use of LLMs.

What should you consider when writing prompts?

Prompting is an active exploration field and literature on this topic is growing exponentially. Nonetheless, we can identify a set of essential considerations for creating a successful prompt. In the following section we’ll explore some of them.

Disclaimer: Prompting is an iterative process

Following the idea above, the fact that there are no rules set in stone give developers the freedom and creativity to experiment with different types of prompts. This is perhaps why Prompt Engineering is often referred to as an “art”. Developing a good prompt usually takes more than one attempt so if you have been tasked with writing prompts, get ready to experiment with different combinations of phrases, words, and instructions to get to your desired result.

Mind the basic structure of a prompt

Even though prompts can be used for a variety of purposes, there are always elements common to all of them. When writing a prompt, consider including an instruction (the task to perform) context (additional information used to aid the model in generating a response) and a desired output. Moreover, you can include LLM-specific parameters such as temperature to control the ‘creativity degree’ of the model.

If looking to generate “customer-facing” outputs, you could also align answers to your brand guidelines by adding instructions such as the persona “writing” the answer and the tone and style used by specific customer segments. An example of this could be something like: “You are a customer service agent. Write a reply to a customer complaint. Use a kind and empathetic tone”.

In OpenQuestion, prompts are used for instructing GPT models to summarize dialogs and long user inputs. The dialog summarization feature recaps the conversations between customers and IVR to provide contact center agents with more context on the call once it is transferred. The image below show the summarization prompts used in the solution; note how the briefest of prompts includes all the basic elements listed above.

OpenQuestion also gives the possibility of enhancing intent recognition by summarizing long user inputs before they are classified by the NLU engine. The image below shows the instruction which includes a user persona and a specific focus area.

Consider how you phrase your instructions

Let’s zoom in into the instruction part of the prompt. LLMs tend to yield better results when presented with specific rather than vague tasks. The less ambiguous the instructions, the closer the outcomes align with your intended goals (and the more control on the output), so be aware of including keywords and explicit instructions in your prompts. Additionally, including fallback instructions (if you can’t do A, then do B" ), examples and additional context for your prompts is also a good idea to address the risk of hallucinations.

Including Negative Prompts (i.e., telling the LLM what not to do) is also a good strategy to control outputs. Typically, this is employed in to filter out offensive, irrelevant, inappropriate, or possibly misinterpreted content, words, or expressions. For instance, a negative prompt could be something towards: “Don’t create any content that promotes discrimination or hate speech".

Since at the time of writing the article a lot of people are experimenting with LLMs, there is an abundance of tips for writing better prompts across the internet. Regardless of what you read and which of these tips you try, make sure to keep these basic elements present.

Consider the type of task to perform

One of the most impressive capabilities of LLMs is their ‘ability’ to perform different tasks. Naturally, different tasks call for different instructions and consequently a different phrase and wording structure to use. We are currently observing the emergence of different prompting techniques such as Chain-Of-Thought Prompting, which involves breaking down the instruction in smaller tasks and solve each one before answering, or Few-Shot Prompting which consists on adding additional context in the prompt to enhance its performance (like in the example used above for summarizing conversations in OpenQuestion).

Prompts can influence costs

Most LLM service providers will charge based on tokens. Since the overall cost of utilizing the LLM will then rely on the length of both the initial request and the generated response, you will want to keep these under control by adding length restrictions in the instruction and not creating extensive prompts.

OpenQuestion has been designed to control LLM usage, and this starts in the prompts. The image below shows a prompt used for answering questions from documents or URLs using LLMs without the need of creating flows. In here we can see how a length constraint is included in the instructions to avoid extensive answers. Developers can also take advantage of the token counter available in the Dashboard to keep track of costs.

Bonus: Beware of Prompt Hacking

While incredibly versatile and useful for businesses, LLM prompting can also be used for malicious purposes. By “tricking” the LLM with carefully crafted prompts (a.k.a Prompt injection) hackers have developed techniques such as Jailbreaking or Prompt Leaking which can generate answers that break ethical guidelines (such as instructions on how to break into a home), corrupt the original bot’s tone and persona (picture a your bot suddenly offering or replying based on products from your competitors) or collect personal data. Luckily, there are also several countermeasures that can be applied to minimize these risks.

Final remarks

In this article we’ve explored the relevance of Prompting as one of the basic means of addressing some of the risks and limitations that using LLMs have. We’ve also shared common practices to ensure prompts are on brand, provide safe results and also keep costs controlled. We have also gone through some of the prompts that can be used when implementing a conversational IVR solution in the contact center.

As LLMs slowly consolidate within the CAI ecosystem, prompting is rising as new ability to add to the skillset of everyone wanting to interact with the models. Whether this will become a specific project role or just another skill in the ever-growing skillset of a CAI Developer, its yet to be seen. In the meantime, let’s keep on exploring and prompting!

What are your experiences with prompting? Have you applied any of the considerations above? Any more to add? Let us know in the comments!

3 Likes