Here are six best practices to improve your prompt engineering results. When interacting with LLMs, you need a clear vision of what you want to achieve, and your prompt should demonstrate and initiate that vision. This process is referred to as prompt design, prompt engineering, or casting.
Here Are Six Strategies For Better Results
Write Detailed Prompts
To ensure a relevant response, make sure to include any important details or context in your requests. Failing to do so leaves the burden on the model to guess what you truly intend.
OpenAI advises users to provide as much detail as possible when performing prompt engineering. For instance, users should specify whether they require longer answers or brief replies, whether the response should be simplified, and who the intended audience is.
The best approach is to demonstrate the required response to the LLM.
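As a rough sketch (the prompt wording is illustrative, not taken from OpenAI's guidance verbatim), the difference between a vague and a detailed prompt might look like this:

```python
# A vague prompt leaves the model guessing about length, audience and format.
vague_prompt = "Summarize the meeting notes."

# A detailed prompt spells out audience, length and format up front.
detailed_prompt = (
    "Summarize the meeting notes below for an executive audience. "
    "Keep it under 100 words, use three bullet points, and end with "
    "a single recommended next step.\n\n"
    "Meeting notes:\n"
    "- Q3 revenue up 12%, churn flat\n"
    "- Support backlog growing; two new hires approved\n"
    "- Launch of the self-service portal slips to November"
)

print(detailed_prompt)
```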
Describe To The Model The Persona It Should Adopt
Below is an example of how the persona is defined within the OpenAI Playground; this determines the style of the LLM's responses.
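As a minimal sketch (the persona wording is an assumption for illustration), the same effect is achieved outside the Playground by placing the persona in the system message of a chat request:

```python
# The persona lives in the system message; the wording here is illustrative.
persona = {
    "role": "system",
    "content": (
        "You are a friendly travel agent. Answer in short, upbeat sentences "
        "and always include one budget-friendly suggestion."
    ),
}

user_turn = {"role": "user", "content": "Where should I go for a long weekend in October?"}

# These messages would be passed together to the chat completion endpoint.
messages = [persona, user_turn]
print(messages)
```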
Clearly Segment Prompts
A well-engineered prompt has three components: context, data, and continuation.
The context is set first; it describes the objective to the generative model.
The data is the content the model learns from and operates on within the prompt.
The continuation instructs the generative model on how to proceed: it tells the LLM how to use the context and data, whether that is to summarise, extract keywords, or hold a conversation over a few dialog turns.
The prompt engineering elements are illustrated below:
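As a minimal sketch (the text of each element is illustrative), a segmented prompt can be assembled from the three parts:

```python
# Context: tells the generative model what the objective is.
context = "You are helping a support team triage incoming customer emails."

# Data: the content the model should work from.
data = (
    "Email: 'I was charged twice for my subscription this month and "
    "nobody has replied to my ticket from last week.'"
)

# Continuation: tells the model how to use the context and data.
continuation = (
    "Summarise the complaint in one sentence and label its urgency "
    "as low, medium or high."
)

prompt = f"{context}\n\n{data}\n\n{continuation}"
print(prompt)
```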
With the advent of ChatML, users are mandated to segment prompts, as seen in the example below:
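A minimal sketch of such a segmented request, assuming the pre-1.0 openai Python client, the gpt-3.5-turbo model, and illustrative message contents:

```python
import openai  # assumes the pre-1.0 openai client and OPENAI_API_KEY set in the environment

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # model name is illustrative
    messages=[
        # System message: describes the assistant's role and behaviour.
        {"role": "system", "content": "You are a concise assistant for an e-commerce help desk."},
        # User and assistant messages carry the conversation so far.
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Your order shipped on Monday and should arrive within 3 days."},
        {"role": "user", "content": "Can I change the delivery address?"},
    ],
)

print(completion["choices"][0]["message"]["content"])
```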
You can see that the model is defined and, within messages, the system role is given a description, while the user and assistant roles each carry the content of their turns.
Decompose The Sequence Of Steps To Complete The Task
This is also referred to as chain-of-thought prompting, with the aim of eliciting chain-of-thought reasoning from the LLM.
In essence, chain-of-thought reasoning is achieved by creating intermediate reasoning steps and incorporating them into the prompt.
Demonstrating these steps significantly improves the LLM's ability to perform complex reasoning, and with it the quality of the results.
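A minimal sketch of a chain-of-thought prompt (the worked example and question are illustrative):

```python
# The prompt includes one worked example with its intermediate reasoning steps,
# then asks a new question in the same format.
cot_prompt = (
    "Q: A cafe sold 23 coffees in the morning and 17 in the afternoon. "
    "Each coffee costs $4. How much revenue did the cafe make?\n"
    "A: In the morning 23 coffees were sold and in the afternoon 17, "
    "so 23 + 17 = 40 coffees in total. Each costs $4, so revenue is "
    "40 * 4 = $160. The answer is $160.\n\n"
    "Q: A bakery sold 12 loaves on Saturday and 19 on Sunday. "
    "Each loaf costs $3. How much revenue did the bakery make?\n"
    "A:"
)

print(cot_prompt)
```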
Provide Examples via Few-Shot Training
The example below shows how a number of examples are given via a few-shot approach before the final question is asked:
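A minimal sketch of such a few-shot prompt, with illustrative labelled examples preceding the final, unlabelled input:

```python
# A handful of labelled examples are shown first, then the model is asked
# to classify a new input in the same format.
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The checkout was quick and the courier was friendly.\n"
    "Sentiment: Positive\n\n"
    "Review: The app crashed twice before I could pay.\n"
    "Sentiment: Negative\n\n"
    "Review: Delivery took three weeks and the box was damaged.\n"
    "Sentiment: Negative\n\n"
    "Review: Great value for money, I would order again.\n"
    "Sentiment:"
)

print(few_shot_prompt)
```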
Provide The Output Length
You can request the model to generate outputs with a specific target length. This can be specified in terms of the count of words, sentences, paragraphs, or bullet points.
However, asking the model to generate an exact number of words is not very precise.
The model is more accurate in producing outputs with an exact number of paragraphs or bullet points.
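As a quick illustration (the wording is an assumption, not a prescribed template), a target length can be stated directly in the instruction:

```python
# Asking for a count of bullet points or paragraphs is more reliable
# than asking for an exact number of words.
length_prompts = [
    "Summarise the article below in exactly 3 bullet points.",
    "Explain the refund policy below in 2 short paragraphs.",
    "Describe the product below in about 50 words.",  # word counts are only approximate
]

for p in length_prompts:
    print(p)
```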
I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.