Retrieval Augmented Generation (RAG) Safeguards Against LLM Hallucination

July 24, 2023 · 4 min read


Cobus Greyling

A contextual reference increases LLM response accuracy and mitigates hallucination. This article gives a few practical examples to illustrate how explicit, relevant context should be part of prompt engineering.

The Retrieval Augmented Generation (RAG) feature of LLM systems allows businesses to utilise their own data for generating responses.

This technique enables in-context learning without costly fine-tuning, making the use of LLMs more cost-efficient.

By leveraging RAG, companies can use the same model to process and generate responses based on new data, while being able to customise their solution and maintain data relevance.

By contrast, without RAG, models may return incorrect or outdated knowledge, as they are trained on a broad range of general data, and updating them requires intensive fine-tuning resources.

Thus, RAG allows organisations to optimise their integration of LLMs while gaining benefits such as fact-checking components, up-to-date data, and business-specific data, circumventing the problem of an LLM being trained and then frozen in time.

The image below shows a straightforward query posed to gpt-4-0613 with the question: "For which club does Lionel Messi play?"

The model includes a caveat along with its dated answer.

In the image below, the RAG approach is taken: a contextual reference (shown in red) is included in the prompt, and the model responds with the correct answer in this instance.

Below is the full code to run the example, first without the RAG approach:
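The code itself does not survive in this text version of the article. Below is a minimal sketch of what the first, context-free call could look like against the OpenAI chat completions REST endpoint; the `build_messages` and `ask` helpers are my reconstruction, not the article's original code.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_messages(question, context=None):
    """Assemble the chat messages, optionally prepending a contextual reference."""
    content = question if context is None else f"{context}\n\n{question}"
    return [{"role": "user", "content": content}]

def ask(question, context=None, model="gpt-4-0613"):
    """Send a single-turn chat completion request and return the model's reply."""
    payload = json.dumps({
        "model": model,
        "messages": build_messages(question, context),
        "temperature": 0,
    }).encode()
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Calling `ask("For which club does Lionel Messi play?")` with no context reproduces the first experiment: the model can only fall back on its training data (an `OPENAI_API_KEY` environment variable is assumed).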

And the answer received is:

As of my latest update in 2021, Lionel Messi plays for Paris Saint-Germain Football Club.

And then supplying the contextual reference in the prompt:
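Reusing the same request shape, the RAG variant simply prepends the contextual reference to the user message. The exact wording of the context string below is illustrative (the article's screenshot shows it in red); only the resulting answer, Inter Miami CF, comes from the article.

```python
# Contextual reference (the part shown in red in the article's screenshot).
# The exact wording here is illustrative.
context = (
    "Contextual reference: in 2023, Lionel Messi signed with the Major League "
    "Soccer club Inter Miami CF."
)
question = "For which club does Lionel Messi play?"

# The context is injected directly into the prompt, ahead of the question.
prompt = f"{context}\n\n{question}"
messages = [{"role": "user", "content": prompt}]
```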

With the correct contextual answer: Lionel Messi plays for Inter Miami CF.

The challenge of course is including the correct amount of context, at the right time, at scale.

A machine-learning pipeline approach needs to be taken when compiling the prompt in real time, while being able to measure and enhance the RAG workflow by testing elements like:

  • Data generation
  • Automatic prompt creation
  • Observing, inspecting, and optimising prompt evaluation metrics, etc.
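As a toy illustration of what compiling the prompt in real time can mean, the sketch below ranks knowledge-base snippets against the incoming question and prepends the best match. The lexical-overlap scoring is a deliberately crude stand-in for a real embedding-based retriever, and every name and snippet here is hypothetical.

```python
from collections import Counter

def overlap_score(text_a, text_b):
    """Crude lexical overlap between two texts; a real pipeline would use
    embedding similarity instead."""
    words_a = Counter(text_a.lower().split())
    words_b = Counter(text_b.lower().split())
    return sum((words_a & words_b).values())

def compile_prompt(question, knowledge_base, top_k=1):
    """Select the top_k most relevant snippets and prepend them to the question."""
    ranked = sorted(knowledge_base,
                    key=lambda snippet: overlap_score(snippet, question),
                    reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"{context}\n\n{question}"

knowledge_base = [
    "Lionel Messi joined Inter Miami CF in 2023.",
    "The 2022 FIFA World Cup was held in Qatar.",
]
prompt = compile_prompt("For which club does Lionel Messi play?", knowledge_base)
```

In production, each of these stages (retrieval, prompt assembly, response evaluation) becomes a measurable pipeline step rather than an inline string operation.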

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language, ranging from LLMs, chatbots, voicebots, development frameworks, data-centric latent spaces & more.
