

ChatGPT APIs & Managing Conversation Context Memory

Cobus Greyling · June 27, 2023 · 4 min read

Currently, ChatGPT is powered by the most advanced OpenAI language models. While OpenAI has made APIs available for these models, the API itself does not manage conversation context and memory.

Managing conversation memory, also referred to as conversational context, has traditionally been one of the challenges of chatbot development frameworks.

The best way to illustrate this is with an example conversation. The user asks three questions: the first is a direct and explicit question.

Questions two and three are contextually implicit: they reference the context established in the first question in an indirect and slightly ambiguous way.

Question three implies its context even more vaguely.

With LLM-related techniques like few-shot learning and summarisation, conversation memory and context can be managed with relative ease.

Here are a few practical examples…

The current ChatGPT models are: gpt-4, gpt-3.5-turbo, gpt-4-0314 and gpt-3.5-turbo-0301. These models can be accessed via an API call with only a few lines of code.

With only a few lines of code, you can submit a sequence of messages and the model will return a text output.
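As a sketch of what such a call looks like: the message list is a sequence of role/content pairs, and the request itself is commented out below since it needs the openai package and an API key (the question text is an arbitrary example):

```python
def build_messages(history, user_input):
    """Assemble the list of role/content messages the Chat Completions API expects."""
    return history + [{"role": "user", "content": user_input}]

messages = build_messages(
    [{"role": "system", "content": "You are a helpful assistant."}],
    "Who won the 2018 FIFA World Cup?",
)

# The actual request (requires the openai package and OPENAI_API_KEY to be set):
# import openai
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# print(response["choices"][0]["message"]["content"])
```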

The ChatML document submitted must contain conversational history in order to effectively maintain conversational context and manage dialog state (also referred to as memory).

By incorporating prior dialog turns, the model is able to answer contextual questions, and thus has conversational memory.

OpenAI clearly states that the models have no memory of past requests; all relevant information must be supplied as part of the conversation.
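As an illustration, an ambiguous follow-up question only resolves because the earlier turns are resubmitted along with it in the same request (the dialog content here is invented for the example):

```python
# Each request must carry the prior turns; the model itself is stateless.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the 2018 FIFA World Cup?"},
    {"role": "assistant", "content": "France won the 2018 FIFA World Cup."},
]

# "their" is ambiguous on its own; it only resolves because the history above
# is included in the same request.
follow_up = {"role": "user", "content": "Who was their captain?"}
request_messages = history + [follow_up]

# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=request_messages)
```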

It is important to remember that if a conversation is too long for the model’s token limit, it must be shortened. This can be done by having a rolling log of the conversation history where only the most recent dialog turns are submitted.
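One way such a rolling log could be implemented is to always keep the system message and drop the oldest turns until the rest fit the budget. This is an illustrative sketch only: it approximates token counts at roughly four characters per token, whereas a real implementation would use a tokenizer such as tiktoken:

```python
def trim_history(messages, max_tokens=4096):
    """Keep the system message plus as many of the most recent turns as fit."""
    def approx_tokens(msg):
        return max(1, len(msg["content"]) // 4)  # rough heuristic, not a real tokenizer

    system, turns = messages[0], messages[1:]
    budget = max_tokens - approx_tokens(system)
    kept = []
    for msg in reversed(turns):          # walk from most recent to oldest
        cost = approx_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [system] + kept[::-1]         # restore chronological order
```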

But how can conversation memory be managed at scale?

A common approach in no-code Generative App development is to divide programming tasks into separate components. One of those components is Conversation Memory.

For example, below are the three memory components of LangFlow.

  1. A buffer for storing conversation memory.
  2. A conversation summariser that condenses the dialog into memory.
  3. A knowledge graph for storing conversation memory.
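To illustrate what the first and simplest of these components does, here is a hand-rolled stand-in for a buffer memory. This is not LangFlow's actual implementation, just a sketch of the idea: every dialog turn is stored verbatim and replayed as context for the next prompt.

```python
class BufferMemory:
    """Minimal stand-in for a conversation buffer memory component:
    every dialog turn is stored verbatim and replayed as context."""

    def __init__(self):
        self.turns = []

    def save_context(self, user_input, ai_output):
        self.turns.append(("Human", user_input))
        self.turns.append(("AI", ai_output))

    def load_memory(self):
        """Render the full history as a transcript to prepend to the next prompt."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = BufferMemory()
memory.save_context("Who won the 2018 FIFA World Cup?", "France.")
print(memory.load_memory())
```

The summariser and knowledge graph components trade this verbatim replay for a compressed representation, which keeps token usage bounded on long conversations.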

Below is the most basic general chatbot with memory, where the memory component allows it to handle ambiguous questions.

Consider the conversation below, and how contextually sensitive the two follow-up questions are.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language: LLMs, chatbots, voicebots, development frameworks, data-centric latent spaces and more.
