Articles
May 26, 2023 · 4 MIN READ

Intent Creation & Extraction With Large Language Models



COBUS GREYLING

In previous articles, I argued that a data-centric approach should be taken when engineering training data for Natural Language Understanding (NLU). Building on that, this article discusses the importance of creating and using intents when working with Large Language Models (LLMs).

Introduction

As with AI in general, NLU models demand a data-centric approach to NLU Design: improving NLU performance requires shifting the focus from the model to the training data.

NLU Design best practice needs to be adhered to, whereby existing unstructured conversational data is converted into structured NLU training data.

NLU Design should ideally not make use of synthetic or generated data but actual customer conversations.

The image below shows the process that can be followed for a data-centric approach to intent detection, creation, and use, starting with embeddings.

Embeddings

The first step is to use conversational or user-utterance data for creating embeddings, essentially clusters of semantically similar sentences.

Each of these groupings constitutes an intent, and each group needs to be given a label: the intent name.
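The grouping idea behind this step can be sketched in a few lines. The toy 2-D vectors below stand in for real sentence embeddings (an assumption; in practice they would come from an embedding model), and the greedy threshold clustering is an illustration of the concept, not HumanFirst's actual method:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cluster(embeddings, threshold=0.9):
    # Greedy grouping: attach each vector to the first cluster whose seed
    # is similar enough, otherwise start a new cluster.
    clusters = []  # list of (seed_vector, member_indices)
    for i, vec in enumerate(embeddings):
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

# Toy vectors: indices 0-1 are "track my order"-like utterances,
# indices 2-3 are "return my order"-like utterances.
toy = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
groups = cluster(toy)
```

Each resulting group is a candidate intent awaiting a human-readable name.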

A detailed description of step one is given in a separate article.

Create Classifications

Once we have the groupings/clusters of training data, we can start the process of creating classifications or intents. The terms “classes” and “intents” will be used interchangeably.

For the purposes of this article I’ll be making use of the Cohere LLM.

The classification can be done via the Cohere classify POST endpoint:

https://api.cohere.ai/classify

Each body of text is classified into one of several classes/intents. The endpoint needs only a few examples per class to create a classifier leveraging a generative model.

The Colab notebook snippet below shows how to install the Cohere SDK, and how to create a client. You will need an API key which you can get for free by creating a login on the Cohere website.
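The setup snippet can be sketched as follows (a sketch, not the notebook verbatim). It assumes you have run pip install cohere and exported your key as the hypothetical COHERE_API_KEY environment variable:

```python
import os

def make_client(api_key):
    # Lazy import so the sketch parses even before `pip install cohere`.
    import cohere
    return cohere.Client(api_key)

# Only create a real client when a key is actually configured.
api_key = os.environ.get("COHERE_API_KEY")  # or paste your key directly
co = make_client(api_key) if api_key else None
```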

For reasons of optimisation and speed, it is best to make use of a small model.

Once you have installed the SDK and created your client, run this code to create the intents, each with its own training examples:

from cohere.classify import Example

You will see the training utterances are all labelled using one of three classes or intents:

Shipping and handling policy

Start return or exchange

Track order
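The shape of that labelled training data can be sketched as below. The utterances are illustrative, not the notebook's actual examples; in the Cohere SDK each pair is wrapped as Example(text, label):

```python
# Illustrative labelled utterances for the three intents named above
# (assumed examples, not the notebook's exact data).
training_examples = [
    ("Do you offer same-day shipping?",        "Shipping and handling policy"),
    ("Can you ship my order to Italy?",        "Shipping and handling policy"),
    ("I want to exchange these for a size up", "Start return or exchange"),
    ("How do I return my order?",              "Start return or exchange"),
    ("Where is my package right now?",         "Track order"),
    ("When will my order arrive?",             "Track order"),
]

# The distinct labels are the intent names the classifier will predict.
intent_names = sorted({label for _, label in training_examples})
```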

Extract Classification

The text below shows the queries, analogous to user utterances submitted to the conversational agent:

inputs = ["Am I still able to return my order?",
          "When can I expect my package?",
          "Do you ship overseas?"]

As seen in the image:

(1) the input or query utterances are submitted,
(2) the classifications are extracted, and
(3) the results are printed.
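The three steps above can be sketched as follows. The response handling assumes the classify response shape of the 2023-era Cohere SDK (a list of classifications carrying input, prediction, and confidence), and the print format is hypothetical; the API call only runs when a key is configured:

```python
import os

def format_prediction(text, prediction, confidence):
    # Hypothetical output format; the notebook's exact output may differ.
    return f"Input: {text!r} -> Intent: {prediction} ({confidence:.2f})"

def print_classifications(response):
    # (2) extract the classifications and (3) print the results
    for c in response.classifications:
        print(format_prediction(c.input, c.prediction, c.confidence))

api_key = os.environ.get("COHERE_API_KEY")
if api_key:
    import cohere
    from cohere.classify import Example
    co = cohere.Client(api_key)
    # (1) submit the input/query utterances
    response = co.classify(
        model="small",
        inputs=["Am I still able to return my order?",
                "When can I expect my package?",
                "Do you ship overseas?"],
        examples=[Example("Where is my package?", "Track order"),
                  Example("When will my order arrive?", "Track order"),
                  Example("How do I return my order?", "Start return or exchange"),
                  Example("I want to exchange these", "Start return or exchange"),
                  Example("Can you ship to Italy?", "Shipping and handling policy"),
                  Example("Do you offer same-day shipping?", "Shipping and handling policy")],
    )
    print_classifications(response)
```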

A portion from the Colab notebook showing steps 1, 2 and 3

The results, printed for each input, show the predicted intent together with a confidence score.

Conclusion

Intents are often neglected and seen as an insignificant step in the creation of a conversational agent. Frameworks like Amelia, Oracle Digital Assistant and Yellow AI offer synthetically generated training phrases. This approach runs the risk of trivialising the intent-creation process.

Synthetic training data can suffice as a bootstrap measure, but it will not serve well in creating a sustainable, longer-term solution. NLU Design and data best practice should be adhered to from the outset.

Moreover, these synthetic training phrases are based on intents and intent names that are often simply thought up, and are most probably not aligned with existing user intents. They certainly do not address the long tail of the intent distribution.

It is time we start considering intents as classifications of existing customer conversations; a process of intent-driven development is required for successful digital assistant deployments.

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language, including NLU design, evaluation & optimisation, data-centric prompt tuning, and LLM observability, evaluation & fine-tuning.
