Articles

Iterative Prompting Pre-Trained Large Language Models

August 21, 2023 · 3 min read


COBUS GREYLING

Prompt engineering is a lightweight alternative to model fine-tuning: instead of updating the model's weights, the prompt is made to hold the necessary contextual information via an iterative process.

As I always say, the complexity of any implementation needs to be accommodated somewhere.

In the case of LLMs, that complexity can be absorbed either by fine-tuning the model, or by advanced prompting, for example in an autonomous agent or via prompt chaining.

In the latter two cases, the complexity is absorbed by prompt engineering rather than by LLM fine-tuning.

The objective of contextual iterative prompting is to absorb the complexity demanded by a specific implementation in the prompts themselves, rather than offloading it to model fine-tuning.

Contextual iterative prompting is not a new approach; what this paper considers is the creation and automation of an iterative Context-Aware Prompter.

At each step (each dialog turn) the Prompter learns to process the query and previously gathered evidence, and composes a prompt which steers the LLM to recall the next piece of knowledge.
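The per-turn loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual Prompter: `compose_prompt` and `stub_llm` are hypothetical stand-ins, and a real implementation would call a trained Prompter model and an actual LLM.

```python
def compose_prompt(query: str, evidence: list[str]) -> str:
    """The Prompter's role: fold the query and previously gathered
    evidence into a prompt that steers the LLM to the next fact."""
    lines = [f"Question: {query}"]
    lines += [f"Known: {fact}" for fact in evidence]
    lines.append("Recall the next relevant fact:")
    return "\n".join(lines)


def iterative_prompting(query: str, llm, max_steps: int = 5) -> list[str]:
    """Run the dialog-turn loop: prompt, collect evidence, re-prompt."""
    evidence: list[str] = []
    for _ in range(max_steps):
        fact = llm(compose_prompt(query, evidence))
        if fact == "DONE":  # the model signals it has nothing left to add
            break
        evidence.append(fact)
    return evidence


def stub_llm(prompt: str) -> str:
    """Deterministic stand-in for an LLM, for demonstration only."""
    facts = ["C1: The Eiffel Tower is in Paris.",
             "C2: Paris is the capital of France."]
    step = prompt.count("Known:")  # evidence already present in the prompt
    return facts[step] if step < len(facts) else "DONE"
```

Each pass through the loop corresponds to one dialog turn: the composed prompt grows with the gathered evidence (C1, C2, …) until the chain needed to answer the query is complete.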

This process is reminiscent of soft prompts and prompt tuning.

The paper claims that its proposed Context-Aware Prompter design outperforms existing prompting methods by notable margins.

The approach shepherds the LLM to recall a series of stored knowledge (e.g., C1 and C2) that is required for the multi-step inference (e.g., answering Q), analogous to how humans develop a “chain of thought” for complex decision making.


The automated process establishes a contextual chain of thought and curbs the generation of irrelevant facts and hallucinations, since prompts are dynamically synthesised from the context of the current step.

The paper confirms the now widely accepted principles of prompting, which include:

  1. Prompts need to be contextual, including previous conversation context and dialog turns.
  2. Some kind of automation has to be implemented to collate, and in some instances summarise, previous dialog turns for inclusion in the prompt.
  3. Supplementary data, acting as a contextual reference for the LLM, needs to be selected, curated and truncated to an efficient length for each prompt at inference.
  4. The prompt needs to be formed in such a way that it does not unduly increase inference time.
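
Points 2 and 3 above amount to a budget problem: recent dialog turns and supplementary data must fit into a bounded prompt. A minimal sketch, with a hypothetical `build_prompt` helper and an arbitrary character budget:

```python
def build_prompt(query: str, turns: list[str], context_docs: list[str],
                 max_chars: int = 1000) -> str:
    """Collate recent dialog turns and truncate supplementary context
    so the assembled prompt stays within a character budget."""
    # Keep the most recent turns, spending at most half the budget.
    history: list[str] = []
    used = 0
    for turn in reversed(turns):
        if used + len(turn) > max_chars // 2:
            break
        history.append(turn)
        used += len(turn)
    history.reverse()  # restore chronological order
    # Truncate the supplementary documents to the remaining budget.
    context = " ".join(context_docs)[: max_chars - used]
    return "\n".join(["Context: " + context]
                     + ["Turn: " + t for t in history]
                     + ["User: " + query])
```

In practice the budget would be measured in tokens rather than characters, and summarisation (point 2) would replace blunt truncation for longer histories.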

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
