Articles

June 9, 2023 · 3 MIN READ

Agents

Autonomous Agents in the context of Large Language Models

With Large Language Model (LLM) implementations expanding in scope, a number of requirements arise:

  1. The capacity to program LLMs and create reusable prompts.
  2. Seamless incorporation of prompts into larger applications.
  3. Sequencing of LLM interaction chains within larger applications.
  4. Automation of chain-of-thought prompting via autonomous agents.
  5. Scalable prompt pipelines that collect relevant data from various sources.
  6. Constituting a prompt based on user input, and submitting that prompt to an LLM.
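Requirements 1 and 6 above can be sketched without any framework: a reusable prompt template with named placeholders, filled in from user input before being submitted to an LLM. The class name, template text, and variable names below are illustrative assumptions, not any library's API.

```python
# A minimal, framework-free sketch of a reusable prompt template.
# It illustrates programming LLMs with reusable prompts (requirement 1)
# and constituting a prompt from user input (requirement 6).

class PromptTemplate:
    """A prompt with named placeholders that can be reused across requests."""

    def __init__(self, template: str):
        self.template = template

    def format(self, **variables) -> str:
        # Substitute user input into the template before it is
        # submitted to an LLM.
        return self.template.format(**variables)

support_prompt = PromptTemplate(
    "You are a support assistant.\n"
    "Question: {question}\n"
    "Answer concisely."
)

prompt = support_prompt.format(question="How do I reset my password?")
```

The same template can then be reused for every incoming question, which is what makes prompts composable into larger applications.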
“Any sufficiently advanced technology is indistinguishable from magic.”
- Arthur C. Clarke

With LLM-related operations there is an obvious need for automation, which is increasingly taking the form of agents.

Prompt chaining is the execution of a predetermined, fixed sequence of actions.
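The fixed-sequence nature of prompt chaining can be sketched in a few lines. The `call_llm` function below is a stand-in that simply echoes its prompt, so the chain structure stays visible; the step names are illustrative assumptions.

```python
# A minimal sketch of prompt chaining: a fixed sequence of steps where
# each step's output feeds the next. `call_llm` is a stand-in for a
# real LLM call.

def call_llm(prompt: str) -> str:
    return f"LLM({prompt})"

def summarize(text: str) -> str:
    return call_llm(f"Summarize: {text}")

def translate(text: str) -> str:
    return call_llm(f"Translate to French: {text}")

def chain(text: str) -> str:
    # The sequence is predetermined: summarize first, then translate.
    # No decision-making happens at runtime.
    return translate(summarize(text))

result = chain("hello")
```

The contrast with an agent is that the order of these calls is fixed in code, rather than being decided step by step by the LLM.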

Agents do not follow a predetermined sequence of events and can maintain a high level of autonomy.

Agents have access to a set of tools and any request which falls within the ambit of these tools can be addressed by the agent.

The execution pipeline lends the agent its autonomy; a number of iterations might be required before the agent reaches the Final Answer.

Actions which are executed by the agent involve:

  1. Using a tool
  2. Observing its output
  3. Cycling to another tool
  4. Returning output to the user
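The four-step loop above can be sketched without any framework. The tool-selection logic below (a naive keyword match) and the tool names are illustrative assumptions standing in for an LLM's decision, not LangChain internals.

```python
# A framework-free sketch of the agent loop: pick a tool, observe its
# output, cycle to another tool if needed, and return the final answer.

def agent(question: str, tools: dict, max_iterations: int = 5) -> str:
    observation = question
    for _ in range(max_iterations):
        # 1. Decide which tool applies (a stand-in for the LLM's choice:
        #    here, a naive keyword match against tool names).
        tool_name = next((name for name in tools if name in observation), None)
        if tool_name is None:
            # 4. No further tool applies: return the final answer.
            return observation
        # 2. Use the tool and observe its output.
        observation = tools[tool_name](observation)
        # 3. Cycle: the loop repeats with the new observation.
    return observation

tools = {
    "calculate": lambda q: "4",           # stand-in for a calculator tool
    "search": lambda q: "calculate 2+2",  # stand-in for a search tool
}
answer = agent("search for the sum", tools)
```

Note how the number of iterations is not fixed in advance: the loop runs until no tool applies, which is the autonomy the article describes.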
“Men have become the tools of their tools.”

- Henry David Thoreau

The diagram below shows how different action types are accessed and cycled through.

There is an observation, a thought, and eventually a final answer. The diagram shows how another action type might be invoked in cases where the final answer is not reached.

The output snippet below the diagram shows how the agent executes and how the chain is created in an autonomous fashion.

Taking LangChain as a reference, agents involve three concepts:

Tools

As shown earlier in the article, a number of tools can be used; a tool can be seen as a function that performs a specific duty.

Tools include Google Search, database lookups, a Python REPL, and even invocations of existing chains.

Within the LangChain framework, the interface for a tool is a function that is expected to have:

  1. A string as input, and
  2. A string as output.
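That string-in, string-out interface can be illustrated with a small sketch. Wrapping the function with a name and description mirrors how LangChain tools are declared, but the field names and the toy tool below are illustrative assumptions, not LangChain's actual classes.

```python
# A sketch of the tool interface described above: a plain function that
# takes a string and returns a string, wrapped with a name and a
# description so an agent can decide when to use it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]  # string in, string out

def count_characters(query: str) -> str:
    # A toy "Python REPL"-style tool: report the length of the input.
    return str(len(query))

length_tool = Tool(
    name="length",
    description="Returns the number of characters in the input string.",
    func=count_characters,
)

result = length_tool.func("hello agent")
```

Keeping the interface this narrow is what makes tools interchangeable: the agent only ever passes a string in and reads a string out, regardless of what the tool does internally.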

LLM

This is the language model powering the agent. Below is an example of how the LLM is defined within the agent:
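In LangChain-style code the LLM is typically instantiated once and handed to the agent. A real setup would import a model class from LangChain and supply an API key; the stub below stands in for that so the pattern is runnable, and the model settings shown are illustrative assumptions.

```python
# A hedged sketch of how the LLM powering an agent is defined.
# `StubLLM` stands in for a real LangChain model class such as
# `OpenAI(temperature=0)`, which would require an API key.

class StubLLM:
    """Stand-in for a LangChain LLM wrapper."""

    def __init__(self, temperature: float = 0.0):
        # Temperature 0 makes the model's output deterministic, which
        # is usually preferred when the LLM is driving tool selection.
        self.temperature = temperature

    def __call__(self, prompt: str) -> str:
        return f"response to: {prompt}"

llm = StubLLM(temperature=0)
reply = llm("Which tool should I use?")
```

The agent then receives this `llm` object at initialization and calls it at every reasoning step.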

Agent Types

Agents use an LLM to determine which actions to take and in what order. The agent creates a chain-of-thought sequence on the fly by decomposing the user request.

Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. — Source

Agents are effective even in cases where the question is ambiguous and demands a multi-hop approach. This can be considered an automated process of decomposing a complex question or instruction into a chain-of-thought process.

The image below illustrates the decomposition of the question, and how it is answered in a piecemeal, chain-of-thought process:

Below is a list of agent types within the LangChain environment. Read more here for a full description of agent types.

Source

Considering the image below, the only change made to the code was the AgentType. With the exact same configuration otherwise, the change in response is clearly visible.
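The effect of swapping only the agent type can be sketched as follows. The two "types" below are illustrative stand-ins for LangChain's AgentType values (for example, a zero-shot ReAct agent versus a self-ask-with-search agent); the canned responses are assumptions made only to show that the same question is handled differently.

```python
# A framework-free sketch of the point above: with the same question
# and configuration, changing only the agent type changes the response.

def run_agent(agent_type: str, question: str) -> str:
    if agent_type == "zero-shot-react":
        # Reasons directly from tool descriptions in a single pass.
        return f"Thought: pick a tool. Final Answer for '{question}'."
    if agent_type == "self-ask-with-search":
        # Decomposes the question into follow-up questions first.
        return f"Follow up: decompose '{question}'. Final Answer."
    raise ValueError(f"unknown agent type: {agent_type}")

question = "Who is the CEO of the company that makes the iPhone?"
a = run_agent("zero-shot-react", question)
b = run_agent("self-ask-with-search", question)
```

The point is that the agent type selects the reasoning strategy, so the same inputs produce visibly different execution traces.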

For complete working code examples of LangChain Agents, read more here.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
