Get deeper insights from unstructured customer data with generative AI.
Generative AI has introduced a new way to interact with unstructured data. LLMs have advanced contextual understanding and can perform basic reasoning across vast amounts of information. This reduces the need for heavy preprocessing and makes large-scale qualitative analysis manageable, fundamentally shifting the way non-technical teams can work with customer data.
The process of directing the LLM to extract a specific insight, or perform a specific transformation, is called “prompt engineering.” With a well-structured prompt, teams can pull targeted insights out of raw data, whether they’re trying to make sense of a handful of onboarding calls or analyze sentiment across thousands of support conversations. Applying the prompt at scale is where the results become valuable.
In this article, we’re sharing two field-tested prompts we’ve developed for leading CX teams to help them work with product reviews, support call transcripts, conversation data, and more.
For Reviews | Summarizing Key Issues
Summarization is a great starting point for LLMs. This summarization prompt is designed to extract key points from product or service reviews across sources like Trustpilot, G2, Capterra, Yelp, Google, or other platforms. The prompt gives the LLM clear instructions on what to summarize and how to format the results.
To test this prompt, you can copy and paste it into the LLM interface of your choice (like ChatGPT) and insert a review in place of the parameter below.
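As an illustration (a sketch in this style, not the exact field-tested prompt), a review-summarization prompt might look like the following, where the {review_text} placeholder is the parameter you'd replace:

```
You are a customer experience analyst. Summarize the product review below.

Instructions:
- List the key issues the customer raises, one bullet per issue.
- Note any specific features or services mentioned.
- State the overall tone in one word: positive, negative, or mixed.
- Keep the summary under 100 words.

Review:
{review_text}
```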
A simple summarization prompt like this can save hours of manual analysis, so CX leaders and product teams can find and solve priority problems. You can also adapt the prompt to analyze contact reasons across hundreds of support calls, speeding up topic-modeling projects and showing where customer support agents spend their time; one possible variant is sketched below.
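For example, a contact-reason variant might look like this (again a sketch, with an illustrative {transcript} placeholder):

```
You are analyzing a customer support call transcript.

Instructions:
- Identify the primary reason the customer contacted support.
- Classify it with a short topic label (2-4 words).
- Note whether the issue was resolved on the call: yes, no, or unclear.

Transcript:
{transcript}
```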
For Support Calls | Analyzing Customer Sentiment
Beyond the feedback provided in product or service reviews, CX teams want to understand their customers’ experience across different touch points like support interactions. Sentiment is a key metric to measure and manage because of its outsized impact on customer retention.
With a carefully tuned prompt, teams can go beyond simple ‘positive’ and ‘negative’ labels and ask the LLM to list the specific good and bad events across the conversation, scoring the conversation accordingly; a sketch of such a prompt follows.
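As an illustration (not the exact prompt behind the results shown below), a sentiment-scoring prompt might read:

```
You are scoring the customer sentiment of a support conversation.

Instructions:
- List each positive event (e.g. quick resolution, agent empathy).
- List each negative event (e.g. long hold time, repeated transfers).
- Assign an overall score from -5 (very negative) to +5 (very positive).
- Return JSON with keys "positive_events", "negative_events", and "score".

Conversation:
{conversation_text}
```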
Here’s how the scored conversations look in HumanFirst.
When you run this prompt at scale across a large dataset of conversations, important trends begin to emerge. With the right prompt and data environment, you can map the total score to the source conversation, and isolate the low-scoring conversations to see which negative events occur most frequently. Similarly, you can extract positive events contributing to high-scoring conversations, and augment your agent training accordingly.
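As an illustration of what running this at scale can look like outside a purpose-built environment, here is a minimal Python sketch that applies a scoring prompt of this shape to a list of conversations via the OpenAI API and tallies the negative events in low-scoring ones. The prompt wording, model name, and JSON keys are assumptions for this example, not the article's own prompt:

```python
import json
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative prompt template; swap in your own scoring prompt.
PROMPT = """Score the customer sentiment of the conversation below.
Return JSON with keys "positive_events" (list of strings),
"negative_events" (list of strings), and "score" (integer from -5 to 5).

Conversation:
{conversation}"""


def score_conversation(conversation: str) -> dict:
    """Run the scoring prompt on one conversation and parse the JSON reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whichever your team has access to
        messages=[{"role": "user", "content": PROMPT.format(conversation=conversation)}],
        response_format={"type": "json_object"},  # request strict JSON output
    )
    return json.loads(response.choices[0].message.content)


def analyze(conversations: list[str], low_score_threshold: int = -2) -> Counter:
    """Score every conversation and tally negative events in low scorers."""
    negative_events = Counter()
    for conversation in conversations:
        result = score_conversation(conversation)
        if result["score"] <= low_score_threshold:
            negative_events.update(result["negative_events"])
    return negative_events


if __name__ == "__main__":
    transcripts = ["Agent: Hello... Customer: I've been on hold for an hour..."]
    for event, count in analyze(transcripts).most_common(10):
        print(f"{count:>4}  {event}")
```

The same pattern extends to the positive side: filter for high-scoring conversations instead and tally "positive_events" to find the behaviors worth reinforcing in agent training.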
We’re sharing 4 more ready-to-use prompts in next week’s webinar. Register to attend the session and receive a take-home guide with all 6!