Tuesday May 7th, 2024

Next Generation Customer Care Call Handling

Automatically enrich the first prompt for conversational AI, using demographics derived from voice to improve virtual agent outcomes

Context

The adoption of conversational AI and virtual agents is growing fast, fueled by Large Language Models (LLMs). Today, verbal input from the caller is converted to text using speech-to-text (STT) and then used as a prompt that informs the LLM's response. The result is used either to provide real-time, dynamic script assistance for human agents or to operate real-time virtual agents that expand the scope of self-serve interactions, deflecting more calls from human agents.

 

Challenge

While these high-tech services hold great potential, they will largely deliver underwhelming, 'general' responses because the prompts that drive them are uninformed about 'who' they are talking to.

 


Solution

Using a straightforward API integration, VoxEQ's Prompt service can shorten conversational AI-powered calls by up to 90 seconds, guiding the LLM onto the proper customer path by feeding it caller demographics derived from only a few seconds of voice. Obtaining caller demographics in a friction-free, privacy-preserving manner improves the relevance of responses and the speed of resolution for both agents and customers.
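The exact API surface is not documented here; as a rough illustration only, a minimal sketch of this kind of integration might look like the following, where the endpoint URL, authentication scheme, field names, and helper functions are hypothetical placeholders rather than VoxEQ's actual API.

    # Minimal sketch (hypothetical): enrich the first LLM prompt with
    # voice-derived demographics before it reaches the virtual agent.
    # The endpoint, auth, and field names below are illustrative only.
    import requests

    VOXEQ_URL = "https://api.voxeq.example/v1/demographics"  # placeholder URL
    API_KEY = "YOUR_VOXEQ_API_KEY"                            # placeholder key

    def get_caller_demographics(audio_snippet: bytes) -> dict:
        """Send a few seconds of caller audio, return demographic hints."""
        response = requests.post(
            VOXEQ_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": ("caller.wav", audio_snippet, "audio/wav")},
            timeout=5,
        )
        response.raise_for_status()
        # Example shape: {"birth_sex": "female", "generation": "Millennial"}
        return response.json()

    def build_first_prompt(transcript: str, demographics: dict) -> str:
        """Prepend a short demographic statement to the caller's first utterance."""
        context = (
            f"I am a {demographics.get('birth_sex', 'unknown')} "
            f"{demographics.get('generation', 'caller')}. "
        )
        return context + transcript

    # Usage (illustrative): capture the first few seconds of call audio,
    # fetch demographics, and pass the enriched text to the LLM as the
    # very first user message.
    # demographics = get_caller_demographics(first_seconds_of_audio)
    # prompt = build_first_prompt("I need help with my bill.", demographics)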

AI at Scale

AI-powered customer segmentation tools, tailored for voice demographics, remove the guesswork about what drives sales and successful outcomes.

Better LLMs

Get more value from an enriched prompt - appending, for example, "I am a female Millennial," to better match script, offer, and agent/bot style and vastly improve first-call outcomes.

Faster Results

Start the conversation at a higher quality level and speed up insights - up to 90 seconds faster - with fewer prompt-response exchanges.

Having context about the caller improves the performance of conversational AI. Studies show 90% of communication is non-verbal, and VoxEQ has the unique ability to uncover some of this information. In our scenario tests, the LLM response is vastly improved when the first prompt includes demographic information, such as birth sex and age. Because VoxEQ doesn't need any voice enrollment, it improves the very first interaction with a customer. VoxEQ, via an API, automatically provides an engineered prompt that enables the LLM to deliver richer, more meaningful, and more accurate responses.

Jack Caven, Chief Executive Officer, VoxEQ