Today’s interview is with Ryan McDonald, Chief Scientist at ASAPP. Ryan joins me today to talk about his experience over the last 20 years in the language technology space (NLP, machine learning and LLMs), recent developments in the generative AI space, the challenges that enterprises face in embracing and leveraging this technology, and how ASAPP is advancing AI to augment human activity and address real-world problems for enterprises, particularly in the area of customer care.
This interview follows on from my recent interview – Well-being and the changing nature of management and leadership – Interview with Ray Biggs, Head of Customer Care at John Lewis & Waitrose – and is number 464 in the series of interviews with authors and business leaders that are doing great things, providing valuable insights, helping businesses innovate and delivering great service and experience to both their customers and their employees.
Here are the highlights of my conversation with Ryan:
- A lot of companies are working on generative AI large language models.
- ChatGPT has put an urgency behind them to really get into this rapid release cycle.
- The opportunities are enormous.
- If you had asked me 10 years ago whether we would be at a point where our generative AI models can do all of the sorts of tasks we are now talking about, I wouldn’t have believed it.
- But, we’ve made huge progress in terms of computing power, more access to the training data, and improvements in architectures and training paradigms.
- The big challenge with enterprise is you’ve got to live in each of these verticals. You’ve got to build models for fintech or financial services, you’ve got to build models for telcos, airlines, etc.
- Generative AI is a tool ultimately. This is not a product by itself, and we have to build a bunch of scaffolding around it so that we can get a return on investment when we put it into products and give it to call centers, say.
- For a lot of use cases, the models have to be fast, particularly when you are augmenting agents or augmenting humans. Often they have to respond in the blink of an eye otherwise the recommendations come in too late and you’ve slowed the human down instead of helping them solve a problem.
- Right now the really huge models wouldn’t fit into that dynamic.
- One of the other big challenges is integrations. These models don’t live by themselves, they have to be integrated.
- If they’re going to execute actions on behalf of an agent or autonomously, that kind of interaction has to be done well and done scalably. Getting that right is a tricky thing.
- The next challenge is accuracy. These models are amazingly accurate. But there’s a difference between accuracy that looks great in a prototype and something that you’re going to put in an enterprise environment, where people are going to rely on it being right every time when talking to a customer.
- Even one error in 100 is quite problematic in that sense.
- Think about organizations that live in regulated industries, where a 97 or 98% accuracy rate is just not good enough when it comes to a compliance framework.
- In those cases, there will probably have to be a real-time learning or QA process in place, i.e., putting process around predictions in order to make sure you are being compliant.
- The crux of the problem is really how do you design the user experience in a way that allows the human and the machine to work seamlessly together.
- The nature of the agent is going to change. Reps will become more like conversation experts with the technology there to do the heavy lifting like eliminating pauses when reps are taking notes or reading articles from a database to understand what to suggest to the customer or what step to take next.
- The more pauses there are, the more distracted the agent is, and the lower customer satisfaction will be.
- The agent’s role is already evolving into the master multitasker.
- There’ll probably be a switch over time, from the AI helping the agent be a multitasker to the agent helping the AI verify things.
- Even though voice is an important channel, and we can do a lot in voice even with large language models as they are today, the natural place to get real value from them is in digital.
- So I think in about two or three years, we will get to the space where for a huge number of companies the majority of their interactions will be on digital channels. Probably not 100%.
- Some of the barriers to achieving this aren’t technological. They are organizational and based on investments that companies have made in these technical stacks that are very heterogeneous.
- We’ve made so much progress in generative AI and it’s so much more powerful but it’s all going to be for nought if we’re not thinking about the integration side of the story as well.
- We’ve built tools for assisting agents in live environments:
- Auto-summary, which is targeted at disposition when an agent finishes a call. Often they have to do a bunch of tasks to close out that call. This sometimes means writing a little note of what happened in the conversation and maybe filling out some structured data etc. Our solution for that is entirely powered by generative AI. It helps agents with disposition tasks, reduces time spent on writing notes and improves customer satisfaction.
- Auto-compose: suggests responses for agents, reduces crafting time and can power around 80% of agent responses for most clients. Moreover, if you consider that it saves agents on average 12 to 20 seconds depending on the type of response, you get a lot of savings, particularly with digital interactions where an agent might be handling multiple conversations at once.
- One of the great things about generative AI architectures is that they’re almost natively multimodal, blurring the boundary between digital and voice communication.
- One of the biggest things we’ve realised over the last few years is if you focus on business outcomes that will keep you going.
- People buy outcomes and they renew contracts on outcomes.
- Sitting down and watching agents to understand where the pain points in their workflow are, and connecting those dots, is hard, but you have to do it to get to the business outcomes.
- Ryan’s Punk CX word: Elegant
- Ryan’s Punk XL brand: The UK Government Digital Service
Ryan McDonald is the Chief Scientist at ASAPP. He is responsible for setting the direction of the research and data science groups in order to achieve ASAPP’s vision of transforming the CX space through the advancement of AI. Ryan has been working on language understanding and machine learning for over 20 years. His PhD work at the University of Pennsylvania focused on novel machine learning methods for structured prediction in NLP, most notably information extraction and syntactic analysis. At Penn, his research was instrumental in growing the fields of dependency parsing and domain adaptation in the NLP community. After his PhD, Ryan joined Google’s Research group, where he spent 15 years. At Google, Ryan led numerous efforts to push NLP and ML models into production for Google Translate, Assistant, Search and Cloud. He has published over 100 papers that have been cited more than 20,000 times.
Check out ASAPP, say Hi to them on Twitter @asapp and feel free to connect with Ryan on LinkedIn here.
Photo by Jukan Tateisi on Unsplash