Today’s interview is with Deon Nicholas, the Co-Founder and CEO of Forethought, an AI company that creates order, removes redundant work, and provides efficiency for businesses everywhere. They are set on changing the way enterprises access, share, and leverage their knowledge. Moreover, not only is Forethought committed to diversity, it’s part of their strategy.
Deon joins me today to talk about unbiased AI: how bias gets into AI, overfitting, how we can rid AI of unconscious biases, what customer service and experience teams should be looking out for and doing more of or better, what Forethought is doing differently to tackle these issues, and how diversity is a central part of their strategy.
This interview follows on from my recent interview – Who are the people that feel the least welcome when they engage with a business? – Interview with Gavin Neate of WelcoMe — and is number 393 in the series of interviews with authors and business leaders that are doing great things, providing valuable insights, helping businesses innovate and delivering great service and experience to both their customers and their employees.
Here are the highlights of my chat with Deon:
- Deon previously appeared on the podcast in Oct 2019.
- Bias happens in human judgment. It’s actually an everyday part of our lives.
- When you think about bias, it’s really another word for the heuristics our brains use.
- AI learns by training on historical data – data sets that were created by humans.
- These patterns might also be a function of who’s creating the data set or other circumstances connected to that idea.
- The challenge with identifying patterns like this is that we look for patterns and correlations and then jump from there to causation.
- The whole goal of a machine learning algorithm is to look at these correlations, figure out where there is causation, and identify which correlations are spurious.
- We need to be constantly ‘weeding’ to find these spurious or erroneous correlations so we don’t jump to harmful or ineffective decisions.
- If we don’t do this, the implications can be really dangerous. For example, the African American man in Michigan who was wrongfully arrested due to a false hit in facial recognition technology, where the algorithms being used labeled him incorrectly.
- An example of how it could play out in the customer service space:
- Imagine if you had AI software that was picking up on the accents of agents by listening in on calls. It could make the wrong judgments if a person with a certain accent happens to have a better CSAT score relative to another person with a different accent. It may then conclude the wrong things and train the agents in the wrong ways.
- It’s really important not to build models that pick up on the wrong correlations, but to focus instead on the factors that actually matter in customer service, e.g. Did you ask these kinds of questions? Did you answer the customer’s questions? These are more powerful signals that can lead to a better gauge of performance.
- The good news about AI systems is that they can (to some degree) be audited, and where systemic bias is found, new data can be added and the models retrained.
- There are organizations, nonprofits and communities and groups within machine learning and AI companies and customer service groups that do this.
- For example, the Algorithmic Justice League (AJL), founded by Joy Buolamwini, aims to shift the AI ecosystem towards equitable and accountable AI, particularly around computer vision.
- At Forethought, we build artificially intelligent agents to help improve the customer experience. But, to do that, you should always start with people.
- One of the biggest things that can lead to sampling bias is a lack of diversity in the dataset creators.
- So, if you’re creating facial recognition technology and, from your vantage point, most of the world looks a certain way, you’re most likely going to end up picking images and data sets that resemble your vantage point.
- The bias in your brain is actually the way the brain makes decisions quickly, so no one person is going to be able to eliminate that bias from the sample set immediately. But, if you take teams of people who all come from different backgrounds, vantage points and perspectives, then you’re far more likely to build unbiased data sets at the data collection layer. This is what Forethought is doing.
- To combat bias in AI:
- 1. Build a diverse team of folks who are actually building the algorithms, the models and the data sets.
- 2. As you’re building these models, continuously audit and test them with new scenarios and questions.
- Everything is an input, even the team that you hire to help build, test, design and implement these models. Everything matters.
- Agatha is Forethought’s virtual assistant and started inside the walls of the organization to enable agents. They have now expanded their platform and are applying their AI across the entire customer journey and, in particular, the journey of a support ticket.
- The name Agatha comes from the character Agatha in the film Minority Report, who Deon believes is named after Agatha Christie.
- Looking forward, we’re going to start to see service coming to the consumer rather than forcing the consumer to come to customer service.
- Deon’s Punk CX word: Nimble and Intelligent
- Deon’s Punk CX brand: Amazon
Deon Nicholas is the Co-Founder and CEO of Forethought. Previously, Deon built products and infrastructure at Facebook, Palantir, Dropbox, and Pure Storage. He has ML publications and infrastructure patents, was a World Finalist at the ACM International Collegiate Programming Contest, and was named to Forbes 30 Under 30. Originally from Canada, Deon enjoys spending time with his wife and children, playing basketball, and reading as many books as he can get his hands on.
Check out Forethought, say Hi to them and Deon on Twitter @forethought_ai and @dojiboy9 and feel free to connect with Deon on LinkedIn here.
Thanks to Don Sniegowski for the image.