According to statistics, approximately 2.9 billion identity records have already been exposed in 2019, including 774 million unique email addresses and 21 million unique...
Regardless of how you define AI, we all know ‘it’ is advancing, and ‘it’ has opened up many questions. Our focus today is on the concerns customers may have, which of course are a concern to the businesses that serve them. Again, because AI is a broad subject, I’m going to predominantly focus on conversational platforms and chatbots. This is because they can be a fantastic starting point for the implementation of AI and intelligent automation in terms of cost and deployment – and they have clear use-cases.
First off, it’s worth pointing out that your business probably doesn’t need to pass the Turing test. If you’re using bots, there are arguments that maintaining an illusion of human interaction improves the customer experience, but this approach isn’t compatible with trust. Whilst the technology is always advancing, it’s impossible to rule out a bot getting stuck and not understanding what someone meant, breaking the illusion regardless.
If a customer knows they’re speaking to a bot, and it is clear what that bot is there to do, the customer knows what to expect. At the end of the day, bots can provide answers to some questions much faster than a human agent who has to type out the answer, so long as the bot can understand the intent, or goal of the customer.
When designing bots, defining their purpose is more than just good software development practice. A clear, specific purpose means that users’ intents are resolved more accurately, which in turn means your data is more accurate. You can have as many bots as your (or your cloud’s) infrastructure can handle, and narrower purposes also improve intent resolution, reduce development time and allow an organisation to ‘dip their toe’ into the world of AI.
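To illustrate why a narrow purpose helps intent resolution, here is a minimal sketch of keyword-based intent matching. The intent names and keywords are hypothetical, and real deployments would typically use a trained NLU service rather than keyword counting – the point is simply that a small, well-defined set of intents leaves less room for ambiguity.

```python
# A minimal sketch of intent resolution for a narrowly scoped bot.
# All intent names and keywords below are illustrative only.

INTENTS = {
    "check_balance": {"balance", "how much", "funds"},
    "opening_hours": {"open", "hours", "closing"},
}

def resolve_intent(utterance: str) -> str:
    """Return the intent whose keywords best match the utterance."""
    text = utterance.lower()
    best, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = sum(1 for kw in keywords if kw in text)
        if score > best_score:
            best, best_score = intent, score
    return best

print(resolve_intent("What time are you open?"))  # opening_hours
print(resolve_intent("Tell me a joke"))           # unknown
```

With only two intents, an unmatched question falls cleanly into "unknown", where the bot can hand off to a human rather than guess.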
After determining what the technology is going to do for the customer, next ask what it is going to do with their data. In the post-GDPR world, you need a lawful basis for processing data – such as legitimate interest – or you need to obtain the subject’s consent. Again using the example of a chatbot, you could have the bot obtain its own consent.
You can even allow the conversation to continue without being recorded. Thinking a little more innovatively, the bot can even begin to handle deletion requests on your behalf – or at least start the process, if that’s what you prefer.
If you want to use some of the information in the conversation for something else, you need to call it out explicitly. But as you can see, a bot can be 100% consistent in the privacy messages it provides, and capture the outcome of obtaining consent just as consistently, then act accordingly.
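The consent flow described above can be sketched in a few lines. This is an assumption-laden illustration, not a reference implementation: the session object, prompt wording and function names are all hypothetical. What it shows is the point made in the text – the bot delivers the same privacy message every time, captures the outcome consistently, and only records the conversation when consent was given.

```python
# A sketch of a consent step in a chatbot flow. The Session class,
# prompt text and function names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Session:
    consented: bool = False
    transcript: list = field(default_factory=list)

CONSENT_PROMPT = (
    "I'd like to record this chat to improve our service. "
    "Reply YES to consent, or NO to continue without recording."
)

def handle_consent(session: Session, reply: str) -> str:
    """Capture the consent outcome consistently and act on it."""
    session.consented = reply.strip().lower() == "yes"
    if session.consented:
        return "Thanks - recording is on."
    return "No problem - this chat will not be recorded."

def log_message(session: Session, message: str) -> None:
    # Only persist the conversation if consent was given.
    if session.consented:
        session.transcript.append(message)

s = Session()
print(CONSENT_PROMPT)
print(handle_consent(s, "no"))
log_message(s, "What are your opening hours?")
print(len(s.transcript))  # 0 - nothing is stored without consent
```

Because the consent outcome lives on the session, any further use of the conversation data can branch on that single, consistently captured flag.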
If you’re looking to deploy AI elsewhere in the business, for example to run data analysis on your customers’ calls, you will have to ensure they know what you’re going to do with the call. In situations like this, it may be worth introducing this into your IVR, or providing scripts and training to your call-handling staff. Depending on what then happens with these outputs, an organisation may need to be familiar with what the GDPR says about automated decision making and profiling.
The GDPR hasn’t ‘killed AI’, you just have to apply the technology and the law appropriately, which is what provides the best outcomes for your customers anyway.
Trust is a fickle, complicated concept. To conclude, it’s worth referring to figures from the ICO: 88% of UK data breaches are caused by human error. Most breaches come down to human error, yet we still trust humans. Perhaps we can all trust properly designed, appropriately used AI?
Next time we will explore how organisations can trust AI with their business and realise genuine benefits.