Implementing Trustworthy Artificial Intelligence

Simon Adams

AI – a broad-brush term that means different things to different people. It can cover anything from shouting ‘Hey Siri’ into your phone, to analysing real-time data and making decisions based on an ever-improving knowledge base. But when you think about it, those two examples could be exactly the same thing, as the latter is exactly what Apple will do after you say the former.

Regardless of how you define AI, we all know ‘it’ is advancing, and ‘it’ has opened up many questions. Our focus today is on the concerns customers may have, which of course are a concern to the businesses that serve them. Again, because AI is a broad subject, I’m going to focus predominantly on conversational platforms and chatbots. This is because they can be a fantastic starting point for the implementation of AI and intelligent automation in terms of cost and deployment – and they have clear use cases.


Transparency is key

First off, it’s worth pointing out that your business probably doesn’t need to pass the Turing test. If you’re using bots, there are arguments that maintaining an illusion of human interaction improves the customer experience, but this approach isn’t compatible with trust. Whilst the technology is always advancing, it’s impossible to rule out a bot getting stuck on something it doesn’t understand, breaking the illusion regardless.

If a customer knows they’re speaking to a bot, and it is clear what that bot is there to do, the customer knows what to expect. At the end of the day, bots can provide answers to some questions much faster than a human agent who has to type out the answer, so long as the bot can understand the intent, or goal, of the customer.

When designing bots, defining their purpose is more than just good software-development practice. A clear, specific purpose means users’ intents are resolved more accurately, which means your data is more accurate. Whilst you can run as many bots as your (or your cloud’s) infrastructure can handle, narrower purposes also reduce development time and allow an organisation to ‘dip a toe’ into the world of AI.
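To make the point concrete, here is a minimal sketch of why a narrow purpose helps intent resolution. The intent names and keywords are hypothetical examples, not a real framework – a bot scoped to one job (here, order tracking) only needs a few well-separated intents, so matches are rarely ambiguous:

```python
import re

# Hypothetical intents for a bot with one clear purpose: order enquiries.
# Because the scope is narrow, the keyword sets barely overlap.
INTENTS = {
    "track_order":    {"where", "order", "tracking", "delivery"},
    "cancel_order":   {"cancel", "stop", "order"},
    "speak_to_human": {"human", "agent", "person"},
}

def score(utterance, keywords):
    """Count how many of an intent's keywords appear in the utterance."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    return len(words & keywords)

def resolve(utterance):
    """Return the best-matching intent, or None if nothing matches."""
    best = max(INTENTS, key=lambda name: score(utterance, INTENTS[name]))
    return best if score(utterance, INTENTS[best]) > 0 else None
```

A real deployment would use a proper NLU model rather than keyword counting, but the principle is the same: the fewer, more distinct the purposes, the cleaner the resolution – and `resolve("good morning")` returning `None` is exactly the honest ‘I didn’t understand’ moment discussed above.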

Conversational data, even if there’s only one human!

After determining what the technology is going to do for the customer, next ask what it is going to do with their data. In the post-GDPR world, you need a lawful basis, such as a legitimate interest, for processing data, or you need to obtain the subject’s consent. Again using the example of a chatbot, you could have the bot obtain that consent itself.

By the way, so that we can continue to help you in the future, I will keep a copy of this conversation. That way, my master can help you if I’m not able to. Is that okay? Click here to read more about our privacy policy.


You can even allow the conversation to continue without being recorded. Thinking a bit more innovatively, the bot can even begin to handle deletion requests on your behalf, or at least start the process if that’s what you prefer.

If you want to use some of the information in the conversation for something else, you need to call it out explicitly. But as you can see, a bot can be 100% consistent in the privacy messages it provides, and capture the outcome of obtaining consent just as consistently, then act accordingly.
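One way to picture that consistency: the consent prompt, the captured outcome, and the decision to store the transcript can all live in one place in the bot’s logic. The sketch below is illustrative only – the class and field names are assumptions, not a real chatbot framework:

```python
import datetime

# An illustrative consent prompt, echoing the example above.
CONSENT_PROMPT = (
    "So that we can continue to help you in the future, I will keep a "
    "copy of this conversation. Is that okay?"
)

class ConsentAwareSession:
    """Persists a chat transcript only if the user has consented."""

    def __init__(self):
        self.consented = None   # None = not yet asked
        self.transcript = []

    def ask_consent(self):
        # The bot delivers the identical privacy message every time.
        return CONSENT_PROMPT

    def record_consent(self, granted):
        # Capture the outcome itself, with a timestamp, so the
        # organisation can evidence when consent was (or wasn't) given.
        self.consented = granted
        self.consent_logged_at = datetime.datetime.now(datetime.timezone.utc)

    def add_message(self, speaker, text):
        # The conversation continues either way; it is only persisted
        # when consent was explicitly granted.
        if self.consented:
            self.transcript.append((speaker, text))

    def handle_deletion_request(self):
        # The bot can also begin a deletion request on the user's behalf.
        self.transcript.clear()
        self.consented = False
```

Because the prompt is a constant and the storage decision is a single branch, the bot cannot ‘forget’ to ask or drift from the approved wording – the consistency a human agent can only approximate.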

If you’re looking to deploy AI elsewhere in the business, for example to run data analysis on your customers’ calls, you will have to ensure they know what you’re going to do with the call. In situations like this, it may be worth introducing this into your IVR, or providing scripts and training to your call-handling staff. Depending on what then happens with these outputs, an organisation may need to be familiar with what the GDPR says about automated decision-making and profiling.


Data driven trust

The GDPR hasn’t ‘killed AI’; you just have to apply the technology and the law appropriately, which is what delivers the best outcomes for your customers anyway.

Trust is a fickle and complicated concept. To conclude, it’s worth referring to figures from the ICO: 88% of UK data breaches are caused by human error. Yet we still trust humans. Perhaps we can all trust properly designed, appropriately used AI?

Next time we will explore how organisations can trust AI within the business to realise genuine benefits.


More about the author

Simon Adams, Operations Director

Simon is responsible for the day to day running of the consultancy practice. Simon brings consultancy experience in leading the prioritisation and management of large change portfolios across IT, business and third-party suppliers. Simon is an excellent communicator, often involved in working with the executive teams, but is equally comfortable driving engagement at all levels. Simon’s passion at work is in driving change and the adoption of digital culture, tools, and ways of working. Having advised and led pre-sales due diligence and post-M&A integration, he brings first-hand experience of successfully creating a culture of high performance and engagement that is progressive in the adopted ways of working.

Contact an expert

Get in touch directly with a consultant – we’d love to discuss how we can help you achieve your project goals.
