How to build trust with Trusts on artificial intelligence


Dr Venkat Reddy, Consultant Neurodevelopmental Paediatrician, Senior Clinical Adviser and AI Lead at Future Perfect


Speaking as a clinician myself, I believe there is, in general, a lack of trust among clinicians when it comes to the use of AI.

Aside from the few clinicians with an interest in clinical informatics and digital health, views are still largely shaped by newspaper headlines about killer robots. Recent events have added to the concern over the use of algorithms, not to mention the negative press about the use, or misuse, of AI by social media giants to gather information and ‘snoop on people’.

There is also still a prevailing belief that AI is going to take away people’s jobs. This applies especially to admin staff, with the introduction of robotic process automation (RPA) that can automatically scan documents, book appointments and the like. Clinical staff are concerned too, particularly in radiology and pathology, where AI is already doing a chunk of the work.

The final aspect of this mistrust is concern over the safety and fairness of AI as a tool.

With that rocky foundation, how do you convince clinicians that they can benefit from AI-based tools? 


Starting with the basics of AI

The first thing that must be developed is a clear way to explain the fundamentals of AI as a tool. It may also help to start using different terms – ‘augmented intelligence’, for example – to drive home the fact that it really is simply a tool. We use laptops with tools like Microsoft Teams and digital dictation. In the same way, AI is a tool, there to be used for a specific purpose. Clinicians are still in control, and it’s vital to make that clear.

Also important is demystifying how algorithms are used in AI, and for what purpose. NHSX is currently working towards this by addressing concerns about bias, and these initiatives are really useful. It also helps to point out that many clinicians are already using AI outside of work, whether through Alexa or through recommended products on sites like Amazon. There is nothing to fear from bringing it into a clinical setting, with the right tools in place to make sure the solution doesn’t exacerbate inequalities.
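To make that demystification concrete, the ‘recommended for you’ logic people already meet on shopping sites is, at its core, counting and sorting. The toy sketch below is purely illustrative – the basket data and item names are invented, and nothing here describes any real retailer’s or NHS system’s method – but it shows that the ‘algorithm’ is not magic.

```python
from collections import Counter
from itertools import combinations

# Invented example baskets - stand-ins for "people who bought X also bought Y" data.
baskets = [
    ["stethoscope", "notebook"],
    ["stethoscope", "pen torch"],
    ["notebook", "pen torch", "stethoscope"],
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        pair_counts[(a, b)] += 1

def recommend(item: str) -> list[str]:
    """Items most often seen alongside the given one, most frequent first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if item == a:
            scores[b] += n
        elif item == b:
            scores[a] += n
    return [other for other, _ in scores.most_common()]

print(recommend("stethoscope"))  # e.g. ['notebook', 'pen torch']
```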


From assistive AI to the fully autonomous 

An idea was put forward in the Lancet recently: start slowly, implementing assistive AI first and moving through to fully autonomous AI on a gradual basis. I would agree with this approach. The consequences of getting things wrong with AI on something like social media are not usually life-threatening, but healthcare is very different, and caution is the right attitude to adopt.

The prevailing attitude in places like Silicon Valley is one of speed. The ‘move fast and break things’ attitude doesn’t work in healthcare. Clinicians rightly want to build slowly and safely, and to build things that will last.

It should be no different from a drug trial. You choose a drug that may help the patient, and then you monitor it for side effects. In the same way, you can trial AI gradually. People are rightly concerned about everything happening all at once, so starting with something more controllable (assistive technology, for example) and moving through to something fully autonomous may be the key.

Narrow, task-specific assistive AI is already in use in healthcare, and is very different from the more complex general AI solutions of the future.


The NHS and its relationship with digitisation

Historically, efforts to digitise NHS processes have been slow, haphazard and piecemeal. There has long been a general understanding that we’re nowhere near the level of digitisation seen in other industries like fintech or travel. Part of that is due to a lack of overarching strategy and to introducing new solutions that don’t integrate with the system as a whole, which leaves things just as disjointed as when you started. Another huge part is that clinicians are still too often battling legacy systems. Forget automation – they would simply settle for a system that works, and AI isn’t on their radar.

There is, however, already motivation for change with initiatives like Global Digital Exemplar Trusts which encourage digital readiness.

We often talk about digitisation and AI in terms of computers and keyboards, but it’s just as much about people being ready. 

Since Covid, this precarious state of ‘digital readiness’ has begun to blossom into a real movement. Everything has changed. Before Covid, only a small number of GP surgeries offered remote consultation; now the majority do. Multidisciplinary teams are meeting over Teams. We’re even delivering autism assessments and speech and language assessments remotely. We’ve proven that things can and will move forward, and AI is part of the equation. We must trust that we have come this far, and that we can go further with new and innovative tech that will make clinicians’ lives easier.


Healthcare AI in the long term: where to start

Even if the desire is there, where should a Trust start? My advice for NHS Trusts is this: start by implementing a tool that will make clinicians’ lives easier. Something that makes the workflow more seamless, that won’t interrupt work or add any more time to an already busy day. Lots of services are inundated with patients asking similar questions on a routine basis, so a great starting point for an AI-enabled solution would be a simple bot which can answer these queries and is available 24/7.

From an admin perspective, you can deploy solutions which prioritise appointments, scan documents, and organise referrals. Even better if you can implement a solution with natural language processing (NLP) that interacts with the electronic health record and retrieves information for staff, limiting the time they spend tethered to a keyboard. These things don’t take away human interaction – they free up staff for more of it.
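As a rough illustration of how small such a starting point can be, here is a minimal sketch of a routine-query bot of the kind described above. The questions, answers, matching method and confidence threshold are all invented for illustration; a real deployment would sit behind proper clinical and information governance.

```python
from difflib import SequenceMatcher

# Invented example questions and answers - a real service would own this content.
FAQ = {
    "how do i change my appointment": "Please call the booking line, or reply CHANGE to your reminder text.",
    "where do i send my referral form": "Referral forms go to the service's single point of access inbox.",
    "how long is the waiting list": "Current waiting times are published on the service webpage.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the best-matching canned answer, or hand the query to a person."""
    best_question, best_score = None, 0.0
    for question in FAQ:
        score = SequenceMatcher(None, query.lower(), question).ratio()
        if score > best_score:
            best_question, best_score = question, score
    if best_question is not None and best_score >= threshold:
        return FAQ[best_question]
    return "I'm not sure about that one - I'll pass it to a member of the team."

if __name__ == "__main__":
    print(answer("How do I change my appointment?"))
    print(answer("Can my GP refer me directly?"))
```

The important design choice is the hand-off: when the bot is not confident, the query goes to a member of staff. That is the assistive, rather than autonomous, posture argued for earlier.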

That is the goal - not a hostile takeover of jobs or a risky, biased algorithm, but a gradual move towards AI-enabled care, powered with clinicians’ trust.


