VASBI ‘AI in research’ presentation raises concerns over accuracy, post-deployment monitoring, and data sparsity


At the latest Vascular Access Society of Britain and Ireland (VASBI) annual scientific meeting (26–27 September, Cardiff, UK), Nicholas Inston (University Hospitals Birmingham, Birmingham, UK) gave a presentation entitled “AI [artificial intelligence] platforms and their place in medical/vascular access research” as part of the vascular access technology and challenges update session. The interactive discussion that followed explored the subject in further detail, with audience members raising questions about the accuracy of work produced using AI, the amount of work needed to ensure that data were correct, and the availability, or lack thereof, of data that can be used in AI generators. The consensus the audience reached? That AI is being underutilised due to data restrictions, but should also be used cautiously.

After Inston finished his presentation, during which he demonstrated the potential of AI when used to write research articles by walking audience members through the steps he took to generate an article of his own, the floor was opened to the audience for questions. In response to an opening question from Charmaine Lok (Toronto General Hospital, Toronto, Canada) as to whether he used AI for his own work, Inston initially stated that he had not had any experience with it prior to preparing his presentation, but added: “I think we can’t avoid it, but I think we [also] have to be cautious. Half of [the need for caution] is being scared of something that we don’t understand, with the other half being due to it being out of our control. But [AI] is in our lives already; we just don’t realise that it’s happening.”

Kate Steiner (East and North Herts NHS Trust, Stevenage, UK) was the next to add to the discussion. “One of the problems that we’ve seen with AI deployment in diagnostics,” she began, “is that we’ll deploy an AI solution, and then that works, but the post-deployment monitoring aspect of the solution sometimes negates the amount of work that the AI is saving.” She added that, while AI is “relatively in its infancy”, one of the common issues encountered is that AI will generate content or data that is either inaccurate or simply false. Due to this, Steiner argued that post-deployment monitoring is essential. “Even if there have been clinical studies,” she said, “even if it’s [received a European CE mark], the results we see in real-world practice often do not match the trial data,” meaning that the monitoring of results is an essential, but labour- and time-intensive, aspect of using AI.

Inston followed on from this point, adding: “I think that’s true, and it’s a bit like ECGs [electrocardiograms]; you used to have to read them, but now it says something at the top and tells you what the diagnosis is. But if you’re using the machine and it’s not attached to a patient, it’ll tell you you’ve got VT [ventricular tachycardia] or VF [ventricular fibrillation], so you’ve got to have some sense checking there. Medicine is very nuanced and involves a lot of intelligence, real intelligence.”

A final issue, touched on towards the end of the discussion by both Inston and Ali Kordzadeh (Mid and South Essex NHS Foundation Trust, Broomfield, UK), was the availability of medical data, or rather, the difficulty of obtaining the data necessary for AI to be used effectively. “AI is as good as the data,” Kordzadeh stated. “If I put about 600 data points into an AI [algorithm] and said ‘I want you to classify them from A–Z’, if the data is good, it will do the classification of the basic information.” The issue arises, however, when the data is not available due to General Data Protection Regulation (GDPR) and patient confidentiality restrictions, as Inston highlighted.

“I think one of the big issues with AI, especially when you involve government and healthcare systems, is data protection,” he argued. “Healthcare data is so protected that we’re not generating the amount of data that we could do. AI generators cannot get hold of that data unless a company has permission to use it. Especially for dialysis, which involves patients being monitored three times a week, with a new dataset each time, we do potentially have great data there, but we just don’t get to it.”
