Healthcare has improved rapidly in the last 20 years, raising life expectancy around the world. However, ageing populations have consequently put increasing strain on healthcare services.
Managing these growing patient populations is expensive and requires healthcare systems to shift their focus from episodic care to long-term care management.
As many healthcare market research agencies will testify, artificial intelligence has the potential to revolutionise healthcare and help address this challenge.
It is already successfully being used in areas such as disease detection and diagnosis, although there are still barriers preventing the expansion of AI in healthcare.
In this month’s blog, we explore some of the main barriers to AI in healthcare.
Firstly, there is the challenge of regulation. There are many governing bodies unique to different markets. For the purposes of this blog, let’s narrow it down to one: The US.
In April 2019, the FDA published a discussion paper which sparked debate around what regulatory frameworks should be in place for the modification and use of AI in the medical environment.
At the start of this year, the FDA issued a new action plan that built on that debate, laying out its planned approach to regulating software as a medical device (SaMD) that utilises AI or machine learning (ML). You can read more about the action plan here.
According to FDA guidelines in the US, AI software programmes and devices are most likely to fall under Class III, which is defined as high risk. Class III represents around 10% of medical devices on the market and is the primary category into which AI systems fall, because these devices can pose serious threats to patients if they malfunction.
Whilst most AI software programmes and devices are intended to assist medical professionals, it is difficult to say whether, in practice, their recommendations will come to override the judgement of health professionals.
This leads us onto the next hurdle: Patient and provider trust. Even if the FDA does approve these medical devices, will they be trusted?
AI innovation is everywhere in our lives, and sometimes we do not even notice it. Whilst it is relatively harmless in most cases, trusting AI to provide accurate health recommendations is far more complicated.
There have been numerous examples in other industries where AI has failed. Within healthcare specifically, IBM’s Watson for Oncology (an AI-powered supercomputer) promised to revolutionise the treatment of cancer.
However, according to a STAT investigation into the technology, it has not lived up to its promises and still struggles to differentiate between different forms of cancer. Moreover, hospitals outside of America complain that the machine’s advice is biased towards American patients and methods of care.
Whilst the technology is still in its infancy, IBM have not published any scientific papers demonstrating how the technology affects patients and providers, making it more difficult to trust.
Both providers and patients want to understand why certain treatment has been recommended, and because machine learning algorithms are far too complicated for the average user to understand, the ‘why’ is missing. It is no surprise that patients trust the opinion of a human doctor over a machine.
It is vital that manufacturers of AI and ML are transparent about how the technology works, its data sources, the benefits and its limitations.
Understanding the ‘why’ behind AI and machine learning is complex, so manufacturers must help patients understand how AI can support their care and convince providers that these machines can be trusted.
Related to this issue of trust is the concern of privacy and cybersecurity.
Firstly, there is patient data, which is already subject to tight regulations governing how it can be shared and used.
In some use cases, it might be possible to anonymise the data enough to let the AI machine do its work. However, other areas, such as image-dependent diagnoses like ultrasounds, may be more problematic.
Secondly, as AI grows in its capabilities, so too will cyberattacks. Techniques like advanced machine learning, deep learning and neural networks enable computers not only to look for patterns in data but also to find and exploit vulnerabilities.
However, AI can also be part of the solution. Advanced machine learning techniques combined with cloud technology are already analysing huge volumes of data and identifying threats in real time. AI can identify hot spots where cyberattacks have originated and generate cybersecurity intelligence reports.
AI is still in its infancy in the healthcare industry, and we are constantly learning more about what AI can offer, and its limitations. AI cannot replace human doctors, but it has a range of capabilities to assist in clinical decision making.
It is capable of picking up on complex patterns that can only become apparent when patient data is viewed in aggregate, something that would be unreasonable to expect a human doctor to recognise.
There are several other barriers to AI and ML that have not been discussed in this article; however, patient and provider trust is one of the biggest. Whilst trust issues currently hold patients and providers back, the widespread adoption of AI in healthcare is on the horizon.
As one of the UK’s leading healthcare market research agencies, IDR Medical has over a decade of experience in conducting market research tailored to healthcare markets. In fact, we have conducted projects in over 30 countries to drive the success of our clients’ brands, products, and services.
If you are interested in conducting a market research project, do not hesitate to contact us. We would be delighted to offer an initial telephone discussion or an online meeting to understand how we can assist you.