With so many wild predictions being bandied around about AI’s potential to radically transform healthcare for the better, it is easy to lose sight of the fact that materializing that dream will be far from easy. Indeed, the task at hand can seem quite formidable. Most pharma companies’ existing IT infrastructures are based on legacy systems that were simply not designed with AI in mind: their data, if stored at all, is often kept in unwieldy free-form formats, and their systems lack interoperability.
“The world of healthcare is beset with a paradox whereby more data than ever is flowing through physicians’ hands, but the true value of that data has gone largely untapped because it is unstructured and siloed in systems that are generally unable to talk to one another,” laments Janssen’s Chairman for EMEA, Kris Sterkens.
Nor is the situation much better in the corporate domain of drug development. “Frankly, the huge number of mergers and acquisitions that have occurred across the pharma industry over the past couple of decades has meant that many drug makers do not have the systems and infrastructure in place to make all their data available even internally, let alone to external collaborators like healthcare providers,” acknowledges Professor Jackie Hunter, chief executive of clinical and strategic partnerships at BenevolentAI. Moreover, sorting out these infrastructural gaps can be prohibitively costly. “One mid-sized pharmaceutical company spent USD 200m federating all its clinical data,” she recalls, “yet doing so is still absolutely essential for the full effect of AI to be able to take root.”
“Heterogeneous data sources and the diversity of data management technology add to the complexity and can make data a real management challenge,” points out David Crean, managing director of the investment firm Objective Capital Partners. However, he equally notes that the risks of failing to get this right are significant. “In this day and age of algorithmic medicine, the absence of a cohesive strategy to incorporate cloud platforms and data pools, integrating and consolidating data warehouses, data hubs and databases into a single source of data, can easily send organizations into a tailspin,” he cautions.
The Imperative of Good Data Hygiene
Logically, since AI operates on large sets of data, the availability of clean data at scale becomes a fundamental prerequisite for establishing a suitable environment for the growth of AI-based solutions. It is the lifeblood of the process. Without this requisite underlying infrastructure of big data, the promise of AI technology will surely fall short. “The brute reality is that artificial intelligence approaches can only be as good as the data we can apply them to,” warns Precision for Medicine CEO, Matthew Hall. “If you put junk in, you can only expect to get junk out,” he reasons.
“The first thing we’ve learned is the importance of having outstanding data to actually base your machine learning on,” candidly admits Novartis’ CEO Vasant Narasimhan, who was personally surprised at the scale of the job required just to reach a state of appropriate data hygiene. “In our own shop, we’ve had to spend a great deal of time and effort just cleaning the data sets as a precondition to even being able to run the algorithms. It’s taken us literally years just to clean the datasets. I think people tend to underestimate just how little clean data there is out there, and how hard it is to clean and link it all up,” he exclaims.
The current prevalence of poor data hygiene will likely mean that the practical rollout of AI-based healthcare will be uneven and unbalanced, at least in the initial stages. Already there are some indications of this happening. “Healthcare, being a highly regulated industry, is unequivocally data-rich, but that doesn’t necessarily mean that this data is balanced. For us here in Asia to reap the benefits of AI, there is an urgent need to increase the availability of Asian-specific data. At the moment, existing genetic and clinical trial databases are predominantly made up of Caucasian data, which means entire geographic regions and ethnic groupings risk being left behind as this technology takes off,” observes Vishal Doshi, founder of AUM Biosciences.
Winning over Hearts and Minds
Beyond the need for a consistent and reliable supply of the raw material, there are also certain cultural and organisational barriers that have to be breached. “Health systems will have to dramatically re-engineer their own ways of working,” observes Pierre Meulien, executive director of the Innovative Medicines Initiative (IMI). “Right now, less than 10 percent of healthcare data worldwide is actually analysed… Many stakeholders simply don’t understand their data environment or the richness of the information they already have, or could gather, so there is a practical need to educate.”
“A fundamental challenge facing healthcare systems is to figure out how to effectively adapt routines and ensure these changes are embedded in the culture of the system. The focus needs to be on the ‘effector-arm of AI,’ thoughtfully combining the data with behavioural economics and other approaches to support positive behavioural changes,” agrees Krishna Cheriath, chief data officer at BMS.
Nor should it be especially surprising that the medical community tends to be mistrustful of the ‘black box’ element of AI technologies, and thus penetration remains far less pronounced than in other sectors such as fintech or the travel and entertainment industries. “Due to the various ways in which AI can be modelled, it can be hard to explain how it works, rendering adoption by conservative, highly regulated industries like healthcare heavy going,” explains Christopher Rafter, Chief Operating Officer (COO) of Inzata Analytics.
“Although an AI model can be mathematically proven, its reasoning can be difficult to articulate, which goes against the grain of the discipline of medical science where there is an ingrained tendency to build upon precedent and rely on processes that have been successful in the past,” he elaborates.
Addressing AI’s Fallibility
Finally, there is the important matter of what to do, and whom to hold to account, when AI itself malfunctions. “While AI is, of course, meant to reduce, if not eliminate, the margin of human error, disparities and incorrect diagnoses can still occur… thus there is a need for regulators to consider the tricky question of who is responsible for the information produced by the algorithm,” urges AUM’s Vishal Doshi.
Nor should all AI-backed decisions be taken as gospel. According to Carla Smith, former executive vice president of the Healthcare Information and Management Systems Society (HIMSS), “there are actually myriad places where AI, while genuinely exciting in many ways for medical science and healthcare, contains a dangerous potential for perpetuating existing or introducing new bias into decision-making.” She warns of the real risk posed by “dumb systems that people think are smart, but actually contain design bias,” a risk that is only accentuated in machine learning technologies.
“There is an unequivocal need to start involving ethicists at the conceptual design stages of new applications and to establish a data ethics framework to ensure that beyond privacy, data and algorithms consider fairness so algorithmic decisions do not create discriminatory or unjust impacts,” argues BMS’ Krishna Cheriath.