AI is the buzzword du jour across industries, but what is needed for it to truly make a difference in clinical trials? Writing in the December 2023 issue of DIA Global Forum, Lokavant’s Rohit Nambisan lays out three essential characteristics for AI to make an impact in clinical research, emphasizing the importance of demonstrating a measurable return on investment.


Artificial intelligence/machine learning (AI/ML) has been leveraged in various industries for decades, but generative AI (GenAI) has recently emerged to offer new use cases in life sciences. Ultimately, GenAI—or any other form of AI—is not going to meet today’s lofty expectations if it does not provide a tangible return on investment (ROI).

We are nearing the peak of the AI hype cycle, and skepticism is starting to grow. For example, recent surveys indicate that AI has not yet won public trust: consumers say they need to see proof of benefit before AI is widely used in healthcare. Even GenAI comes with equal parts excitement and skepticism. Moreover, the industry’s substantial interest in AI (one report projects the market for AI in life sciences will reach $7.09 billion by 2028, growing at a CAGR of more than 25% from 2023 to 2028) has not yet translated into widespread deployment.
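To put that growth figure in concrete terms, here is a back-of-the-envelope sketch of the compound annual growth rate arithmetic. The $7.09 billion figure and 25% CAGR come from the cited report; treating 2028 as the endpoint of exactly five annual compounding periods at exactly 25% is our simplifying assumption, not the report's methodology.

```python
# Rough illustration of what a 25% CAGR over 2023-2028 implies (assumptions noted above).
end_value_2028 = 7.09          # projected market size in $B, from the cited report
cagr = 0.25                    # compound annual growth rate (assumed flat)
years = 2028 - 2023            # five annual compounding periods

implied_2023_value = end_value_2028 / (1 + cagr) ** years
print(f"Implied 2023 market size: ${implied_2023_value:.2f}B")  # ~$2.32B
```

In other words, the projection amounts to the market roughly tripling over five years.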

Adoption and change-management roadblocks start and end with ROI. AI applications must be tied to use cases that demonstrate specific advantages, such as increased efficiency or commercial lift. Despite early examples of technical validation, broad industry adoption will inevitably stall if AI does not provide value commensurate with the cost of deployment, training, and maintenance. The bar is even higher in clinical research, an evidence-generating industry that relies on concrete proof. Trial sponsors, contract research organizations (CROs), and sites all require proof of added value in everyday workflows.

If we cannot generate evidence that AI can deliver substantive and tangible value, it will not gain mainstream acceptance. To provide evidence of ROI—and ultimately lead to a sizable impact on clinical research—AI technologies must incorporate three fundamental characteristics: comparative baselines, value attribution, and data interoperability.


1. Comparative Baselines Provide Tangible Proof

Historical comparisons are often used to validate accuracy. In healthcare, this is referred to as “concordance,” as in the analysis “Concordance in Breast Cancer Grading by Artificial Intelligence on Whole Slide Images Compares With a Multi-Institutional Cohort of Breast Pathologists.”

In a phase 3 hematology-oncology study, a clinical trial intelligence technology provider used a comparative baseline to validate an enrollment forecast algorithm. An AI model analyzed historical data (just as it would for a live study) and initially projected that the planned timeline was not adequate for enrolling the required number of participants, citing a zero percent chance of meeting the goal within the planned enrollment period. The algorithm also forecasted a more realistic timeframe. That forecast showed concordance with actual performance: the model predicted enrollment would complete at month 42, and the study actually completed at month 43, within 40 days of the actual last-patient-enrolled date.
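As a minimal sketch of how such a retrospective concordance check can be expressed (the calendar dates below are hypothetical placeholders; only the month-42 forecast, month-43 actual, and roughly 40-day margin come from the example above), the comparison reduces to measuring the gap between the predicted and actual last-patient-enrolled dates:

```python
# Illustrative concordance check for an enrollment forecast, comparing the
# model's predicted last-patient-enrolled (LPE) date against the actual one.
# All dates are hypothetical placeholders chosen to mirror the month-42
# forecast, month-43 actual, and ~40-day margin described above.

from datetime import date

study_start = date(2019, 1, 1)      # hypothetical first-patient-in date
predicted_lpe = date(2022, 6, 20)   # model forecast (falls in study month 42)
actual_lpe = date(2022, 7, 30)      # observed completion (falls in study month 43)

def study_month(d: date, start: date) -> int:
    """1-based study month in which a date falls."""
    return (d.year - start.year) * 12 + (d.month - start.month) + 1

margin_days = abs((actual_lpe - predicted_lpe).days)

print(f"Predicted completion: study month {study_month(predicted_lpe, study_start)}")  # 42
print(f"Actual completion:    study month {study_month(actual_lpe, study_start)}")     # 43
print(f"Forecast margin:      {margin_days} days")                                     # 40
```

Running the same comparison against the originally planned enrollment end date, rather than the model’s forecast, is what quantifies how unrealistic the initial plan was.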

Predictive models that indicate a project is off-track drive teams to take corrective action and change course. Naturally, this alters the outcome, making it difficult to gauge what would have happened without intervention, which is why historical comparisons are vital. Historical comparative baselines offer insight into potential outcomes in the absence of preemptive adjustments and ultimately provide tangible evidence of success. That is why it is crucial for AI tools to leverage retrospective controls to prove their outcomes.


Read the full article on the DIA Global Forum website here