In November 2022, OpenAI opened its GPT-3.5 model to the public through ChatGPT. It became the fastest-adopted consumer service in history, reaching 100 million users within two months. A host of other foundation models have since followed, promising to disrupt professions and whole industries whilst enabling us to create more than ever before.
In vitro diagnostics, too, will see changes from the broader upheavals that AI will bring. In fact, imaging – part of the wider diagnostic toolkit – has been enjoying a quieter revolution for some time, as Machine Learning (ML) models match or even outperform pathologists at interpreting patient images. Recognising the potential, TTP helped develop a quantitative tissue-staining technology to improve aspects of digital pathology in 2019.
But the monumental changes in both AI and in vitro diagnostics since then – in particular, the shift to distributed diagnostics and community-based healthcare during and after the Covid pandemic – mean it’s high time to reconsider how AI will disrupt our industry.
dAIgnostics could be defined by the unexpected interplay of three themes: more bio/physical markers, greater accuracy, greater personalisation.
The three themes are familiar. On the back of ML, the first two mainly raise considerations for the development of point-of-care diagnostic platforms. It’s their interplay with the third, driven by foundation models, that could change the diagnostic process itself, with implications for the types of dAIgnostics needed in the future.
What do we mean by AI?
AI usually refers to one of two things: Machine Learning (ML) models, programmes trained on a moderate amount of data related to a single task; and foundation models, which are trained on astronomical volumes of non-specific data. Some nuance is lost here, but we’ve written about both ML and foundation models and their role in technology and solution development before, so we’ll leave the deeper discussion to those pieces.
Current usage is somewhat ambiguous as to whether AI includes ML. Many see ML as part of the AI family, while others now treat it as a distinct field. The FDA, for instance, issues guidance on “Artificial Intelligence and Machine Learning” rather than simply “AI”, implying that it sees these as two related but distinct fields.
We will include both ML and foundation models in our definition of AI: ML is already changing diagnostics and holds even greater potential to do so in the future, whilst foundation models may well change the way healthcare is delivered.
More biomarker options
There is no shortage of biomolecules we can measure that, in principle, could tell us about every facet of the body’s functions. AI enhances our ability to extract signals from this universe of biomarkers, which was previously too vast for scientists to explore efficiently.
The MarkerDB database, for instance, lists over 34,000 biomarkers known to be associated with various biological processes – although for many of these, the relationships detected with conventional statistical tools aren’t necessarily strong enough for use in the clinic.
And this is still only a tiny proportion of the universe of potential human biomarkers, as our bodies contain over 100,000 proteins (and a huge number of modifications), over 250,000 metabolites and associated chemicals, and a staggering number of DNA and RNA variants.
As powerfully demonstrated in the realm of image analysis, modern AI systems are capable of interpreting vast amounts of data to uncover relationships. Not only is this likely to uncover more single-target biomarkers for testing; it also brings within reach the promise that signatures of multiple biomarkers will give better diagnostic or prognostic accuracy than any single marker alone.
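To make the “signature” idea concrete, here is a minimal sketch using scikit-learn and entirely synthetic data (the markers and effect sizes are hypothetical): three individually weak markers, combined into a panel, support a better classifier than any one of them alone.

```python
# A minimal sketch (synthetic data, hypothetical marker names) of how a
# multi-biomarker "signature" can outperform any single marker on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
disease = rng.integers(0, 2, n)  # 0 = healthy, 1 = disease

# Three hypothetical markers, each only weakly shifted by disease status
markers = np.column_stack([
    rng.normal(loc=0.3 * disease, scale=1.0, size=n),  # marker A
    rng.normal(loc=0.3 * disease, scale=1.0, size=n),  # marker B
    rng.normal(loc=0.3 * disease, scale=1.0, size=n),  # marker C
])

clf = LogisticRegression()

# Classify from a single weak marker vs. from the combined panel
auc_single = cross_val_score(clf, markers[:, [0]], disease, scoring="roc_auc", cv=5).mean()
auc_panel = cross_val_score(clf, markers, disease, scoring="roc_auc", cv=5).mean()
print(f"single marker AUC ~ {auc_single:.2f}, three-marker panel AUC ~ {auc_panel:.2f}")
```

In practice a panel would be selected and validated on real clinical cohorts, but the principle – information accumulating across markers – is the same.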
Ultimately, the changes in the types of biomarkers researchers discover will likely translate into changes in the types of instruments the IVD industry needs to develop. In short, we believe instrument developers should think about moving towards:
- Target-agnostic platforms capable of running different assay cartridges for different targets on the same instrument.
- Multiplexed testing providing results for multiple markers in a single test.
- Quantitative testing detecting not just the presence but the quantity of a biomarker, which can be meaningfully related to clinical decisions.
- Trading greater platform complexity for lower cartridge complexity: the larger number of tests available will mean that cartridges will need to be small and cheap for storage at the point of care.
Instituting the above won’t be easy. Luckily, a new set of AI tools may well help…
More accurate diagnostics – or lower cost for the same accuracy
AI systems are very good at detecting signals that were previously not known to exist. An illustrative example was published four years ago, when a team from across Europe used a deep learning model to predict the sex of patients from pictures of the retinal fundus – the inside surface of the back of the eye. The model managed this with an accuracy of approximately 80%, yet clinicians at the time were unaware of any differences between the retinal anatomy of men and women.
An AI model discovered a signal where two hundred years of anatomy had not.
What other information are we missing from our test read-outs? In a sense, ML models are the ultimate statistical tests – tests tailored to their deployment environment, able to robustly call results where standard statistical tests cannot reliably detect a signal.
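As a toy illustration of that point – synthetic data only, standing in for no particular assay – a per-measurement statistical test can report “no signal” while an ML model classifies almost perfectly, because the information lives in the interaction between two measurements:

```python
# A minimal sketch (synthetic data) of an ML model calling a signal that a
# conventional per-feature statistical test misses entirely: the class depends
# only on the *interaction* of two measurements, not on either one alone.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
label = (x[:, 0] * x[:, 1] > 0).astype(int)  # XOR-like interaction signal

# Per-feature t-tests see no difference between the two classes...
for i in range(2):
    _, p = ttest_ind(x[label == 0, i], x[label == 1, i])
    print(f"feature {i}: t-test p ~ {p:.2f}")  # large p-values, "no signal"

# ...but a non-linear model recovers the relationship almost perfectly.
acc = cross_val_score(RandomForestClassifier(n_estimators=100), x, label, cv=5).mean()
print(f"random forest accuracy ~ {acc:.2f}")  # close to 1.0
```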
This opportunity to categorise from larger datasets encourages us to collect more data. A hospital ECG measurement may only record electrical activity from the heart for a few minutes, limited by the volume of data a cardiologist can reasonably interpret. Transient or rare events are likely to be missed.
AI enables simpler analysis of data from monitoring patients over much longer periods: rare events are more likely to fall within the monitoring window, and small changes can be detected through repeated data collection, rather than relying on a single instance of high-precision data collection for diagnosis.
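A deliberately simple sketch (simulated data, with a plain threshold detector standing in for a trained event-detection model) shows why the longer window matters: a two-second transient that a few-minute snapshot would almost certainly miss is trivially flagged once the recording is long enough to contain it.

```python
# A minimal sketch (simulated signal, not real ECG) of why long recordings help:
# a rare, transient event is easy to flag automatically once the monitoring
# window is long enough to contain it.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # samples per second (assumed)
signal = rng.normal(0.0, 1.0, fs * 3600)   # one hour of baseline "activity"

# Inject a single 2-second transient event somewhere in the hour
event_start = fs * 2000
signal[event_start:event_start + 2 * fs] += 4.0

# Simple windowed detector: flag windows whose mean deviates from baseline
window = 2 * fs
means = signal[: len(signal) // window * window].reshape(-1, window).mean(axis=1)
threshold = means.mean() + 5 * means.std()
flagged = np.where(means > threshold)[0]
print(f"flagged windows (seconds): {flagged * 2}")  # finds the injected event
```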
All else being equal, the adoption of AI in diagnostics means a step change in the precision and accuracy of tests. For point-of-care diagnostics – we prefer the term distributed diagnostics – this newfound power can either be used to create more accurate tests or be traded for portability and cost.
We believe developers of in vitro diagnostics systems should consider taking advantage of the following:
- When a test is still limited by sensitivity and specificity, AI calling methods should be able to improve both for a given design.
- Importantly, when the achieved sensitivity and specificity are already sufficient for clinical application, AI calling algorithms can allow a reduction in cost in two ways:
  - Reducing the requirements on components such as photodiodes, filters, etc., whilst maintaining performance, should ultimately reduce the bill of materials for the instrument or platform.
  - Reducing the requirements for sample preparation can reduce both development cost and the cost of each test, because “sample prep” can be as challenging as (if not more challenging than) the diagnostic assay itself.
- Signals that wouldn’t have been distinguishable previously may now be, allowing for greater multiplexing in a single test.
- It will be valuable to collect data over time, and not just a single snapshot.
In effect, AI offers the chance to trade hardware-enabled accuracy for comparatively cheaper software-enabled accuracy.
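As a simplified sketch of that trade-off (synthetic read-outs and hypothetical thresholds), a model trained on the whole noisy signal from a “cheap” sensor can out-call a naive endpoint threshold, recovering accuracy in software rather than in optics:

```python
# A minimal sketch (synthetic read-outs, hypothetical numbers) of trading
# hardware-enabled accuracy for software-enabled accuracy: a classifier trained
# on the whole noisy signal out-performs a simple endpoint threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, t = 2000, 50
positive = rng.integers(0, 2, n)

# Simulated assay curves from a "cheap" sensor: small signal, lots of noise
time = np.linspace(0, 1, t)
curves = 0.5 * positive[:, None] * time[None, :] + rng.normal(0, 1.0, (n, t))

# Naive endpoint call: threshold the final reading only
endpoint_call = (curves[:, -1] > 0.25).astype(int)
print(f"endpoint threshold accuracy ~ {(endpoint_call == positive).mean():.2f}")

# Software calling: let a model use the full curve shape
acc = cross_val_score(LogisticRegression(max_iter=1000), curves, positive, cv=5).mean()
print(f"model on full curve accuracy ~ {acc:.2f}")
```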
More personalisation
AI will probably not bring about a sudden revolution; healthcare is a slow-changing field. But it seems certain that it will join the growing suite of options that allow clinicians to continue personalising care.
During the era of symbolic and then machine learning AI, the unstructured (or poorly structured) data contained in Electronic Health Records (EHR, also known as Electronic Medical Records, EMR, or Personal Health Records, PHR) couldn’t easily be interrogated.
The tokenising systems that underpin foundation models change that completely. Now the unstructured data contained within a patient’s EHR can be read by AI with ease.
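As a sketch of what that looks like in practice – the clinical note, the prompt and the choice of model are illustrative assumptions, not a validated clinical pipeline – a foundation model can be asked to return structured fields from free text in a handful of lines:

```python
# A minimal sketch of pulling structured facts out of a free-text clinical note
# with a foundation model. The note, the prompt and the model name ("gpt-4o")
# are illustrative assumptions, not a validated clinical workflow.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

note = (
    "62F, T2DM on metformin, presents with 3 days of productive cough and "
    "fever 38.4C. Ex-smoker. Penicillin allergy (rash)."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Extract as JSON: age, sex, conditions, "
                                      "medications, allergies, presenting_symptoms."},
        {"role": "user", "content": note},
    ],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)  # structured JSON, ready for downstream use
```

Anything like this would of course need the data governance, validation and regulatory scrutiny discussed below before it went anywhere near patient care.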
We think this could enable doctors to offer better, more personalised care but will also change the demands on diagnostic testing:
- Humans have many biases about which data they weight most heavily. AI can help ensure that data from across a patient’s EHR is given fairer weighting in decisions on care (although care must be taken not to introduce bias through the choice of training data).
- Diagnostic testing may become more targeted as AI improves the ability of clinicians to perform differential diagnosis from symptoms and history.
- As a result, the distribution of diagnostic testing may change. Previously rare or new tests may become popular whilst currently common tests fade from relevance.
- Finer stratification of populations can lead to care decisions that are more likely to be beneficial for the patient.
- Quantitative biomarker tests can be better interpreted in light of a full patient history when calling whether a result is positive or negative for the disease being monitored, as sketched below.
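A minimal sketch of that last point, with made-up numbers: the same quantitative result can sit comfortably inside a population reference interval yet be a striking departure from the patient’s own baseline.

```python
# A minimal sketch (made-up numbers) of interpreting a quantitative result against
# a patient's own longitudinal baseline rather than a population reference range.
import numpy as np

population_range = (10.0, 60.0)          # assumed population reference interval
patient_history = np.array([14.0, 15.5, 13.8, 14.6, 15.1])  # prior results
new_result = 24.0

baseline_mean = patient_history.mean()
baseline_sd = patient_history.std(ddof=1)
z = (new_result - baseline_mean) / baseline_sd

in_population_range = population_range[0] <= new_result <= population_range[1]
print(f"within population range: {in_population_range}")  # True: looks "normal"
print(f"z-score vs personal baseline: {z:.1f}")            # large: flags a change
```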
The distributed diagnostics market is already moving strongly toward quantitative testing, and AI will only intensify the pressure to do so.
There are caveats…
Whilst we are optimistic about the benefits that AI may bring to patients and the healthcare industry, it is unlikely to be a straight path to increased adoption.
First and foremost, the major regulators will have to be convinced of the efficacy of AI. This is happening, with the European Artificial Intelligence Act entering into force in August 2024 and the FDA having published guidance. Still, regulators are likely to act as a calming force on an AI field used to moving fast.
Doctors sometimes still show lower trust in point-of-care than in central-lab tests, as shown by TTP’s own surveys, despite a long history of central-lab-quality testing at the point of care. Convincing doctors and patients that they can trust an AI “black box” which may suggest counterintuitive results is a challenge. We’ve previously explained how this can be achieved, and it’s well worth keeping an eye on research in this area.
Finally, there may be cases where AI-enabled tests don’t simply slot into existing workflows but introduce new grey areas of treatment: we will have to assess the clinical significance of signals that would not previously have been detected but now will be.
…but evolution seems all but inevitable
None of the above challenges is fundamental; the promise of dAIgnostics, and of AI-enabled healthcare more widely, is clear enough for us to be confident that the necessary activation energy will be reached.
The value of “more biomarker options” and “greater accuracy” is already realisable for many IVD developers, but we think it’s likely that the value of future dAIgnostics will also be defined by the confluence of these themes with “greater personalisation” driven by foundation models.
In particular, the ability to analyse unstructured data from EHRs will make it possible to offer better healthcare, and will call for diagnostic products better suited than today’s to answering questions relevant to the specific circumstances of each patient.
Companies need to be ready for dAIgnostics as they develop next-generation products.