The Market Opportunity for Trustworthy Healthcare AI

The Coalition for Health AI (CHAI, for short) is a coalition of healthcare system and health IT leaders focused on AI applications in healthcare. Notably, CHAI released its V1 blueprint of what constitutes trustworthy healthcare AI. It's worth the ~25 minute read, but here's the TL;DR: trustworthy healthcare AI should verifiably demonstrate utility, safety, accountability, transparency, explainability, interpretability, bias mitigation, security, resiliency, and privacy. The report expounds on each value with more concrete concepts.

After reading the report, a few thoughts came to mind.

  1. US healthcare is like a final boss for AI. We have messy, biased data to train on, high-stakes results in a low fault tolerance environment, and plenty of grey area to flounder in. It is ambitious for a startup to market its AI as the turnkey solution for healthcare, especially when transparency and explainability are inherently difficult, if not impossible, with current machine learning models.

  2. Given that milieu, a truly capable healthcare AI will be extraordinarily expensive to develop and train, especially with FDA regulations around Software as a Medical Device (SaMD). Unless you have Big Tech levels of money, accessing enough high-quality, cleaned, and vetted healthcare data, and then training on it, will be both cost- and time-prohibitive. Your runway needs to be generously long and your backers must have saintly patience.

  3. Reliably training healthcare AI requires partnering with health systems that hold large amounts of patient data. The larger the system, the more voluminous the patient data, and the better the training. Startups may confuse the quality of a system's data with the notoriety of the system; that would be a mistake. The ideal health system to partner with has a large footprint: ideally multi-state, multi-modal, and spanning a variety of healthcare delivery zones (urban, rural, etc.). These systems are few and far between, but is there an alternative?

Healthcare AI is poised to be an $11B field by 2030, but all that money will go to waste if the AI models don't work. This is where the market opportunity for building, testing, and vetting healthcare AI comes in. The organizations most advantageously positioned to provide these services are those that aggregate patient data across multiple health systems and states. That's right: QHINs (Qualified Health Information Networks).

Or, perhaps, organizations like OCHIN that provide Epic to health systems across the United States. With access to anonymized patient data spanning a variety of healthcare encounters, departments, and time periods, these data aggregators can provide the walled data playground AI startups need to vet their models before the pricey FDA approval process, provided they are transparent about their processes for mitigating bias and securing patient privacy. And, as a bonus, if these data playgrounds could provide the healthcare industry with quality assurances, their seal of approval could make or break a healthcare AI's prospects.

Of course, creating this environment comes with its challenges, chief among them ensuring that AI startups feel confident the data they're using is high quality. But that's a challenge that is surmountable, at least in part. VCs looking into the AI space, specifically healthcare at the seed stage, should favor pitches focused on AI quality assurance over AI as the primary product; the runway to ROI will be shorter.
