
April 10, 2026

The Human Foundation of AI in Healthcare

Paola Rodriguez, MD, Eng., MSc & AI researcher

David Q. Sun, VP of AI/ML, Eight Sleep

Sam Hashemi, VP, Prenuvo

Introduction

Artificial intelligence in healthcare is no longer a distant promise or a conceptual frontier. It is already reshaping how care is delivered, how risks are identified, and how decisions are made. What was once reactive and episodic is steadily becoming continuous and anticipatory. Across diagnostics, monitoring, and care delivery, AI is enabling earlier detection, more personalized interventions, and more scalable systems of care. Yet, behind every meaningful advancement lies a less visible but critical force: human intelligence. As AI systems become more capable, the role of expert oversight, judgment, and refinement is not diminishing but becoming more essential. The future of healthcare will not be defined by automation alone, but by how effectively human and machine intelligence evolve together.

The role of human intelligence in advancing agentic AI in healthcare

As artificial intelligence systems in healthcare move beyond static models toward more agentic, decision-supporting capabilities, the importance of human intelligence becomes increasingly central. While recent progress in large-scale models has enabled impressive performance across a range of medical tasks, these systems remain fundamentally dependent on the quality, structure, and intent of the human input that shapes them.

In practice, the development of reliable AI in healthcare is not simply a function of data volume or computational scale. It is a function of how well human expertise is embedded throughout the lifecycle of these systems. From the initial stages of data curation and annotation, to evaluation, calibration, and continuous iteration, clinicians and domain experts play a defining role in ensuring that outputs are not only accurate, but clinically meaningful and safe.

This is particularly critical as AI systems begin to operate in more autonomous or semi-autonomous contexts. Agentic AI, defined by its ability to take initiative, reason through multi-step problems, and support decision-making, introduces new layers of complexity and risk. In these settings, human intelligence acts as both a grounding mechanism and a safeguard. Experts are needed to define what constitutes a meaningful error, to distinguish between acceptable uncertainty and unacceptable risk, and to ensure that models align with real-world clinical priorities rather than abstract optimization metrics.

At micro1, we have seen firsthand that scaling medical AI is not simply about scaling models, but about scaling access to high-quality human expertise. The ability to engage clinicians, specialists, and trained reviewers in structured workflows allows AI systems to be continuously refined in ways that reflect real-world nuance. This includes identifying edge cases that are often underrepresented in training data, stress-testing model behavior in ambiguous scenarios, and iterating on evaluation criteria to ensure that systems are judged against standards that matter in practice.

Importantly, human intelligence also plays a critical role in closing care gaps that AI alone cannot address. While models can process and analyze vast amounts of data, they do not inherently understand context in the way clinicians do. They cannot fully account for social determinants of health, patient preferences, or the subtleties of clinical judgment that often guide decision making. By integrating human insight into AI workflows, we create systems that are not only more accurate, but more aligned with the realities of care delivery.

Looking ahead, the most impactful healthcare AI systems will not be those that seek to replace human expertise, but those that are designed to collaborate with it. This requires rethinking how we build, evaluate, and deploy AI, with a focus on creating feedback loops where human and machine intelligence continuously inform one another. In doing so, we move closer to a model of care that is both scalable and deeply informed, where technology amplifies, rather than abstracts, the human element at the core of medicine.

The Clinical Trial That Runs While You Sleep

Tonight, a few hundred thousand people will fall asleep on a thin layer of sensors and generate hours of continuous biosignal data: heart rate, respiratory rhythm, posture, micro-movements. No electrodes. No lab visit. Collectively, the Eight Sleep fleet has recorded over a billion hours of sleep. That makes it one of the largest longitudinal physiological datasets ever assembled, and it's growing every night.

When we started collecting this data, our product tracked only basic biometrics: heartbeats, respiration, and sleep architecture. But we had an ambitious research thesis: that the passive biosignals captured through the bed surface could unlock clinical-grade health intelligence far beyond sleep tracking. That thesis is now paying off.

Biological aging. We trained a foundation model on this dataset to distinguish individuals by their heartbeat waveform alone. When we fine-tuned it for age, the model learned to estimate biological age from vibrations in the Pod, and independently discovered the same pattern of age fluctuation across sleep stages that other researchers found using wrist-worn optical sensors and a completely different modality. When two methods converge on the same biology, the signal is real. Measured nightly, biological age becomes something you can track over time, not a one-time snapshot. Early results suggest that nights with deeper, more restorative sleep correlate with younger-presenting physiological signals, pointing to a measurable link between sleep quality and biological recovery.
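Turning noisy per-night estimates into a trackable trend typically involves some smoothing. The sketch below is purely illustrative (the drift values are invented, and this is not Eight Sleep's actual pipeline): a rolling mean over recent nights damps night-to-night variation so slow shifts in estimated biological age become visible.

```python
from statistics import mean

def rolling_trend(nightly_age_estimates, window=14):
    """Smooth noisy per-night biological-age estimates into a trend.

    A rolling mean over the last `window` nights damps night-to-night
    variation so that slow drifts become visible over time.
    """
    trend = []
    for i in range(len(nightly_age_estimates)):
        start = max(0, i - window + 1)
        trend.append(mean(nightly_age_estimates[start:i + 1]))
    return trend

# Hypothetical nightly estimates drifting slightly downward over a month.
nights = [42.0 - 0.02 * n for n in range(30)]
smoothed = rolling_trend(nights)
print(round(smoothed[-1], 2))  # 41.55
```

The window length is a judgment call: longer windows give a steadier trend but respond more slowly to genuine change.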

Breathing disorders. Roughly 80% of moderate-to-severe sleep apnea goes undiagnosed because testing is inconvenient and expensive. Our next-generation sensor foundation models are designed to detect apnea events and sleep posture contactlessly, with no mask or home test kit required. Because the system runs every night, it captures variability that a single lab visit misses. We are also exploring how mechanical and thermal adjustments can encourage positional shifts during sleep, potentially reducing apnea events for the large share of patients whose condition is posture-dependent. This line of work is on a medical device qualification pathway.
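Once events are detected, apnea severity is conventionally summarized by the Apnea-Hypopnea Index (AHI): respiratory events per hour of sleep, with standard clinical cutoffs at 5, 15, and 30. A minimal sketch (the event counts are hypothetical, and this is not the product's actual scoring code):

```python
def ahi(event_count, sleep_hours):
    """Apnea-Hypopnea Index: respiratory events per hour of sleep."""
    if sleep_hours <= 0:
        raise ValueError("sleep_hours must be positive")
    return event_count / sleep_hours

def severity(index):
    """Conventional AHI severity bands used in sleep medicine."""
    if index < 5:
        return "normal"
    if index < 15:
        return "mild"
    if index < 30:
        return "moderate"
    return "severe"

# Hypothetical night: 120 detected events over 7.5 hours of sleep.
index = ahi(120, 7.5)
print(index, severity(index))  # 16.0 moderate
```

Nightly monitoring matters here because AHI can vary substantially from night to night, which is exactly the variability a single lab study misses.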

Disease risk. Published research has shown that a single night of sleep data can carry risk signals for over 100 disease conditions. We are developing these capabilities on our proprietary sensor arrays, and beginning to integrate clinical health records to build the label quality that predictive models require. The long-term goal: nightly health monitoring that flags early signals years before symptoms appear.

Most health devices ask you to do something: wear a tracker, draw blood, visit a clinic. The next generation of health AI is built on the opposite idea. The richest health signals come from the place where you do nothing at all, for a third of your life, every single night.

AI in Imaging: From Snapshots to Trajectories 

For decades, medical imaging has been episodic: ordered when symptoms appear, interpreted in isolation, and rarely connected to a longitudinal view of health. AI changes this paradigm. The opportunity is not just faster reads or incremental accuracy gains; it is transforming imaging into a system for continuous risk detection and trajectory prediction.

Modern AI models extract far more than the original clinical intent of a scan. A single MRI or CT contains latent signals about muscle quality, visceral fat, vascular health, organ aging, brain volume, liver fat, prostate volume, and early structural changes, many of which precede symptoms by years. Historically, this information was either ignored or impractical to quantify at scale. AI makes it measurable, repeatable, and trackable over time.

The shift is fundamental: from identifying disease to quantifying deviation from health, and ultimately forecasting where a patient is heading. Imaging becomes less about answering a binary question and more about mapping a personalized health trajectory.

A concrete example is body composition analysis from whole-body MRI. Traditionally, these scans were used to rule out major pathologies. With AI, the same scan can quantify visceral fat, muscle volume, and fat infiltration across dozens of regions in minutes. These metrics are not just descriptive; they are predictive. Research from our group and others has shown strong associations between visceral fat distribution and cardiometabolic risk, as well as emerging associations between metabolic dysfunction and neurodegenerative processes, including Alzheimer’s-related changes. This reframes imaging as an early warning system, not just a diagnostic tool.

When tracked longitudinally, these measurements become actionable. Patients are no longer told they are “within normal range”; they can see how their physiology is evolving relative to their own baseline. Interventions—nutrition, exercise, pharmacology—can then be evaluated through direct structural change, not indirect proxies.
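Comparing a new measurement against a patient's own prior scans, rather than a population "normal range," can be expressed as a simple standardized deviation. A minimal sketch with invented numbers (the function name and the visceral-fat values are illustrative, not from any real pipeline):

```python
from statistics import mean, stdev

def deviation_from_baseline(baseline_values, new_value):
    """Express a new measurement as a z-score against the patient's
    own prior scans, rather than a population 'normal range'."""
    mu = mean(baseline_values)
    sigma = stdev(baseline_values)
    return (new_value - mu) / sigma

# Hypothetical visceral-fat volumes (L) from three prior scans, then a new one.
prior = [3.0, 3.1, 2.9]
z = deviation_from_baseline(prior, 3.4)
print(round(z, 1))  # 4.0
```

A value several standard deviations above a stable personal baseline can flag a meaningful change even when the absolute number still falls inside the population reference range.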

Imaging also plays a distinct role within the broader health data stack. MRI and blood biomarkers form the foundation: lower-frequency but high-fidelity ground truth. On top, wearables and continuous monitoring provide high-frequency, dynamic signals. AI connects these layers, aligning precision with continuity to create a unified view of health.

The future of imaging is not more scans; it is more meaning per scan. And ultimately, better decisions per patient.

Conclusion

Together, these advances point toward a fundamental shift in healthcare, from episodic and reactive to continuous, predictive, and deeply personalized. Imaging provides high-fidelity structural insight, sleep and biometric data offer continuous physiological signals, and AI connects these layers into a cohesive understanding of health over time. Yet this system only reaches its full potential when grounded in human expertise. Clinicians, researchers, and domain experts ensure that these signals translate into safe, meaningful decisions. The future of healthcare will not be defined by any single modality, but by the integration of technologies and the human intelligence that guides them toward better outcomes.
