


I. Motivation
The emergence of useful, powerful, and accessible AI models has drawn attention away from previously hot sectors: crypto, quantum computing, consumer social, and, if you look back far enough, the once-groundbreaking field of wearable technologies.
In this thesis, I do not argue against this overt focus on building, investing in, and engaging with machine-learning-native technologies. Quite the opposite: I highlight the overlooked potential of wearables to significantly amplify the practical impact of AI, particularly where it matters most: helping humans.
I open with a discussion on how the concept of “context” — that is, providing models with additional information about their task — has drastically improved performance, through several case studies. From there, I draw parallels between these examples and the state of AI in the consumer health space. I then cover emergent technologies that I believe will provide powerful streams of information for personalized health models, and I conclude with my thoughts on the companies best poised to win in these spheres.
II. Context and Compute
The past few years of Large Language Models (LLMs) have demonstrated the unequivocal performance improvements that stem from scale and high-quality training datasets. For language, such a massive, representative dataset comes from three decades of internet websites, forums, and video subtitles: essentially every piece of high-quality, curated writing ever published online.

Source: Dmytro Nikolaiev, Medium
There is no clear parallel in healthcare, primarily because "detailed data" remains loosely defined there. A useful starting point is to consider a machine learning engineer's idealized scenario (and perhaps a privacy advocate's worst nightmare). Envision a comprehensive, system-level representation of the human body: a molecular encoding that captures its state continuously, every second, for every individual on the planet (and potentially some simulated ones as well).
This is, however, not necessary. LLMs showed exceptional results long before the entirety of the world wide web was scraped, just as autonomous vehicles do today, well short of having driven on every possible road. We need good enough, not perfect, to start seeing powerful machine learning models. But there is also a floor: a base minimum necessary to get meaningful results. Better put: what is the minimum viable amount, type, and format of information necessary to see this same exceptional performance from personal health models? While impossible to say with certainty, there are some criteria I believe we'd need to hit.
The first is data across a wide, diverse range of humans. Even a near-perfect understanding of an individual's biology — something we know to be quite different across the population — would likely bring limited benefit in a model meant to generalize across different people. The same goes for surface-level data: expanding our Fitbit heart rate dataset from millions to billions is unlikely to highlight some trend or insight unobtainable from the existing dataset. The short, and maybe unsatisfying, answer is that we need both scale and depth.

Source: FitBit Users Data Analysis
The question then boils down to how much of each (population breadth and individual depth, if you will) is necessary to provide sufficient context for meaningful health insights. Answering that first requires defining what a "meaningful health insight" is. After all, one could argue that lifesaving Apple Watch alerts, or the fitness hyper-optimization of WHOOP, are themselves meaningful. The threshold I use is whether you'd go back for your wearable if you forgot it at home, the same way you undoubtedly would for your phone. My take: aside from medical devices worn by people with specific health conditions, say glucose monitors for diabetics (most of which are already worn continuously), there is no general consumer wearable the rational person would feel compelled to retrieve. The health insights are a nice-to-have, while a phone's utility, for the average consumer, is now a need-to-have.
With that definition, the next step is to ask: what functionality would a health-context-equipped wearable need in order to make the average person retrace their steps? An exact answer is difficult, but it boils down to a device that dynamically interacts with you as your day unfolds, much like a phone does through contextualized calendar notifications and reminders. Health wearables should be able to do the same thing: provide a meaningful, continuous stream of notifications based on what's happening in your day. The added layer, of course, is that beyond being contextualized with your meetings and to-dos, they know how you're doing on a health level.
For example, if you're experiencing a glucose crash twenty minutes before an important meeting, a wearable could guide you to the nearest convenience store for some fruit while encouraging light activity. After a poor night's sleep before a morning workout, it might draft a message to your workout partner and reschedule the session in your calendar. If it detects consistently low energy levels, it could compile a summary for your doctor, allowing them to provide more personalized advice. This kind of intelligent, proactive support is what would elevate a wearable from a helpful tool to an essential companion.
Said more plainly, wearables need to feel like a superpower. They need to provide a level of human augmentation that is indisputably useful, experienced near instantly, and accompanied by friction lower than the utility delivered. The closest analogy is an exoskeleton: a "wearable" that instantly changes the way someone feels, functions, and interacts with their environment, as opposed to passively collecting information. Beyond internal quantification, wearables should fundamentally change the way we interact with the world: proximity-triggered actions, dynamic reactions to a user's environment (think light, humidity, sound, temperature), and active biometric responses (think instantly calming you down, or guiding you through recovery routines). They should provide real-time allergen and chemical detection, electromagnetic field detection for navigation, stress detection for real-time interventions, continuous posture and exercise monitoring for skeletal improvement, and recovery procedures for maximizing muscle growth. They should understand how you learn and work, and dynamically modify your day based on what feels best. Health context, along with a powerful, consumer-facing platform, is what I envision will power the consolidated future of wearables.
Speculations aside, let’s explore where we are now.
III. Pre-Dawn Era of Wearables
Current efforts to build wearables have largely diverged along two forks.
One group is leveraging generative AI, particularly voice-powered LLMs, to build companions contextualized with every message one sends, action one takes, and word one speaks, under the belief that access to this hyper-personalized information will supercharge the personal assistant experience from, well, Siri, to one of actual utility.
On the other side, we have founders expanding the consumer health wearable experience, building miniaturized, superpowered versions of medical devices in an attempt to provide continuous vital assessments and metabolic measurements. Built off the legacies of predecessors like Jawbone and Fitbit, devices like the Galaxy Watch FE have reached unprecedented levels of technological sophistication, equipped with optical heart rate, electrical heart (ECG), and bioelectrical impedance analysis sensors, as well as an array of environmental and motion monitors.

Source: Samsung
The common thread here is context. With over a decade of accessible Fitbit data, we've seen how simple heart rate and motion data can predict activities, obesity, and even mental health diagnoses. With data that's increasingly multivariate and medical grade (a level we're already seeing from Galaxy, WHOOP, and Apple), paired with massively diverse and populous datasets, we're entering a new generation of ML-powered predictive health, on both a personal and a population-wide level.
More granularly, the last decade of wearables can be primarily characterized by four features:
- Wrist-based
- Rechargeable
- Screen interactivity
- Noninvasive
Obviously not all wearables fit this mold, but many of the widely adopted ones do.
To transition from interesting gadgets to irreplaceable tools, I believe three of these paradigms must, and likely will, change.
First, location. I imagine early versions of "truly useful" wearables won't be a single device at all, but rather a constellation of noninvasive devices across the body. By expanding from a single point of information to several, such a system could model body position near-perfectly in real time, embedding a deep layer of context into wearable data streams.
For example, a 2022 study explored action recognition from users wearing just a smartwatch and a phone in their pocket (two degrees of information) and accurately classified activities as granular as "eating a hamburger," "folding laundry," and "brushing teeth." Expanding this approach with additional, biologically rooted sensors that synchronously measure position and state could enable an unprecedented, near-complete picture of the human body. Distributing sensing across the body also eases the challenge of miniaturization, which grows exponentially more difficult with the inclusion of advanced chips, energy harvesting modules, and biological sensors.
A few additional papers that develop this idea include:
- Wearable Sensors for Monitoring Human Motion: A Review on Mechanisms, Materials, and Challenges
- Sensor-Based Wearable Systems for Monitoring Human Motion and Posture: A Review
- Recent developments in sensors for wearable device applications
- Ambient energy harvesters in wearable electronics: fundamentals, methodologies, and applications
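To make the multi-sensor idea concrete, here is a toy sketch of window-level activity classification from two synchronized streams (a wrist device and a pocket device). The features, class centroids, and activity labels are all invented for illustration; a real system would learn them from labeled data, as in the study above.

```python
import math

# Toy sketch: classify an activity window from two synchronized accelerometer
# streams. All numbers below are invented for illustration.

def window_features(wrist, pocket):
    """Summarize one time window as (mean, variance) per stream."""
    feats = []
    for stream in (wrist, pocket):
        mean = sum(stream) / len(stream)
        var = sum((x - mean) ** 2 for x in stream) / len(stream)
        feats.extend([mean, var])
    return feats

# Invented per-class centroids in feature space (would be learned from data).
CENTROIDS = {
    "walking":        [1.2, 0.40, 1.1, 0.35],   # both streams active
    "sitting":        [0.1, 0.01, 0.1, 0.01],   # both streams still
    "brushing_teeth": [0.9, 0.30, 0.1, 0.02],   # busy wrist, still pocket
}

def classify(wrist, pocket):
    """Nearest-centroid classification of one window."""
    f = window_features(wrist, pocket)
    return min(CENTROIDS, key=lambda k: math.dist(f, CENTROIDS[k]))

# A busy wrist over a still pocket lands closest to "brushing_teeth":
# classify([0.9, 1.5, 0.3], [0.1, 0.12, 0.09])
```

The point of the toy: the second stream is what disambiguates wrist-only motion, which is exactly the leverage a constellation of sensors provides.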
Further, a communication interface that both recognizes inputs (hand gestures, arm gestures, voice, tap inputs) and communicates with the user and nearby technology through RFID (think waving your arm to turn off your lights) has tremendously powerful use cases. Rather than being reactive, responding only when interacted with, I envision proactive systems that interact with you in real time, a direction many wearables are already exploring.

Source: Hobeom Han and Sang Won Yoon, Gyroscope-Based Continuous Human Hand Gesture Recognition
This is a good point to introduce a guiding thought behind many of my claims:
Theorem: Humans will use a technology if and only if it provides more utility than it incurs friction. Friction includes both the difficulty and risk of usage, as well as the privacy surrendered.
As wearables gain access to more and more real-time health information, the friction from surrendering privacy increases. By the theorem above, for a wearable to remain, well, worn, either its utility must increase or its friction must decrease in some other way. One drastic way this could happen is by eliminating the friction of recharging a device. As ambient energy harvesting techniques improve, such a reality grows increasingly plausible. I envision these harvesters to be largely mechanical, since chemical, patch-based systems again add friction.
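The theorem reduces to a simple decision rule. This is only an illustrative formalization; the friction components and their additive combination are my own assumptions, not a measured model.

```python
# Illustrative formalization of the utility-vs-friction rule.
# Components and units are assumptions; everything is in arbitrary units.

def friction(usage_difficulty, risk, privacy_cost):
    """Total friction as the (assumed additive) sum of its components."""
    return usage_difficulty + risk + privacy_cost

def will_be_worn(utility, usage_difficulty, risk, privacy_cost):
    """A device stays on the wrist only while utility exceeds total friction."""
    return utility > friction(usage_difficulty, risk, privacy_cost)

# Going screenless and self-charging lowers usage_difficulty, so the same
# utility can clear the bar:
# will_be_worn(5, usage_difficulty=1, risk=1, privacy_cost=2)  -> True
# will_be_worn(5, usage_difficulty=3, risk=1, privacy_cost=2)  -> False
```

The rule also makes the privacy trade explicit: every new sensed signal raises `privacy_cost`, so it must buy at least that much utility.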

Source: Helios Vocca and Francesco Cottone, Kinetic Energy Harvesting
Another way to reduce friction is to remove the oftentimes frustrating screen functionality of miniature wearables. We're already seeing market leaders like WHOOP opt for completely screenless devices that convey information through other methods. I think haptics, discrete gestures, and voice will be the primary forms of interaction with these devices. Going screenless also significantly improves battery life, which further decreases the friction of using the device.
This brings us to the fourth point: invasiveness. Making a health device invasive radically increases friction. While technology will eventually reach a high enough utility point (think superintelligence, modified perception, etc.) to warrant invasiveness, I believe devices like BCIs are still strides away from being revolutionary (and safe) enough to become mainstream for consumers.
IV. Archetypes and Considerations for Consumer Health Companies
The technical discussion above serves as a guidebook for the sorts of wearables and consumer health companies I believe are focusing on the right problems, and thus, good investments at the private stage.
Generally, I believe in companies building technologies that will benefit directly from growing AI usage in healthcare, rather than compete with it. I draw a direct parallel to the current trends in AI. While wrapper companies ultimately enjoy short term growth, the generational winners are those that develop proprietary models, a task which largely depends on having the data and the resources to train such models.

Source: Kellogg Business
Specifically, I see two archetypes of interest here. The first is data collection platforms: hardware like wearables and stationary units capable of continuous, large-scale data collection, often paired with contextualizing software like continuous voice recording or computer-vision-based action recognition. These systems will be pivotal in providing the foundational datasets for training powerful AI models.
The second is research-focused companies, particularly in areas like genomics and protein folding, where human intuition falls short. These are the companies developing proprietary, cutting-edge models to maximize the value of complex datasets. Success in these domains will likely be driven by teams with niche, interdisciplinary expertise—combining, for example, computer vision with genomics.
The intriguing question then becomes: what does this data (and in turn these models) look like? With initiatives like the UK Biobank and others amassing vast amounts of genomic information, we are approaching a point where incredibly powerful training datasets will enable us to uncover the biological foundations of many human conditions.
An often-overlooked space is the potential of continuous, high-resolution data across diverse types to enable unprecedented correlation analysis. This is where next-generation wearable companies come into play. Building a successful wearable brand with cutting-edge sensors has far greater potential than simply capturing a share of the consumer market. These companies could become the default data source for massive healthcare machine learning projects, generating real-time, ever-growing datasets that reveal how millions of actions impact hundreds of biomarkers. What's more, data labeling would be inherently crowdsourced, as consumers themselves use the wearables to improve their personal health models.

Source: Shopify
This raises the question: what continuous health information is most valuable? While this is difficult to pinpoint, even a single variable—if collected over diverse datasets with high temporal resolution and long timeframes—can be profoundly insightful. For instance, researchers have developed advanced models for sleep states, fitness recovery, and even depression using only continuous heart rate data, a shocking reminder of the power of deep models. These studies uncovered relationships that are near-impossible for clinical researchers in low-N studies to detect. Many of these results have been further enhanced through work on open source datasets.
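As a toy illustration of how much signal lives in heart rate alone, here is a sketch of window-level feature extraction with a crude sleep heuristic. The features, threshold, and sample values are invented for illustration; nothing here is clinically validated, and the cited studies use far deeper models.

```python
# Toy sketch: features a model might extract from continuous heart rate.
# The 0.9 threshold and sample values are invented assumptions.

def hr_window_summary(bpm_samples):
    """Summarize a window of heart-rate samples (one reading per minute)."""
    mean = sum(bpm_samples) / len(bpm_samples)
    var = sum((x - mean) ** 2 for x in bpm_samples) / len(bpm_samples)
    return {"mean_bpm": round(mean, 1), "variability": round(var, 2)}

def crude_sleep_flag(summary, resting_bpm):
    """Flag a window as likely-asleep when HR sits well below resting rate."""
    return summary["mean_bpm"] < 0.9 * resting_bpm

night = [52, 50, 49, 51, 48, 50]       # invented overnight readings
s = hr_window_summary(night)
# With an assumed daytime resting rate of 62 bpm, this window flags as sleep.
```

Deep models replace the hand-built threshold with patterns learned across millions of such windows, which is where the scale argument above does the work.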
The value of such large-scale temporal data becomes even clearer when compared to other industries. Autonomous driving is a strong analogy: Tesla is winning largely because of its access to continuous, diverse, and massive datasets collected effortlessly from its users. The same scaling phenomenon that has driven breakthroughs in autonomous vehicles and language models can likely be generalized to healthcare. In this context, wearables collecting vast, noninvasive datasets, even without the reliability you'd expect in clinical studies, could be more impactful for population-wide insights than small-scale, perfectly accurate clinical studies.

Source: Brain Creators
Another consideration is how this data will be shared and utilized. As health data becomes more valuable, we may see the rise of companies focused on data transfer and accessibility. Wearable companies may not handle direct data-sharing agreements due to liability concerns, paving the way for smaller, third-party companies to create research marketplaces. Consumers could even "vend" their data in a marketplace setting, incentivized by bounties for specific research uses. These marketplaces could become massive businesses in their own right, providing early opportunities for innovation.
Agentic workflows also offer a compelling vision for wearable data's potential. Continuous data streams could trigger a variety of automated, personalized actions. For example, a calendar agent could adjust your schedule to align with your energy levels, a medical agent might flag early signs of a health issue, and a food agent could recommend meals based on your current glucose and nutritional needs. Each agent could be its own company, but all would rely heavily on partnerships with the wearable data providers.
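The agent fan-out described above can be sketched as a simple dispatcher. The agent names, thresholds, and reading format below are all hypothetical; a real system would route to separate services run by separate companies.

```python
# Hypothetical sketch of agents subscribing to a wearable's data stream.
# Agent names, thresholds, and the reading schema are invented.

def calendar_agent(reading):
    if reading.get("energy") is not None and reading["energy"] < 0.3:
        return "calendar: move deep-work block to tomorrow morning"

def food_agent(reading):
    if reading.get("glucose_mgdl") is not None and reading["glucose_mgdl"] < 70:
        return "food: suggest a nearby snack before your next meeting"

AGENTS = [calendar_agent, food_agent]

def dispatch(reading):
    """Fan a single reading out to every agent; collect triggered actions."""
    return [act for act in (a(reading) for a in AGENTS) if act is not None]

# A low-energy, low-glucose reading triggers both agents;
# a healthy reading triggers none.
```

Each agent only needs the reading, not the raw sensor stack, which is why the wearable data provider sits upstream of every one of these businesses.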

Source: Vellum
The integration of health data into non-health consumer applications will also grow. Just as messaging apps request access to photo libraries, apps shaping your day—like commuting or food delivery—could request access to continuous health data streams for hyper-personalization. If such practices become widespread, we might even see anonymized health datasets emerge online, akin to the image-text datasets that propelled modern computer vision models. This would only further accelerate health ML.
Of course, this vision assumes that continuous, molecular-level health data will be valuable—both for consumers and for advancing machine learning models beyond current capabilities. Granular datasets, such as large-scale gene imaging or continuous blood tests, might ultimately "solve" some of humanity’s largest health challenges. Yet noninvasive signal data collected by wearables, while seemingly rudimentary, offers unique advantages: it is contextualized to real human life. With enough scale, it can sample an infinite variety of authentic activities, revealing otherwise invisible relations.
This discussion, along with my overview of emerging technologies, provides the basis for my final section, which details companies I believe to be well poised for the future of augmented health.
V. Key Players
So, where does this leave us? I think with some winners, and many losers. As mentioned above, I believe the generational consumer wearables companies will (a) have emergent, protected, and biologically rooted technologies for gathering health information, and (b) create seamless, noninvasive, low-friction form factors for using their technology. Conversely, companies that fail at either (gathering only basic data, or positioning themselves as a medical device over a piece of consumer tech) stand to fall behind. Fast.
By this reasoning, the long-term losers of the AI game will, much like we've seen with LLMs, likely be wrapper companies building around existing streams of continuous wearable data. Much like in the chatbot space, these companies will likely take advantage of unaware consumers, often those with specific chronic illnesses or lifestyle issues who may trust a specialized, well-marketed model more than the company sourcing the technology it runs on.
Some companies I believe are worth looking at:
- Synex Medical is developing noninvasive health monitoring technology through portable magnetic resonance imaging (MRI) techniques. Their technology can detect metabolites like glucose and lactate, critical indicators of real-time health, all without breaking the skin. Synex's team includes John Capodilupo, co-founder and former CTO of WHOOP.
Giving the everyday consumer access to real-time glucose readings is not just technologically impressive; it's incomprehensibly powerful context for developing ML tools for real-time wellness. Glucose dynamically affects both your short- and long-term energy, mood, and overall health. A model trained on millions of hours of continuous glucose data could dynamically optimize schedules, meals, and exercise routines, approaching superpower territory.

Source: Synex Medical
- Empatica's health monitoring platform delivers precise, continuous insights for researchers and clinicians. Their flagship wearable, the EmbracePlus, features one of the most advanced sensor stacks on the market. Primarily used in research settings, Empatica's devices generate rich, verifiable data that enhance both their hardware and algorithms. However, the company has yet to expand significantly beyond clinical trials into the consumer market.

Source: Empatica
Empatica's achievement in miniaturization is remarkable. By simplifying multivariate machine learning problems into a single data stream, they demonstrate the potential of what next-generation wrist-worn wearables could achieve. Notably, their hardware provides a significant moat for training cutting-edge health machine learning models.

Source: Empatica
- Ultrahuman builds advanced smart rings, continuous glucose monitors, and home hardware for continuous health information. Beyond detailed metrics on longevity, sleep, and recovery, Ultrahuman's Ring AIR, their flagship product, markets enhanced productivity through features like circadian alignment, caffeine timing, and screen optimization.

Source: Ultrahuman
Ultrahuman also offers a longevity-focused continuous glucose monitor (CGM), which tracks glucose in real time and uses those measurements to provide proactive updates. These include stability updates, metabolic score streaks, and hyperglycemic event detection, as well as targeted advice for improving glucose variability.
Ultrahuman then leveraged data from users' CGMs to create an Open Glucose database, offering precise glucose scores for nearly every food imaginable. The insights are powered by how users themselves reacted to foods, and each score is contextualized with a description of the food's metabolic impact.

Source: Ultrahuman
Ultrahuman's broad initiatives indicate a prioritization of acquiring proprietary continuous data and using it effectively. Their elegant marketing, both for their devices and their application layers, demonstrates that they understand the importance of onboarding users over simply making cutting-edge tech, which I consider an enormous edge over non-consumer players.
VI. Conclusion
Personal health—despite millennia of progress—still remains anchored in proxy understanding, a modern-day Plato’s cave. Daily, unexplained fluctuations riddle the rhythm we wish to establish as we build towards our goals. The mystery of “why do I feel so bad” is one that dominates even the most ambitious, intelligent, and self-aware among us. Perhaps not for long.
Much appreciation to anyone who took the time to get through this. I’m young, and many of my thoughts are likely still naive. Nevertheless, crystallizing them in writing opens doors for conversation, and anchors a reference point for myself to look back on years down the line.
Best,
Teo Dimov