The Next Frontier in Chronic Disease Monitoring: Our Investment in Curve Biosciences

Despite remarkable medical advances, real-time monitoring of organ health remains out of reach. Curve Biosciences is closing this gap with Whole-Body Intelligence™—a platform built on the world’s largest curated tissue atlas that maps how disease reshapes each organ at the molecular level. By tracing these precise signatures in blood, Curve delivers liquid biopsy tests that move beyond cancer to detect and monitor disease across the full spectrum of organ health. Its first product, a liver test, is already demonstrating superior accuracy and the potential to transform early detection and chronic disease management.

October 2025

Despite extraordinary advances in medical diagnostics, understanding and monitoring organ health in real time is still a significant challenge. While imaging tools like MRIs can offer precise visualization of tissue structure, they are too slow and costly to be a viable option for every patient. This leaves a fundamental gap in physicians’ ability to monitor organ health in a timely, accurate, and affordable manner.

Liquid biopsy emerged as a promising solution to this challenge. With a simple blood draw, scientists can identify molecular biomarkers that describe what is happening in a patient’s body at a moment in time. These biomarkers can be several biological entities, such as fragments of DNA released by dying cells, RNA transcripts, proteins, or other circulating molecules. By decoding these signals, researchers can help clinicians assess organ health, detect disease, and monitor disease progression without the need for invasive tissue biopsies or repeated imaging procedures.

While not yet mature, liquid biopsy has made notable progress in oncology. It has enabled earlier cancer detection, real-time monitoring of treatment response, and deep insights into tumor evolution and resistance. However, its potential reaches far beyond cancer. The same molecular readouts that reveal tumor biology can in principle capture information about chronic diseases, inflammatory conditions, and beyond – areas that remain largely unexplored.

Unlocking this potential requires a fundamental shift in how we analyze the blood. Historically, biomarker discovery has relied on comparing blood from healthy individuals to patients with disease to search for patterns that differentiate the two. Because DNA in the blood can come from anywhere in the body, these biomarker discovery readouts are often noisy and limited by an inability to pinpoint where in the body the biomarkers originated, putting a ceiling on their potential accuracy. To realize the full promise of liquid biopsy, discovery must start from the clarity of the tissue itself. By first mapping how disease reshapes molecular profiles within organs, we can be certain that when we detect those same signatures in the blood, they truly reflect the underlying biology from the target organ.

This approach is Curve Biosciences’ core focus. With the world’s largest curated tissue atlas, a blueprint of the human body by organ and disease state, Curve can deliver on liquid biopsy’s long-promised potential.

Curve’s Approach

Curve is the world’s first company pioneering Whole-Body Intelligence™, developing first-in-class, blood-based monitoring tests to characterize the continuum of organ health: from healthy organs to chronic disease to cancer. Co-Founder and CEO Ritish Patnaik, PhD, built the foundation of the company during his graduate work at Stanford in Professor Shan Wang’s lab. Ritish and his team at Curve manually curated the world’s largest tissue atlas to study how tissues transition from healthy to chronically diseased to cancerous.

To study this transition, Curve focuses on tissue-specific methylation patterns on DNA: chemical modifications that change with age and disease and can serve as precise biomarkers for disease identification and progression. To generate a complete picture, the company has analyzed more than 400,000 samples spanning over 100,000 studies. Close review of each sample ensured the data were optimized for processing and free of errors. When inconsistencies appeared, Curve personally reached out to investigators to fill gaps and correct mistakes, ultimately creating the largest and highest-quality methylation atlas in the world. The success of this work depended on a purpose-built approach that only Curve’s team, with more than eight years of careful effort, could have executed.
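As a toy illustration of what per-sample curation involves (the record fields and checks below are our own illustrative assumptions, not Curve’s actual pipeline), a minimal QC pass might look like this:

```python
from dataclasses import dataclass

@dataclass
class MethylationSample:
    sample_id: str
    tissue: str         # organ of origin, e.g. "liver"
    disease_state: str  # e.g. "healthy", "fibrosis", "HCC"
    platform: str       # assay platform, e.g. "450K array"
    betas: dict         # CpG site -> methylation fraction in [0, 1]

def qc_flags(s: MethylationSample) -> list[str]:
    """Return human-readable QC flags; an empty list means the sample passes."""
    flags = []
    # Metadata completeness: samples missing tissue or disease labels
    # cannot be placed in an atlas and need investigator follow-up.
    for attr in ("tissue", "disease_state", "platform"):
        if not getattr(s, attr):
            flags.append(f"missing metadata: {attr}")
    # Value sanity: methylation fractions must lie in [0, 1].
    bad = [cpg for cpg, b in s.betas.items() if not 0.0 <= b <= 1.0]
    if bad:
        flags.append(f"{len(bad)} CpG value(s) outside [0, 1]")
    return flags

sample = MethylationSample("GSM0001", "liver", "", "450K array",
                           {"cg0001": 0.82, "cg0002": 1.30})
print(qc_flags(sample))  # ['missing metadata: disease_state', '1 CpG value(s) outside [0, 1]']
```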

Figure 1: Curve’s platform and product.

Source: Curve Biosciences.

With this Whole-Body Atlas™, Curve has characterized the methylation fingerprint of each organ across disease states and silenced the noise from other healthy organs that confounds traditional biomarker discovery approaches. When analyzing a patient’s blood, Curve searches for those same signatures in circulating DNA fragments released by dying cells. Through Whole-Body Intelligence™ models trained on the Whole-Body Atlas™, they have an unprecedented ability to suppress noise to less than 0.02% of the sample, enabling an unobstructed view of organ health.
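For readers curious about the mechanics, tissue-of-origin analysis of cell-free DNA is often framed as a constrained mixture problem. The sketch below, a minimal non-negative least squares example with toy numbers, shows the general idea; it is not Curve’s proprietary method:

```python
import numpy as np
from scipy.optimize import nnls

# Rows: informative CpG sites; columns: reference tissue/disease states.
# All atlas values are toy numbers for illustration.
atlas = np.array([
    # blood  healthy_liver  fibrotic_liver
    [0.95,   0.10,          0.15],
    [0.90,   0.05,          0.60],
    [0.10,   0.85,          0.80],
    [0.05,   0.90,          0.30],
])
states = ["blood", "healthy_liver", "fibrotic_liver"]

# A synthetic patient cfDNA profile: mostly blood-derived DNA,
# with a small fibrotic-liver component mixed in.
cfdna = 0.97 * atlas[:, 0] + 0.03 * atlas[:, 2]

# Non-negative least squares recovers the mixture proportions.
weights, _ = nnls(atlas, cfdna)
for state, w in zip(states, weights / weights.sum()):
    print(f"{state}: {w:.1%}")  # fibrotic_liver comes back at ~3%
```

The better the reference atlas separates organ and disease states, the smaller the disease fraction that can be detected against the dominant blood background.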

Figure 2: Curve’s approach to silence biological noise to find the signal.

Source: Curve Biosciences.

Curve’s First Molecular Map: The Liver

One of the body’s most essential, yet overlooked, organs is the liver. It quietly filters toxins and maintains systemic health, but when it becomes distressed, it often remains silent until damage is irreversible. Despite its critical role, the liver has long been underserved by medicine, only recently seeing renewed attention with the traction of FGF21 analogues and GLP-1 based therapies in metabolic dysfunction-associated steatohepatitis (MASH). Curve views the liver as the ideal starting point for its platform, creating a suite of tests designed to monitor early signs of fibrosis, assess treatment response, and enable the earliest possible detection of progressive chronic disease and cancer.

Figure 3: Methylation signatures across liver states.

Source: Curve Biosciences.

Early data from Curve’s first liver test already outperform the benchmarks set by the standard of care, demonstrating the potential to reduce avoidable MRIs, lower healthcare costs, and save lives through accurate patient monitoring.

Curve is uniquely positioned to enable a new frontier in chronic disease monitoring, using the same molecular framework to guide treatment decisions in conditions such as liver cirrhosis, MASH, and obesity. With this foundation, Curve aims to redefine how medicine detects, tracks, and ultimately prevents disease at the organ level.

Finding Curve

We have known Curve’s Co-founder and CEO, Ritish Patnaik, for many years – since his earliest days considering taking the leap into entrepreneurship. Once we saw his prospective pilot data, we knew the company was on the brink of something transformational. We could not be more excited to partner with him and his team. There are too many wonderful characteristics to list out here, so we’ll focus on just a few that led us to partner with the company:

  • Data-first approach: Creating Curve’s Whole-Body Atlas™ with this level of manual curation is an undertaking only an ambitious PhD student would start. Sorting through more than 400,000 samples, identifying inconsistencies, and coordinating with principal investigators to stitch together a consistent dataset is painstaking and slow work.
  • World-class team: Transforming how organ health is monitored isn’t as simple as just getting the science right. Teams need to develop a true product, with excellence from the original science all the way through commercial launch. Ritish has assembled that expertise in a team of exceptional leaders:

    • Nathan Hunkapiller, CSO: Nathan is a recognized leader in blood-based testing, having helped shape the field across some of its most influential companies. Before joining Curve, he served as Vice President and Head of R&D at Natera, a company now valued at more than $26bn, and as Senior Vice President and Head of R&D at Grail, where he led scientific development through its $8bn acquisition.
    • Chuba Ayolu, CTO: Chuba brings a proven track record in building and scaling blood testing from inception to commercial success. He was the founding scientist at Counsyl Diagnostics, where he helped develop the company’s core technology and guided it through rapid growth and its $375m acquisition. His experience spans both deep technical innovation and the operational rigor required to deliver clinically reliable products at scale.
    • Alice Chen, COO: Alice brings extensive commercial and product leadership experience from several of the industry’s most prominent blood testing companies. Prior to joining Curve, she was Senior Vice President of Product at Grail and previously held product and R&D leadership roles at Natera, Progyny, and Sienna Biopharmaceuticals. Her expertise bridges scientific development, product strategy, and market execution.
    • Shan Wang, Chief Innovation Officer, Scientific Co-Founder: Shan is an Endowed Professor at Stanford, having authored over 350 publications and more than 80 issued or pending patents over a 30+ year career. His pioneering research bridging engineering and medicine inspired the creation of the Whole-Body Atlas™, providing the foundation for its unique approach to biological discovery. His deep scientific insight and conviction in Curve’s mission keep him closely involved with the company.

This depth of expertise, from early science to commercial scale-up, is essential in pioneering a new technology and shaping a new market, and it is a testament to the quality of the foundational work at Curve.

  • Large unmet medical need: The chronic disease monitoring space remains largely untapped, awaiting a breakthrough approach to unlock reimbursement from insurers. Liver disease stands to be the initial proof point for Curve’s platform, setting the stage to tackle chronic disease more broadly.

Better data, better outcomes

At Luma Group, we back teams redefining treatment paradigms. Curve’s data-first approach embodies that spirit. The process to create Curve’s Whole-Body Atlas™ to enable Whole-Body Intelligence™ was tedious, impossible for AI to recreate, and forms the foundation of what sets Curve apart. We are proud to support Ritish and his team as they build a world where disease can be detected and monitored through blood alone, unlocking better care for millions.

The Next Era of Care for Neurodegenerative Disease

Neurodegeneration is a gradual loss of brain function that unfolds over decades before symptoms appear. It isn’t quite a silent killer like heart disease or cancer; it’s an emotional long goodbye. While neurodegeneration manifests in many ways, severe memory loss from dementia is among the most common outcomes of this irreparable brain damage.

Each year, millions of individuals with dementia experience progressive losses in memory, personality, and functional independence. Countless family members and caregivers bear witness to this devastating life deterioration. Despite its toll on society, innovations combatting neurodegenerative disease have lagged far behind advances in heart disease that produced statins and the rapidly expanding arsenal of cancer-fighting drugs. These shortcomings are not due to lack of funding or interest from society, but instead because drug development for the brain faces complex challenges that other therapeutic areas do not.

Alzheimer’s Disease (AD) is the most common form of neurodegeneration and is responsible for ~70% of dementia cases. Over the past two years, the AD drug development space has started to turn a corner, highlighted by the first two disease-modifying drug approvals. For the first time, patients have drugs that modestly slow cognitive decline, giving hope to both the patient and drug development communities.

At Luma Group, we believe that these innovations are just the tip of the iceberg. The clinical learnings from AD on optimal diagnosis, delivery, and patient selection will catalyze further innovation in neurodegenerative proteinopathies like Parkinson’s disease, Lewy body dementia, Huntington’s disease, and amyotrophic lateral sclerosis (ALS). This whitepaper uses AD as a case study to explore the future of care in neurodegenerative disease. We present the current understanding of AD disease biology and the treatment landscape for a generalist audience and give our vision of the next era of care in AD: a world where AD diagnosis is as easy as a blood test and physicians can choose from a broad armamentarium of therapies to tackle the multifaceted features of neurodegeneration.

October 2025

What is Alzheimer’s Disease?

The defining hallmark of AD pathology in the brain is the presence of insoluble amyloid β plaques found outside of neurons and tau neurofibrillary tangles inside of neurons. The abnormal accumulation of these proteins has been the focus of AD drug discovery since Alois Alzheimer first reported on the disease in 1907. Alzheimer described a patient who “developed a rapid loss of memory” and was “disoriented in their own home.”1 This description matches our current understanding of AD clinical presentation, as patients become increasingly unaware of their surroundings, lose all ability to recognize family members, and eventually become prone to delusions and hallucinations.

At the molecular level, cognitive decline closely correlates with tau tangle pathology, but amyloid plaques appear in the earliest stages of AD. This suggests a sequential mechanism in which amyloid initiates disease, but tau drives neurodegeneration. Beyond these two proteins, other factors, like chronic neuroinflammation in response to amyloid and tau, contribute to disease progression. Thus, while amyloid plaques and tau tangles are considered the hallmark pathology, AD is now recognized as a multifactorial disease in which amyloid, tau, and neuroinflammation interact to cause progressive neuronal damage and cognitive decline.2

Figure 1: The sequential formation of amyloid plaques and tau tangles.

Source: Gomez W, Morales R, Maracaja-Coutinho V, et al. Down syndrome and Alzheimer’s disease: common molecular traits beyond the amyloid precursor protein. Aging (Albany NY) (2020). PMID: 31918411.

How are plaques and tangles formed?

Studies on amyloid processing have found that not all forms of amyloid β are toxic. Amyloid β is formed when two enzymes called β-secretase and γ-secretase cleave amyloid precursor protein (APP). Under normal conditions, these enzymes cleave APP to form soluble monomers of amyloid β, which occasionally make small aggregates that do not have any disease-causing properties. However, under AD conditions, these monomers aggregate into the larger, insoluble amyloid fibrils and plaques that are a hallmark of AD, especially in early or presymptomatic AD. Tau neurofibrillary tangles, on the other hand, tend to appear in the later stages of the disease. These form when native tau is modified with excessive phosphate groups (hyperphosphorylation), which makes tau more prone to aggregation.

Figure 2: A closer look at the formation of insoluble amyloid plaques and fibrils

Source: Hampel H, Hardy J, Blennow K, et al. The Amyloid-β Pathway in Alzheimer’s Disease. Mol Psychiatry (2021). PMID: 34456336.

Why do plaques form?

Genetics research has provided some of the clearest clues on why these disease-initiating aggregates form. Mutations in the PSEN1 gene, a key component of the γ-secretase enzyme, alter the cleavage of APP and lead to production of different sizes of amyloid β monomers. Some of these monomers—particularly the amyloid β species that is 42 amino acids long—are more prone to aggregating into amyloid plaques.3 Importantly, PSEN1 mutations are the most common cause of early-onset AD, often leading to symptoms decades earlier than the typical age of onset. Nearly all patients with PSEN1 mutations have symptoms before the age of 60, with some case studies reporting symptoms as early as 32.4 While understanding PSEN1 disease biology offers important context on AD’s underlying mechanisms, autosomal dominant, inherited AD accounts for only ~1% of all AD cases. The majority of AD is classified as sporadic AD, caused by a mix of environmental and genetic risk factors that predispose individuals to disease. For example, aging is the strongest environmental risk factor for developing AD, and certain forms of a gene called APOE can increase or decrease the chance of developing AD.5

Figure 3: Prevalence of late-onset and early-onset AD

Source: Adapted from Sirkis DW, Bonham LW, Johnson TP, et al. Dissecting the clinical heterogeneity of early-onset Alzheimer’s disease. Mol Psychiatry (2022). PMID: 35393555.

Evidence of neuroinflammation in AD

In the age of deeper and cheaper DNA sequencing, new genetic discoveries have expanded our understanding of AD beyond the amyloid hypothesis. Notably, genome-wide association studies have implicated microglia, the brain’s resident immune cell, as a driver of AD symptoms. These studies found that mutations in a gene called “triggering receptor expressed on myeloid cells 2” (TREM2) increase AD risk by 3-5x, fueling a new area of research to elucidate the role of microglia and neuroinflammation in AD disease progression.6, 7 The neuroinflammation hypothesis is relatively nascent, but our current understanding suggests that microglia are initially neuroprotective by eliminating plaques and tangles in the brain. However, as the production of amyloid and tau rapidly increases in later stages of disease, microglia fail to properly clear these proteins. Under this hypothesis, the microglia become chronically activated, creating a sustained pro-inflammatory environment in the brain that contributes to neuronal damage. Emerging evidence from imaging studies in human patients now indicates that the co-localization of amyloid, tau, and activated microglia in the brain is a strong risk factor for cognitive impairment.8 Although the exact mechanisms linking neuroinflammation to AD pathology and neurodegeneration remain unclear, the involvement of microglia underscores that AD is a multifactorial disease, engaging many cell types and processes in the brain.

AD is now commonly grouped into a larger family of proteinopathies, in which pathogenic proteins such as amyloid and tau are the hallmarks of the disease and drive symptom progression. As we discuss later, AD drug discovery efforts have primarily targeted amyloid β, either by inhibiting its production or by targeting it directly to stop the disease. Additionally, the emergence of the neuroinflammation hypothesis has sparked new interest in microglia-targeting therapies. Despite our growing understanding of AD biology, drug discovery has been long and challenging, and the first approvals of disease-modifying drugs that slow the progression of AD have come only in the past two years, after decades of research efforts and failed trials.

Why is Alzheimer’s Disease drug discovery so challenging?

The central nervous system (CNS), a dense, intertwined network of neurons connected by innumerable synapses, is by far the most complex organ system in the human body. It is fundamental to our capacity to process information, experience emotion, and control movement. This intricate web is central to our survival, and the human brain is unique compared to that of other species, which complicates our ability to develop and test innovative therapies.

Figure 4: Challenges in CNS Drug Development

Source: Adapted from biorender.com

Biology differences limit predictivity of preclinical models

The human brain is uniquely complex compared to animal models (e.g., rodents, non-human primates, and others), resulting in low-fidelity preclinical studies that do not translate when we bring therapies into the clinic. While the human brain is notably larger than those of model species (even after accounting for body size), this alone does not explain the lack of translatability. Equally important, humans have developed a more structured cortex than other animals, with more cortical folding that increases cortical density and surface area. At the cellular level, human brain cells are also more structurally complex and diverse: our neurons are longer and more interconnected (increased dendritic tree branching), and some human brain cells (e.g., interneurons, astrocytes, and glia) have much broader molecular diversity than their counterparts in animal models. Taken together, these differences help explain why current model systems fail to accurately recapitulate human brain function. Drug discovery is an iterative process that relies on rigorous preclinical studies conducted in animal models. However, because the brain lacks representative models, the path to developing AD therapies is especially difficult and high-risk.

Delivery challenges to the brain

CNS-targeted therapies must also overcome the restrictive blood-brain barrier (BBB) to achieve relevant drug exposure levels. In organs like the liver, kidney, and heart, drugs move freely from the blood into tissue. In the brain, however, drug penetrance is limited by the BBB, a tightly joined cellular layer that lines blood vessels in the brain and protects it from harmful pathogens. The BBB is also a formidable barrier for drug discovery, as over 98% of small molecules and nearly all antibodies fail to penetrate it at therapeutically relevant levels. While all vertebrates have an intact BBB, many drugs tested in animal models fail to achieve comparable exposure levels in humans, adding a further layer of complexity to CNS drug discovery. The BBB makes the CNS the most delivery-sensitive organ, and, combined with low-fidelity preclinical models, it is especially challenging to predict the translatability of CNS therapies before entering clinical trials.

Clinical trial challenges

AD drug discovery has also been limited by the inability to enroll only patients with amyloid or tau pathology, which has confounded trial results. Until recently, AD patients were included in clinical trials based on questionnaires that assessed cognitive impairment. However, cognitive impairment is caused by many underlying conditions, and patients were often misdiagnosed with AD despite no evidence of amyloid or tau pathology. In early clinical trials, approximately 25% of patients were later found to have no amyloid pathology, even though these trials tested drugs that targeted the amyloid pathway.9

In recent years, the field has made major advances using positron emission tomography (PET) imaging to confirm the presence of amyloid and/or tau in living patients.10 Previously, amyloid and tau pathology could be diagnosed only in post-mortem histological samples, limiting our ability to enrich clinical trials with patients who were positive for amyloid or tau. Using PET, patients are injected with a radiopharmaceutical tracer that makes amyloid or tau protein quantifiable using non-invasive imaging. With this well-validated method, we can now enroll patients who are positive for amyloid/tau pathology and stratify them based on the severity of their plaque burden. Given that modern AD drugs act directly on the amyloid pathway, PET has been a pivotal innovation that has pushed AD drug development forward.
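A back-of-the-envelope power calculation illustrates why this enrichment matters. Assuming a hypothetical standardized effect size of 0.3 in amyloid-positive patients, and using the ~25% amyloid-negative enrollment rate noted above, dilution nearly doubles the required trial size:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(effect_sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-arm comparison of means,
    given a standardized effect size (Cohen's d)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / effect_sd) ** 2)

true_effect = 0.30            # hypothetical effect in amyloid-positive patients
amyloid_positive_rate = 0.75  # ~25% of symptom-based enrollees lacked pathology

# Amyloid-negative patients cannot respond to an amyloid-targeting drug,
# so symptom-based enrollment dilutes the observable effect.
diluted_effect = true_effect * amyloid_positive_rate

print(n_per_arm(true_effect))     # 175 per arm with PET-confirmed enrollment
print(n_per_arm(diluted_effect))  # 311 per arm without it (~1.8x larger)
```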

The widespread adoption of PET imaging reflects the new age of AD drug development, driven by novel diagnostics, new therapeutic innovations, and a deeper understanding of disease biology. As we will discuss in the next section, for the first time in history, patients have access to disease-modifying AD therapies. We are hopeful that this trend of novel diagnostic methods and transformative medicines will usher in a new era of neurodegenerative disease drug development, allowing us to precisely diagnose and treat the wide range of neurodegenerative proteinopathies, including AD, Parkinson’s Disease, Lewy body dementia, ALS, and Huntington’s Disease.

History of Alzheimer’s Drug Development

The first AD therapies, from the 1990s and early 2000s, were approved for their ability to manage AD symptoms, not because they were disease-modifying. While they help mask disease progression, their efficacy wanes over time as neurodegeneration advances. These early symptomatic therapies remain key to managing symptoms, but there was—and continues to be—great unmet need for a drug that slows or reverses disease progression.

| Category | Approved drugs | Target Neurotransmitter | Mechanism & Rationale |
| --- | --- | --- | --- |
| Cholinesterase inhibitors | tacrine, donepezil, rivastigmine, galantamine | Acetylcholine | Increases acetylcholine signaling, as patients show loss of cholinergic neurons11 |
| NMDA receptor antagonist | memantine | Glutamate | Reduces neuron excitability by blocking glutamate signaling, as overactive excitability damages neurons11 |

Early drug development failures

The wealth of evidence implicating the amyloid β pathway has driven recent AD drug development efforts.12 The first attempts at disease-modifying drugs were inhibitors of γ-secretase and β-secretase (BACE1) activity, which act by decreasing total amyloid monomer production. However, these efforts largely failed. Semagacestat, for example, was the only γ-secretase inhibitor to progress to Phase 3 trials, where it failed to meet its primary endpoints and only modestly reduced amyloid levels; cognitive decline actually worsened slightly in the treatment group versus placebo.13 Similarly, the BACE1 inhibitors failed to show improvements in cognitive decline relative to placebo, despite significant decreases in amyloid monomer production and moderate decreases in plaque burden.14 This is notable because many of the BACE1 inhibitor trials recruited patients with only very early or prodromal AD, suggesting that even the earliest symptomatic stages of AD may be too late to intervene with the secretase inhibitor class of drugs.

Turning a corner: The first clinical trial successes

The failure of the secretase inhibitors narrowed the spotlight to monoclonal antibodies that directly bind amyloid β. The first breakthrough came in 2021 with the accelerated approval of aducanumab (Aduhelm, developed by Biogen). Lecanemab (Leqembi, developed jointly by Biogen, Eisai, and BioArctic) and donanemab (Kisunla, developed by Eli Lilly) followed soon after. While aducanumab was eventually withdrawn from the market, lecanemab and donanemab were the first disease-modifying AD drugs to receive full approval from the FDA. PET imaging played a key role in achieving these milestones, as it enabled rapid, non-invasive monitoring of disease-modifying activity and served as the biomarker that enabled use of the accelerated approval pathway. While these drugs have provided some validation of the amyloid hypothesis, they show only modest slowing of disease progression,15,16 and there is ample opportunity to build better and more efficacious AD therapies. Additionally, a new risk of amyloid-targeted treatment has emerged in the form of amyloid-related imaging abnormalities (ARIA), caused by vasogenic edema/effusions (ARIA-E) or microhemorrhages (ARIA-H). The exact cause of ARIA is still not clear, but these dose-limiting side effects exclude carriers of the ε4 allele of APOE, who are at increased risk of developing them, from receiving lecanemab and donanemab.

What does the future hold?

The progress of the AD drug development space in the last 25 years is undeniable, and the momentum to generate better disease modifying therapies will only continue. Moving forward, we foresee a world where AD and other neurodegenerative diseases are diagnosed using criteria beyond their symptomatic presentation. Instead, Parkinson’s Disease, ALS, Huntington’s Disease, and dementia will be defined as proteinopathies with distinct molecular signatures. These protein-level diagnoses will guide clinical recommendations of protein-specific treatment options, attacking the underlying pathology of neurodegenerative disease. With this framework in mind and driven by the current era of technological innovation in the life sciences, we envision multiple trends that are shaping the neurodegenerative treatment landscape for better patient outcomes:

  1. Diagnostics you can order from your primary care physician: Much like the implementation of PET tracers, the approval and broad adoption of blood-based diagnostics will be a meaningful step forward that benefits everyone who is part of the AD patient journey. While PET is costly and requires the recommendation of a specialist physician, blood-based diagnostics can be ordered by a primary care or community physician with a routine blood draw at a tenth of the cost. In the past year, both the Lumipulse (Fujirebio) and Elecsys (Roche and Eli Lilly) blood-based diagnostics were approved by the FDA, with the Elecsys test being the first approved for use in the primary care setting. Much like a routine lipid panel as part of a yearly checkup, we envision a future where patients can routinely access a “neurodegeneration panel” to screen for the type of neurodegenerative disease or a multi-omics diagnostic to detect evidence of neuroinflammation or presymptomatic disease. Combined with new advances in AI/ML to deconvolute multifactorial signals, we look forward to a future of early diagnosis of neurodegenerative disease and neuroinflammation, opening the door for prophylactic treatment of patients.
  2. Moving beyond amyloid and passive immunotherapy: While lecanemab and donanemab are passive immunotherapies (i.e., antibodies that bind to amyloid without an apparent mechanism that clears plaques), we are looking forward to innovations that actively eliminate pathogenic proteins and strategies that move to targets beyond amyloid, such as tau. The first targeted protein degrader (vepdegestrant, developed by Arvinas and Pfizer) is expected to be approved in 1H 2026 for breast cancer. On the heels of this historic FDA decision, novel mechanisms of action in neurodegenerative disease beyond passive amyloid clearing are on the horizon. While the current generation of protein degraders has unfavorable drug-like properties that make brain penetrance challenging, we are closely following the development of novel degrader approaches, including protein-based methods and modulators of autophagy, mitophagy, and other protein homeostasis pathways. As we move into this new era of viewing neurodegenerative diseases as proteinopathies, we believe in innovative therapies that go beyond amyloid, using mechanisms of action that actively eliminate pathogenic proteins.
  3. Patient stratification that powers faster, smaller trials: Patients with neurodegenerative diseases face difficult outcomes with few disease-modifying options. In the AD space, patients with PSEN and APP mutations (e.g., autosomal dominant AD) can see symptoms as early as their 30s. Estimates of the prevalence of autosomal dominant AD vary, but even conservative estimates suggest tens of thousands of patients in the US. While these patients may be at highest risk of an aggressive disease course, they also have the clearest understanding of their disease’s underlying cause. The strong genetic nature of early-onset AD makes these patients especially addressable by gene therapies, and positive results from uniQure’s AMT-130 in Huntington’s Disease should serve as a beacon of innovation to follow in other neurodegenerative diseases.17 In a large patient population like AD, being able to enrich the treatment group with more homogeneous biological underpinnings should attract innovation, not deter it, as these strategies open the door for faster, smaller trials that still have outsized impact in the patient community.
  4. Delivery that opens the gates of the BBB: Building off the foundational antibody approvals, there remains much work to improve their efficacy. The BBB typically restricts brain uptake of biologics to 0.1% or less of the injected dose, and several efforts are leveraging receptor-mediated transcytosis to shuttle biologics across the BBB, increasing brain penetrance by 10-100x (a back-of-the-envelope sketch of this arithmetic follows this list). Roche’s trontinemab will be the first clinical validation of an active transport approach, targeting the transferrin receptor.18 Many other strategies have followed suit in this area, targeting receptors such as human CD98 and ALPL.19,20 Given that the CNS is the most delivery-sensitive therapeutic area, we are following how shuttle technologies can improve drug penetrance in the human brain, widening the therapeutic index of CNS-targeted therapies.
  5. Drug discovery using human brains in a dish: Due to the low predictive power of animal studies for CNS drug discovery, the field has turned to alternative models of human biology. Brain spheroid models are 3D in vitro models that use neuronal and non-neuronal cells from humans to create miniaturized human brains in a dish. While these models are far from perfect and can only recreate small networks of cells without an intact BBB, they offer an alternative, human-based model that can be combined with results from animal studies. Importantly, the FDA has published guidance that it plans to phase out animal studies in favor of alternative models like brain organoids, and understanding how best to implement these models is a necessity for the field moving forward.21
  6. Resetting microglia to take back control of inflammation: With the neuroinflammation hypothesis taking hold in our understanding of AD, microglia-targeted therapies are likely to play a major role in the AD treatment paradigm by synergizing with the amyloid therapies. Our current understanding of neurodegeneration has not definitively clarified the role of neuroinflammation in neuronal loss, but the preclinical evidence of an immune component in neurodegeneration grows daily. Moving forward, treatments that modulate microglia to regain control of neuroinflammation are likely to become necessary therapies in the armamentarium, used in combination to treat neurodegeneration.
  7. Next-generation therapies progressing into the clinic: The therapeutic landscape for CNS diseases is nascent and new technological innovations are continuing to make their way into the brains of patients. In this next era of care for neurodegenerative diseases, we will see these therapies enter the clinic at scale, pushing the boundaries of how we think about patient treatment options. In Parkinson’s disease, we have already seen promising clinical data using pluripotent stem cell therapy.22 We are excited to see the continuation of these technological discoveries, including alternative cell therapies and strategies for cellular reprogramming, enter the patient treatment paradigm.
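As referenced in point 4 above, here is the back-of-the-envelope dosing arithmetic; the 1 g dose and 30x shuttle boost are illustrative assumptions within the cited 10-100x range:

```python
def brain_dose_mg(injected_mg: float, uptake_fraction: float) -> float:
    """Estimated mass of a biologic reaching the brain."""
    return injected_mg * uptake_fraction

injected = 1000.0  # hypothetical 1 g systemic antibody dose

passive = brain_dose_mg(injected, 0.001)       # ~0.1% typical passive uptake
shuttle = brain_dose_mg(injected, 0.001 * 30)  # 30x boost, within the 10-100x range

print(f"passive: {passive:.0f} mg, shuttle: {shuttle:.0f} mg")
# passive: 1 mg, shuttle: 30 mg -- the same brain exposure would otherwise
# require a 30x higher systemic dose, with the systemic exposure to match.
```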

The next era of caring for neurodegenerative disease

Given these areas of core innovation that we are likely to see in the near term, we envision a multi-pronged approach in the future that gives patients with AD and other neurodegenerative diseases a multitude of options to address disease head on. Diagnostics will be the foundation of this new age of medicine, giving us predictive insight to identify the disease before symptom onset. In a perfect world, patients as young as their 40s will be empowered to take a panel of blood-based and genetic diagnostics, without the fear that their diagnosis will mean a slow-moving death sentence. Abolishing that fear will require disease-modifying drugs that actively address the pathogenic underpinnings of disease and are safe to use chronically and prophylactically. While this is a far cry from where we stand today, we believe that the innovations on the horizon will be the first steps to make this vision a reality.


  1. Andrade-Guerrero J, Santiago-Balmaseda A, Jeronimo-Aguilar P, et al. Alzheimer’s Disease: An Updated Overview of Its Genetics. Int J Mol Sci (2023). PMID: 36835161. ↩︎
  2. Heneka MT, van der Flier WM, Jessen F, et al. Neuroinflammation in Alzheimer disease. Nat Rev Immunol (2025). PMID: 39653749. ↩︎
  3. Andrade-Guerrero J, Santiago-Balmaseda A, Jeronimo-Aguilar P, et al. Alzheimer’s Disease: An Updated Overview of Its Genetics. Int J Mol Sci (2023). PMID: 36835161. ↩︎
  4. Andrade-Guerrero J, Santiago-Balmaseda A, Jeronimo-Aguilar P, et al. Alzheimer’s Disease: An Updated Overview of Its Genetics. Int J Mol Sci (2023). PMID: 36835161. ↩︎
  5. Eid A, Mhatre I, Richardson JR. Gene-environment interactions in Alzheimer’s disease: A potential path to precision medicine. Pharmacol Ther (2019). PMID: 30877021. ↩︎
  6. Jonsson T, Stefansson H, Steinberg S, et al. Variant of TREM2 associated with the risk of Alzheimer’s disease. N Engl J Med (2013). PMID: 23150908. ↩︎
  7. Guerreiro R, Wojtas A, Bras J, et al. TREM2 variants in Alzheimer’s disease. N Engl J Med (2013). PMID: 23150934; PMCID: PMC3631573. ↩︎
  8. Pascoal TA, Benedet AL, Ashton NJ, et al. Microglial activation and tau propagate jointly across Braak stages. Nat Med (2021). PMID: 34446931. ↩︎
  9. Karran E, Hardy J. Antiamyloid therapy for Alzheimer’s disease–are we on the right road? N Engl J Med (2014). PMID: 24450897. ↩︎
  10. Chapleau M, Iaccarino L, Soleimani-Meigooni D, et al. The Role of Amyloid PET in Imaging Neurodegenerative Disorders: A Review. J Nucl Med (2022). PMID: 35649652. ↩︎
  11. Raina P, Santaguida P, Ismaila A, et al. Effectiveness of cholinesterase inhibitors and memantine for treating dementia: evidence review for a clinical practice guideline. Ann Intern Med (2008). PMID: 18316756. ↩︎
  12. Karran E, De Strooper B. The amyloid hypothesis in Alzheimer disease: new insights from new therapeutics. Nat Rev Drug Discov (2022). PMID: 35177833. ↩︎
  13. Doody RS, Raman R, Farlow M, et al. A phase 3 trial of semagacestat for treatment of Alzheimer’s disease. N Engl J Med (2013). PMID: 23883379. ↩︎
  14. Egan MF, Kost J, Voss T, et al. Randomized Trial of Verubecestat for Prodromal Alzheimer’s Disease. N Engl J Med (2019). PMID: 30970186. ↩︎
  15. Mintun MA, Lo AC, Duggan Evans C, et al. Donanemab in Early Alzheimer’s Disease. N Engl J Med (2021). PMID: 33720637. ↩︎
  16. van Dyck CH, Swanson CJ, Aisen P, et al. Lecanemab in Early Alzheimer’s Disease. N Engl J Med (2023). PMID: 36449413. ↩︎
  17. https://www.uniqure.com/investors-media/press-releases ↩︎
  18. Grimm HP, Schumacher V, Schäfer M, et al. Delivery of the Brainshuttle™ amyloid-beta antibody fusion trontinemab to non-human primate brain and projected efficacious dose regimens in humans. MAbs (2023). PMID: 37823690. ↩︎
  19. Chew KS, Wells RC, Moshkforoush A, et al. CD98hc is a target for brain delivery of biotherapeutics. Nat Commun (2023). PMID: 37598178. ↩︎
  20. Voyager Therapeutics, Corporate Presentation (October 2025) https://ir.voyagertherapeutics.com/static-files/329b71f5-f944-496c-b757-372a92e82b55 ↩︎
  21. https://www.fda.gov/news-events/press-announcements/fda-announces-plan-phase-out-animal-testing-requirement-monoclonal-antibodies-and-other-drugs ↩︎
  22. Tabar V, Sarva H, Lozano AM, et al. Phase I trial of hES cell-derived dopaminergic neurons for Parkinson’s disease. Nature (2025). PMID: 40240592. ↩︎

The Limits of Generic LLMs: Why Biotech Needs Purpose-Built Tools

September 2025

Healthcare is one of the most data-rich and capital-intensive sectors yet remains decades behind in analytics. High-stakes decisions rely on incomplete data and systems that are slow, manual, error-prone, and expensive. The result is billion-dollar missteps, extended timelines in an industry already operating on decade-long horizons, and delays in bringing life-saving medicines to patients.

Despite rapid advances in LLMs, these breakthroughs have yet to meaningfully change how strategic decisions are made in biotech. Today’s LLMs are fluent language tools. They can summarize dense papers, extract entities, or polish prose, but language is not the same as evidence-backed strategy.

Biotech demands reasoning across fragmented, multi-modal data: trials, patient outcomes, regulatory precedent, and competitive context, all shifting in real time.

Addressing this calls for platforms that are purpose-built for biotech, fluent in the networked language of biology, and capable of turning vast, messy evidence into actionable, trusted insights. Only with tools like these will strategic decisions in biotech and healthcare match the rigor the field demands.

In this whitepaper, we (1) examine how strategic decisions in biotech are made today, (2) show where today’s LLMs help and where they fail, (3) propose a bio-native architecture built on a shared data foundation, multi-hop reasoning, and UX-driven validation loops, and (4) introduce LABI, Luma Group’s AI for Biotech Intelligence, and how we aim to apply these principles in practice.

How Are Strategic Decisions Made in Biotech Today?

Consider a typical diligence scenario to evaluate a new therapeutic candidate. You ask an AI system: “What is the current standard of care in this disease, and how does this candidate compare on endpoints, patient selection, safety, and competitive position? Pull the underlying trials, relevant patient datasets, regulatory precedents, and commercial context, and return it as a citation-backed memo with tables.”

It is a compelling vision: a single click yielding evidence-backed answers that can be trusted. But when pushed with inquiries like this, today’s LLMs fall short. Results are often incomplete and inaccurate, with hallucinations that make them unfit for high-stakes strategic decisions.

Beyond these limits, the diligence process in biotech is uniquely complex and fundamentally different from generalist investing. It requires deep, cross-functional expertise across domains such as foundational biology, discovery and development, clinical data, regulatory precedent, reimbursement strategy, and commercial dynamics. Each domain is a fragmented and hard-to-reach pool of millions of data points that are still largely hunted down and tracked manually. Connecting them into a coherent view requires months of labor-intensive and error-prone work. Moreover, the landscape is constantly shifting. As the right information surfaces, connections emerge both within and across domains, which then need to be tested against multiple possible outcomes. A failure in a novel class, for example, can cascade across development, commercial potential, and competitor pipelines, inducing a rippling effect across domains.

To address these complexities, the industry still takes a piecemeal approach: scaling by headcount, subscribing to multiple data platforms, augmenting existing workflows with generic AI, and tracking and pulling data into manual workflows. As a result, high-stakes biotech decisions are made with fragmented, incomplete data, through manual, error-prone processes that risk costly missteps.

Figure 1: Biotech decision-making today: High-stakes biotech decisions are made with fragmented, incomplete data, through manual, error-prone processes that risk costly missteps.

AI’s Impact and Limitations in Biotech Decision Making

The real constraints on AI in biotech are not computational power or algorithms but rather data and complexity.

Existing LLM successes hit complexity walls: Even where LLMs work well, they struggle with complexity. Code-writing AI gets immediate feedback when code compiles or breaks, yet it fails with large codebases. Travel-planning LLMs can pull from standardized databases but fail when preferences conflict or constraints shift. These examples show that AI breaks down when reasoning must span interconnected systems, even with built-in advantages like immediate feedback and standardized data.

Biology uniquely amplifies these challenges: Biotech faces the same complexity scaling problems without any structural advantages. The heterogeneity of biological data creates fundamental problems. Clinical trials for identical indications might report results in completely different formats, representing disagreements about what constitutes meaningful measurement in biological systems. Reasoning becomes exponentially more complex when drug interactions, competitive landscapes, protein networks, and patient populations are layered on.

Figure 2: Complexity and network effects in biotech: Biotech complexity emerges from connections: foundational biology to clinical data, regulatory strategy to reimbursement, market dynamics to competitive intelligence. Each layer interlocks, creating numerous reasoning challenges.

User experience solutions remain unexplored: Other domains experiment across the full spectrum from autocomplete to autonomous agents, yet even well-resourced teams are pivoting back to human-in-the-loop systems for complex tasks. Biotech has barely explored beyond basic autocomplete. This represents an opportunity: biotech can learn from experimentation in other domains and design purpose-built solutions for its unique data structures and reasoning requirements.

Building a Bio-Native Intelligence Platform

Building AI systems that can reliably reason across interconnected biotech datasets requires three fundamental design principles:

Get the data in one place, and one language: Before clever prompts, we need a common evidence layer of disparate data pulled into a standard format with clear labels. Today, the facts live in a hundred sources, from registries to PDFs to supplements to figure panels, and none of the sources describe their findings the same way. While industries like travel planning have structured data and flight or hotel aggregators, biology lacks both standardization and aggregation.

LLMs should read messy sources, suggest mappings, line up fields that mean the same thing, and flag conflicts for a human to review, but the shared format and hub must come first. As data lands in this structure, simple checks keep things comparable: units and scales match, time frames line up, patient groups are matched, and endpoints mean the same thing. Every fact keeps its source, date, and version, so teams can trace provenance and monitor changes. The hub should also be built in a graph-native format, so platforms can follow and justify links and support the networked reasoning biology requires.
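To make this concrete, here is a minimal sketch of what one harmonized record in such an evidence layer might look like; the field names are our illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceFact:
    """One harmonized fact in a shared, provenance-aware evidence layer."""
    entity: str   # e.g. a trial ID ("NCT01234567") or a target ("PCSK9")
    field: str    # harmonized field name, e.g. "orr_pct"
    value: float
    unit: str     # canonical unit after conversion
    cohort: str   # patient group the value describes
    source: str   # registry entry, DOI, or PDF the fact was extracted from
    as_of: str    # version date of the source, ISO 8601
    version: int  # bumped whenever the source is re-ingested

def comparable(a: EvidenceFact, b: EvidenceFact) -> bool:
    """Facts may be compared only when field, unit, and cohort line up."""
    return (a.field, a.unit, a.cohort) == (b.field, b.unit, b.cohort)
```

Because every fact carries its source, date, and version, downstream answers can cite and re-verify their inputs rather than trusting a model’s memory.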

Build models that reason over networks: Networks underpin biology, so our systems must reason over links, not just lines of text. Defensible answers require walking through several connected steps, so-called “multi-hop reasoning”. Modern LLMs are strong at single lookups and fluent summaries, but when multi-hop reasoning is required to respond to a user, they often lose track of constraints, mix cohorts, or skip checks.

At the system layer, perhaps that means moving from “chain-of-thought” to “web-of-thought”: approaches that can traverse multiple paths in a graph at once, check constraints, and reconcile conflicts before proposing an answer.

At the model layer, we might give the model a map of these connections and make those links matter. Instead of treating everything as words in a row, the model should score and select hops (e.g., from a target to its pathways, then to assays and cohorts) and weigh how strong each link is. Along the way, we can encode simple and transparent requirements, so relationships shape the result, not just nearby text. Adopting these principles should enable us to get answers that are traceable, comparable, and reproducible, the qualities high-stakes biotech decisions require.
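A minimal sketch of the traversal idea follows, assuming a toy graph with named relations. Real systems would add constraint checks and link weights, but even this shape shows how an answer can carry the chain of links that justifies it:

```python
from collections import deque

# Toy evidence graph: node -> list of (relation, neighbor) edges.
graph = {
    "TARGET_X":  [("in_pathway", "PATHWAY_A")],
    "PATHWAY_A": [("assayed_by", "ASSAY_1"), ("implicated_in", "DISEASE_D")],
    "ASSAY_1":   [("measured_in", "COHORT_7")],
}

def multi_hop(start: str, goal: str, max_hops: int = 4):
    """Breadth-first traversal that returns the full relation path, so the
    answer carries the chain of links that justifies it."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        if len(path) >= max_hops:
            continue
        for relation, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, relation, nxt)]))
    return None  # no defensible path within the hop budget

print(multi_hop("TARGET_X", "COHORT_7"))
# [('TARGET_X', 'in_pathway', 'PATHWAY_A'),
#  ('PATHWAY_A', 'assayed_by', 'ASSAY_1'),
#  ('ASSAY_1', 'measured_in', 'COHORT_7')]
```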

Figure 3: Chain-of-thought vs. Web-of-thought: Left: a linear “chain-of-thought” compresses evidence into one sentence, ignoring dependencies and hidden constraints. Right: a web-of-thought maps multi-hop links and conditions for action. Biology is a network, not a list, so credible answers require multi-hop reasoning with checks and citations, not a single pass-through text.

Create a user experience that provides validation and feedback loops: Unlike coding LLMs, where feedback is immediate, in biotech the impact of a single choice may not be clear until years later in the clinic or market. Reinforcement learning solved similar challenges through simulation, as in AlphaGo and AlphaZero, but full simulations of biotech investment decisions are not feasible. Adversarial AI, such as digital twin investors that stress-test recommendations, is a possible long-term path but remains speculative. A more immediate approach is to use interaction patterns as training signals. Analysts already perform checks and cross-validations that could be standardized. Systems can learn from explicit corrections as well as implicit behaviors: which trial comparisons are explored or ignored, which analyses are saved or discarded, and how experts navigate between drug mechanisms. In this way, the interface itself becomes validation infrastructure, creating feedback loops that biotech has lacked, offering a new path to make AI trustworthy for guiding high-stakes decisions.

Taken together, these principles shift biotech analysis from scattered, manual workflows to something closer to the “single-click diligence” vision. Modernized data pipelines provide standardized, queryable inputs. Network-native models generate multi-hop insights that reflect the interconnected structure of biology, and validation built into the user experience ensures systems continuously refine their domain expertise. The result is not just fluent AI, but trusted decision infrastructure capable of analyzing and supporting the high-stakes choices that define biotech investing and strategic decision making.
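Returning to the interaction-logging idea above, one hedged sketch of how analyst behavior could become a training signal; the event types and reward weights are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    """One analyst action captured as a weak supervision signal."""
    user: str
    action: str  # e.g. "saved_memo", "compared_trials", "discarded_analysis"
    target: str  # identifier of the object acted on

# Hypothetical reward weights: saves are strong positives, discards negatives.
REWARD = {"saved_memo": 1.0, "compared_trials": 0.3, "discarded_analysis": -0.5}

def to_training_signal(events: list[InteractionEvent]) -> list[tuple[str, float]]:
    """Map implicit analyst behavior to (target, reward) pairs for ranking."""
    return [(e.target, REWARD.get(e.action, 0.0)) for e in events]

log = [
    InteractionEvent("analyst1", "compared_trials", "NCT111-vs-NCT222"),
    InteractionEvent("analyst1", "discarded_analysis", "endpoint-table-9"),
]
print(to_training_signal(log))
# [('NCT111-vs-NCT222', 0.3), ('endpoint-table-9', -0.5)]
```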

Why we built LABI

As scientists, operators, and investors, we make decisions that demand comprehensive coverage of the data landscape. We need foundational biology, clinical results, regulatory shifts, and more, distilled into clear signals and continuously updated to reflect the dynamic nature of the data. Over the past two years, we have tested existing analytical and AI tools in our workflows and observed firsthand what many across the industry also recognize: generic systems fall short in biotech. 

This gap is why we built LABI, Luma Group’s AI for Biotech Intelligence. LABI is an end-to-end platform purpose-built for biotech, designed to deliver comprehensive coverage of all critical data sources, translate that information into validated, traceable, decision-ready insights, and ensure those insights are maintained in real-time as new evidence becomes available.

Our team has been addressing the foundational limitations of applying generic AI to biology by building a biology-native platform. Rather than augmenting existing workflows with off-the-shelf tools, we are designing LABI from the ground up to capture the full complexity of the field.

LABI aggregates, curates, and harmonizes critical data pools, including peer-reviewed manuscripts, patient-level datasets, biorepositories, clinical trial records, regulatory filings, and commercial and financial datasets, spanning millions of sources. This enables LABI to speak the language of biology (graphs, tables, statistical analyses, and more) and to reason in a highly networked, real-time manner, so insights are both comprehensive and accurate.

At its core, LABI makes biology navigable. It organizes genes, proteins, pathways, samples, and outcomes into a connected map, so comparisons are valid. Every answer is anchored in evidence and linked directly back to sources to eliminate the frustration and dangers of hallucinated outputs. Agentic AI workflows continuously scan and cross-check new information, surfacing what has changed, when it changed, and why it matters. The platform incorporates feedback loops so that every interaction strengthens evidence prioritization, sharpens comparisons, and hardens checks, with improvements compounding over time.

 LABI is the platform we long wished we had as biotech investors, a platform that helps uncover the most meaningful data and insights in a field defined by complexity and constant change. Making the wrong choice or missing the right one has enormous consequences: missed signals can cost billions, delay decade-long timelines, and ultimately slow the delivery of life-saving medicines. The impact of this challenge extends far beyond capital allocation, touching everyone in healthcare who makes high-stakes strategic choices, from clinical development to corporate strategy. This is a mission we are deeply committed to, and we look forward to sharing more as it evolves.

Acknowledgements

Luma Group would like to acknowledge Manav Kumar for his thoughtful discussion and contributions to this white paper.

The Case for Venture Capital in Biotechnology

Jamie Kasuboski, Partner at Luma Group

July 2025

There are many perceptions of what a venture capitalist (VC) is. In the biotech space, VCs are enigmatic because many of us never imagined becoming VCs, and many future VCs might have no idea they’re headed down that path either.

My story, like many in biotech, started with a passion to help sick people. I was fortunate to discover my passion at the age of six, when I told my parents that I wanted to be a genetic engineer. I didn’t fully understand what that entailed, but the film Jurassic Park sparked my curiosity. I was fascinated by the idea that nature had invented biological Legos called DNA that could be assembled to create humans, sea slugs, bananas, bacteria, mold, and, most intriguingly, entirely new life forms.

Less than two decades later, I received my PhD in Molecular and Cellular Biology. I loved deciphering nature’s clues and genetic codes to figure out the “how” and “why” in nature’s playbook. However, I didn’t yet know how to translate this knowledge from the lab into life-saving therapies. I continued my research journey post-PhD and eventually landed an industry postdoc position at Pfizer. There, I soaked up every fact, lesson, piece of jargon, and process required to take an initial discovery and turn it into a drug. At that moment, something clicked for me: my true passion wasn’t just making discoveries; it was figuring out how to transform those discoveries into medicines capable of helping patients. My time at Pfizer also made it clear that I wasn’t ready to join an organization that large for the long term. Thankfully, after years of trying different career trajectories and with the help of some great mentors, it became clear that biotech venture capital uniquely aligned my personal goal of contributing to life-changing therapies with my professional goal of earning a living while pursuing that passion.

As they say, “Do what you love, and you’ll never work a day in your life.”

Introduction: Venture Capital as the Lifeline of Biotechnology

Translating laboratory discoveries into lifesaving treatments for patients is a lengthy, capital-intensive, and extraordinarily challenging journey mired in constant failures and setbacks. It requires collaboration across multiple stakeholders, from academic researchers, biotechnology companies, and contract research organizations to regulators and pharmaceutical companies. Unlike other sectors, such as technology or manufacturing, biotech innovation rarely progresses within a single organization. Instead, technologies pass through specialized ecosystems and subsectors, undergoing numerous collaborations and handoffs along the way. Additionally, unlike the tech and manufacturing sectors, which scale by repeatedly iterating the same products, biotech and pharmaceutical companies must continuously innovate or acquire innovation due to IP lifecycle constraints. With all these challenges and transitions, venture capital has emerged as a critical force shaping and driving this complex process, uniquely tolerating the industry’s substantial financial demands, prolonged timelines, inherent uncertainties, and high failure rates.

Investing in biotechnology is not for the faint-hearted. The investment characteristics are poorly matched to traditional financial institutions like banks, later-stage private equity, and public equity markets, which favor shorter investment horizons and lower-risk ventures. Without substantial funding, on the order of hundreds of millions of dollars, these discoveries and innovations will never translate into life-saving treatments. Venture capital firms provide the lion’s share of this critical funding, uniquely equipped with the expertise, alignment, capital, and passion (defined here as the endurance to persist over time, not just enthusiasm) to bridge the risky gap between discovery and clinical development.

A Brief History of Drug Development and the Relationship with Biotech VC

Drug development historically involved large pharmaceutical companies undertaking extensive internal research and development, characterized by decades-long timelines, massive investment requirements, and high failure rates. In the 1970s and 1980s, the emergence of biotechnology, particularly the groundbreaking ability to genetically engineer proteins, revolutionized drug discovery. Biotech startups leveraged academic science and nimble innovation to drive new therapeutic breakthroughs, shifting the paradigm from traditional pharma-dominated R&D to a more collaborative and innovative ecosystem. The rapid rise and success of these startups hinged significantly on venture capital contributions, which provided essential early-stage, patient capital to overcome high-risk barriers and bridge the gap between academic research and commercialization.

Together, VCs, researchers, and entrepreneurs brought transformative innovations, such as monoclonal antibodies, gene therapies, RNA-based technologies, and others to market. This laid the groundwork for modern pharmaceuticals and reshaped a slow-moving, conservative market into the innovation-driven, dynamic ecosystem it is today, improving billions of lives in the process.

Figure 1: Notable Examples of VC-Driven Biotechs.

The Innovation Funding Gap in Biotech

Typically, it takes over a decade and billions of dollars to guide a promising discovery from initial research through clinical trials and ultimately into the market.1,2 Complicating matters further is the phenomenon known as “Eroom’s Law” (Figure 2), highlighting the troubling paradox that, despite technological advances, drug discovery productivity has steadily declined, resulting in escalating costs and diminishing returns.3,4 Over recent decades, the average cost of successfully bringing a new drug to market has surged dramatically, surpassing $2 billion per approved drug (depending on who you ask).5,6

These interrelated complexities have widened the innovation and funding gap: while groundbreaking academic research continues to flourish, translating these discoveries into viable commercial products has become increasingly expensive. This rising cost has significantly squeezed traditional pharmaceutical companies, intensifying their dependence on external innovation and reducing internal R&D spend. Over the past two decades, pharmaceutical companies have come to play an essential role in the clinical and commercial success of emerging therapies, serving as late-stage partners and licensees that guide innovative technologies through the final stages of development and onto the market.

Biotechnology companies, whether private or public, often lack the substantial capital, specialized expertise for late-stage clinical trials, and the robust infrastructure required for successful commercialization. As a result, biotech firms commonly out-license or sell their assets to pharmaceutical companies, leveraging pharma’s financial resources and established commercial pathways to bring new therapies to patients’ bedsides. Influenced by Eroom’s Law and economic pressures, pharma companies have grown more conservative, preferring to wait longer before adopting new technologies, effectively becoming gatekeepers for the commercialization of potentially life-saving innovations.

Figure 2: Eroom’s Law, illustrated by the historical regression showing the steady rise in drug discovery and development costs.

Source: Benchling.

It is precisely at this juncture of financial and scientific uncertainty that biotech venture capital plays an indispensable role. Passionate biotech venture capitalists actively embrace these early-stage challenges, investing precisely when others shy away to provide the bridge between groundbreaking scientific research and commercial viability. The importance of this funding model cannot be overstated; venture capital-backed companies have consistently delivered transformative therapies that shape modern medicine, and without them and their portfolio companies, the industry would quickly decline to a fraction of its size.

Venture capital support is not merely financial; it’s a strategic enabler of groundbreaking medical advances that significantly enhance human health outcomes. Through careful, informed, and visionary investments, biotech venture capital fuels innovation pipelines, enables scientific risks, and ensures that transformative ideas are not trapped in laboratories but rather brought effectively to the patients who urgently need them.

Pulse on the Current Market: When Enthusiasm Outpaces Passion

Biotech capital markets are cyclical and highly sensitive to broader economic shifts, as seen during the COVID-19 era when the XBI, the S&P’s biotech index, nearly doubled in under a year, attracting trillions in capital. This influx, driven largely by generalist investors who lacked the deep understanding required for biotech investing, fueled innovation, accelerated drug development, and inflated valuations. But it also tipped the balance away from fundamentals. When immediate returns failed to materialize, many of these investors quickly exited, redirecting capital to areas like technology, triggering an abrupt market correction that disrupted the biotech ecosystem and led to the recent downturn.

The rapid exit of generalist funds left hundreds of biotech companies facing financial strain, with low cash reserves and short runways, forcing many to scale back or shut down. This large capital void either needs to be filled by investors, which is unlikely given the sheer scale of capital needed, or there will be another market correction entailing more shutdowns, trade sales, and M&A opportunities. The mismatch between high burn rates and the long timelines needed for meaningful value inflection is pushing many promising companies into survival mode. Despite short-term pains, this correction has created compelling opportunities, as high-quality companies trade at steep discounts, offering attractive entry points for experienced investors with fresh capital and limited exposure to prior overvaluations.

Biotech VCs: The Sherpas of Innovation

Biotech venture capitalists provide more than financial resources. They serve as skilled sherpas, guiding startups through the challenging journey from early discovery, through clinical development, and ultimately to commercialization. Like Tenzing Norgay, the legendary Nepalese-Indian sherpa who helped Sir Edmund Hillary summit Mount Everest in 1953, the best VCs help startups navigate difficult terrain, carry critical burdens, and stay oriented toward the summit. They bring not only capital but also specialized experience, expansive networks, and strategic insight to each stage of a company’s evolution. So, pick your investors wisely.

VCs also fulfill an essential role as aligners. Biotech startups move through distinct phases, each with different personnel, structures, and missions. VCs ensure continuity. Moreover, effective VCs act as amplifiers, extending a company’s reach and influence within the broader ecosystem. This involves connecting startups with capital sources, facilitating interactions with pharmaceutical stakeholders, advocating externally to enhance the company’s visibility and reputation, and other critical processes.

Above all, the most distinguishing trait of the best biotech VCs is their deep-rooted passion. Many have years or decades of firsthand experience in research laboratories, biotech startups, or pharmaceutical companies, giving them an enduring resilience and ability to maintain unwavering enthusiasm despite the setbacks endemic to biotech innovation. This passion transcends mere enthusiasm; it embodies the capacity to persist through great adversity, remaining committed to the ambitious goal of bringing groundbreaking science to patients in need.

This mission requires VCs to be hands-on, guiding their portfolio through diverse challenges. With dozens of companies under management, VCs must tailor their guidance to each one. This demands not just expertise but time, attention, and strategic dexterity. As VC firms scale, managing their portfolios becomes more complex. Biotech investments often span decades, meaning a single VC firm may find itself managing multiple funds (often two to five at once), each with its own portfolio, LP base, and strategic objectives. The burden of guiding multiple companies across different stages while maintaining internal coherence stretches firms’ bandwidth and heightens the risk of misalignment.

It can be easy to overlook that venture capitalists operate their own businesses too. In addition to providing tailored support and resources to numerous portfolio companies, VCs must fundraise, manage investors, and oversee firm-level operations. Balancing these dual roles, shepherding others while running a firm, is an often-overlooked challenge. Additionally, VCs are still groups of people, prone to errors in decision-making and risk calculation, among other human shortcomings. The best biotech VCs understand this and do their best to stay aligned with fundamentals and remain disciplined even when others are not. In short, they try to over-index on passion to help patients rather than chasing short-term returns with enthusiasm.

Built for Biotech, Equipped for Impact

The world of biotech investing is as diverse as the companies and people who make up the industry. Historically, the most successful biotech investment firms have not been generalists, but specialists whose passion and focus mirror those of the scientists and entrepreneurs driving innovation. Most of these biotech funds take specialization even further, focusing their skills and strategies on specific niches within the broader biotech ecosystem.

We founded Luma Group with these same guiding principles, tailoring our strategy explicitly for the biotech industry and our portfolio. One of our first decisions was to align our fund’s capital cycle with biotech product development timelines. Specifically, we launched a 15-year fund rather than the traditional 10-year structure. This choice reflects the reality that biotech development cycles often require more time than those in technology or other sectors; forcing a 10-year investment horizon onto biotech would be a fundamental mismatch. At the heart of our philosophy is a simple, unwavering belief: if we do our job, fewer patients will suffer tomorrow. Guided by this North Star, we set out to build a firm uniquely positioned to achieve such an ambitious goal.

Given the highly regulated and empirically driven nature of the biotech sector, it should come as no surprise that the greatest predictor of success is rigorous, accurate science. Amid all the uncertainties, getting the science right is predominantly about running the correct experiments informed by historical data. This principle has shaped a key mantra within our group: “Better data leads to better decisions, which leads to better outcomes for patients.”

This mantra is what guided Luma Group to establish a dedicated research division within our funds, supported by dozens of key opinion leaders (KOLs) and advisors, from former pharmaceutical CEOs and top regulatory officials to PhDs and postdoctoral researchers. The breadth and depth of this network ensures coverage of every critical vertical across our portfolio and fund exposure:

  • Academic Experts provide visibility into groundbreaking innovations and technologies
  • Discovery and Development Specialists focus on execution and translating early-stage technologies into next-generation therapeutics, medical devices, or diagnostics
  • Clinical and Regulatory Advisors assist in navigating the rapidly evolving clinical and regulatory landscape
  • Commercial Experts help us understand pharmaceutical decision-making processes and broader market dynamics affecting our portfolio companies

To complement this network, we have developed a next-generation research platform built around proprietary analytical software known as LABI (Luma AI Brain Initiative). This platform provides both our fund and our portfolio access to trillions of curated data points through an intuitive interface that significantly reduces diligence times while leveraging advanced meta-analysis and analytics, empowering us to uncover insights others typically miss.

With these foundational elements in place, Luma Group has strategically aligned its passion and fund lifecycle with the asset class and companies that are driving innovation. With a lot of passion, hard work, and some luck, Luma Group can drive meaningful improvements in patient outcomes within our lifetime.

  1. Pisano, G. P. (2006). Science Business: The Promise, the Reality, and the Future of Biotech. Harvard Business School Press. ↩︎
  2. Booth, B. L., & Zemmel, R. W. (2004). Prospects for productivity. Nature Reviews Drug Discovery, 3(5), 451–456. ↩︎
  3. Scannell, J. W., Blanckley, A., Boldon, H., & Warrington, B. (2012). Diagnosing the decline in pharmaceutical R&D efficiency. Nature Reviews Drug Discovery, 11(3), 191–200. ↩︎
  4. Deloitte Centre for Health Solutions. (2022). Measuring the Return from Pharmaceutical Innovation 2022: Balancing the R&D equation. Deloitte Insights. ↩︎
  5. DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, 20–33. ↩︎
  6. Deloitte Centre for Health Solutions. (2022). Measuring the Return from Pharmaceutical Innovation 2022: Balancing the R&D equation. Deloitte Insights. ↩︎

Data-Driven Biotechnology: How Multi-Omics Analytics Are Shaping the Future of Medicine

Jamie Kasuboski, Partner at Luma Group and Rob Plasschaert, Senior Director of Biology at Stealth Newco

Innovation in biotechnology is driven by uncovering novel biological insights and translating them into life-saving therapeutics, diagnostics and medical devices. Over the past two decades, breakthroughs have largely stemmed from analyzing vast biological datasets, such as those generated by human genome projects.

Today, advancements in artificial intelligence (AI) and machine learning (ML) have significantly enhanced our ability to systematically analyze massive datasets, identifying complex relationships across genomic, proteomic, transcriptomic, metabolomic and other data simultaneously. The intersection of all of these “-omics” is what we define as multi-omics, which represents a large untapped domain for future biotech innovation.

The convergence of affordable, sophisticated AI/ML analytics and large-scale multi-omics data collection has marked a pivotal shift within biotechnology, from single-omics approaches to integrated multi-omics innovations.

June 2025

Introduction: Multi-Omics and the Era of Big Data in Biotechnology

Over 20 years after the first published human genome, biotechnology is now firmly a discipline of big data. Dissecting a disease mechanism means building a logical path of cause and effect that moves from origin (e.g., a genetic mutation) to disrupted biological process (e.g., a non-functional protein and pathway) to the presentation of clinical symptoms (e.g., cancer). Traditionally, the scope of this work has been limited by scale, and progress has been consistent but slow. Advances in methodology have transformed full-coverage “-ome” profiling—genome, transcriptome, epigenome, metabolome, and beyond—from bleeding-edge novelty into standard practice with increasingly rigorous quality control. We can now routinely generate terabytes of molecular measurements from a single study. We, and many others, believe that connecting these large datasets will define the next era of multi-omics drug development. This emerging field focuses on the integration and analysis of large-scale molecular and clinical data, enabling the systematic dissection of biological cause and effect at scale. By viewing molecular disease through this multi-faceted lens, researchers can identify and translate novel insights into new therapeutics, diagnostics and medical devices.

Looking Through the Compound Lens of Multi-Omics

By combining large datasets that characterize disease etiology with clinically meaningful endpoints, multi-omic analysis is poised to deliver transformative insights. High-throughput profiling of the central dogma—DNA → RNA → protein—is now routine, and assays for modulators (e.g., epigenetic marks) and downstream effectors (e.g., metabolites) have dramatically decreased in cost while increasing in sensitivity, rendering them nearly run-of-the-mill. Fifteen years ago, whole-genome sequencing cost over $10 million per genome, RNA-seq was low-throughput, proteomics relied on 2D gels, and electronic health records (EHRs) remained siloed; today, sequencing runs under $500 per sample, single-cell multi-omic kits can simultaneously profile chromatin accessibility and gene expression, and modern proteomics platforms quantify thousands of proteins in a single day.

Clinically, routine laboratory tests, digital histopathology, and imaging now feed into AI-enabled pipelines that extract multi-scale features, while EHR data—once trapped behind Epic or Cerner—are routinely exported as de-identified OMOP/FHIR–formatted datasets via research data warehouses. Public resources such as MIMIC-IV, NIH’s All of Us Research Program, and the UK Biobank exemplify how ICU telemetry, standardized lab values, and de-identified clinical notes can be linked to genomics and metabolomics under strict governance. What once required bespoke protocols, custom ETL pipelines, and extensive manual annotation has evolved into a streamlined, plug-and-play ecosystem, enabling researchers to integrate multi-omic and clinical data seamlessly and uncover biological insights that were impossible to detect a decade ago. All of these sources provide a rich array of data spanning numerous dimensions, from molecular-level insights to comprehensive patient health journeys.
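
To make the linkage step concrete, here is a minimal sketch (in Python, with pandas) of joining de-identified OMOP-style EHR extracts to an omics sample manifest. The file names, column names, and the HbA1c concept ID are hypothetical placeholders, not references to any specific data warehouse.

```python
# A minimal, hypothetical sketch of the linkage step: joining de-identified
# OMOP-style EHR extracts to an omics sample manifest with pandas.
# File names, column names, and the concept ID are illustrative placeholders.
import pandas as pd

person = pd.read_csv("omop_person.csv")        # person_id, year_of_birth, ...
labs = pd.read_csv("omop_measurement.csv")     # person_id, measurement_concept_id,
                                               # value_as_number, measurement_date
samples = pd.read_csv("omics_manifest.csv")    # person_id, sample_id, assay

# Keep one analyte of interest (e.g., HbA1c) and take each person's latest value.
HBA1C_CONCEPT = 3004410                        # illustrative OMOP concept ID
hba1c = (labs[labs["measurement_concept_id"] == HBA1C_CONCEPT]
         .sort_values("measurement_date")
         .groupby("person_id", as_index=False)
         .last())

# Attach clinical values to omics samples for downstream multi-omic modeling.
cohort = (samples
          .merge(person, on="person_id")
          .merge(hba1c[["person_id", "value_as_number"]], on="person_id")
          .rename(columns={"value_as_number": "hba1c_latest"}))
print(cohort.head())
```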

Table 1. Omic Modalities and Their Captured Insights

Molecular Mechanisms

  • Genomics: The genetic blueprint of an organism’s genome
  • Epigenomics: Reversible chemical marks (e.g., DNA methylation, histone modifications) that modulate gene expression
  • Transcriptomics: Dynamic gene-expression programs (mRNA abundance and isoforms)
  • Proteomics: Proteins, their splice variants, and post-translational modifications
  • Metabolomics: Small-molecule metabolites whose levels change with cellular activity and stress

Disease & Clinical Outcomes

  • Radiology & Functional Imaging: MRI, CT, PET, and ultrasound imaging that quantify disease states over time in various organs
  • Digital Pathology & Spatial Slides: Whole-slide histology, multiplex immunofluorescence, and spatial transcriptomics mapping cellular phenotypes to anatomical context
  • Electronic Health Records (EHR): Structured labs, vitals, medications, and procedures, plus unstructured clinical notes collected across years of care
  • Longitudinal Laboratory Panels: Serial hematology, chemistry, and biomarker tests (e.g., HbA1c, troponin) tracking disease progression or therapeutic response

Why is Multi-Omics Poised to Make an Impact Now?

Multi-omics approaches use multidimensional datasets whose complexity often surpasses the capabilities of classical statistical methods. Although earlier computational approaches were effective, recent advances in AI/ML have not only dramatically increased computational power but also enabled seamless integration across multiple datasets, each with its own unique architecture. By integrating neural networks—particularly advanced deep‐learning architectures, graph neural networks, and probabilistic causal frameworks—researchers can now uncover insights and identify connections that were too subtle for earlier computational methods, turning analytical challenges into strengths and offering a powerful means to decipher biological complexity. AI models integrate heterogeneous data modalities—such as DNA variants, RNA expression counts, protein abundances and metabolite concentrations—into unified latent representations, preserving critical biological interactions across layers. For example, alignment models facilitate a comprehensive understanding of complex systems by embedding different data types into shared latent spaces, maintaining biological coherence across diverse “-omic” layers.1 Researchers have begun to apply these techniques to profile immune cells directly from patient samples. Dominguez Conde et al. (2022), for instance, took early steps toward characterizing immune cells in both healthy individuals and diseased patients, aiming to understand how their multi-omic profiles shape the immune system’s adaptation and function in different tissue environments.2
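
As a toy illustration of the shared-latent-space idea, the sketch below (PyTorch) trains two modality-specific encoders to project paired RNA and protein profiles of the same samples into one latent space. The dimensions and the simple alignment loss are our own illustrative choices, not a specific published architecture.

```python
# Illustrative sketch: two modality-specific encoders projecting paired RNA
# and protein profiles into one shared latent space. All dimensions and the
# simple alignment loss are assumptions for demonstration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

rna_enc = Encoder(n_features=2000)    # e.g., 2,000 gene-expression features
prot_enc = Encoder(n_features=500)    # e.g., 500 protein abundances
opt = torch.optim.Adam(
    list(rna_enc.parameters()) + list(prot_enc.parameters()), lr=1e-3
)

rna = torch.randn(64, 2000)           # toy paired batch: 64 samples,
protein = torch.randn(64, 500)        # each measured on both modalities

for step in range(100):
    z_rna, z_prot = rna_enc(rna), prot_enc(protein)
    # Pull the two views of each sample together in the shared space.
    loss = ((z_rna - z_prot) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real integration models add reconstruction or contrastive objectives on top of this alignment term so the shared space stays informative rather than collapsing to a trivial point.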

Moreover, AI-based methods significantly improve data quality through noise reduction and imputation. Techniques such as autoencoders and diffusion models reconstruct missing values, correct batch effects and enhance the signal-to-noise ratio in noisy assays. Variational autoencoders, for instance, have been successfully employed to impute missing data in single-cell multi-omics, dramatically enhancing analytical robustness.3 Additionally, supervised deep-learning models trained on clinical endpoints—including patient survival, relapse rates and therapeutic response—can accurately link complex molecular patterns to clinically relevant outcomes. These models distill intricate biological signatures into actionable insights, thereby accelerating precision medicine initiatives and facilitating personalized therapies (Lee et al., 2020).4
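
The following sketch shows the core mechanic behind such imputation methods: a small denoising autoencoder trained to reconstruct randomly masked entries of an expression matrix. It is a minimal stand-in for the VAE-based tools cited above; matrix size, masking rate, and architecture are illustrative assumptions.

```python
# Minimal sketch of imputation by masked reconstruction: a small denoising
# autoencoder learns to fill in randomly hidden entries of a toy expression
# matrix. Matrix size, masking rate, and architecture are assumptions.
import torch
import torch.nn as nn

x = torch.randn(512, 1000).abs()      # toy expression matrix (cells x genes)

model = nn.Sequential(
    nn.Linear(1000, 128), nn.ReLU(),  # compress to a low-dimensional code
    nn.Linear(128, 1000),             # reconstruct the full profile
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    mask = (torch.rand_like(x) > 0.2).float()   # hide ~20% of entries
    recon = model(x * mask)
    # Score reconstruction only on hidden entries, so the network truly
    # learns to impute rather than copy its input through.
    loss = (((recon - x) * (1 - mask)) ** 2).sum() / (1 - mask).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fill missing entries with the model's predictions, keep observed values.
mask = (torch.rand_like(x) > 0.2).float()
imputed = torch.where(mask.bool(), x, model(x * mask).detach())
```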

Foundation models trained on extensive multi-omics are increasingly valuable for generating testable biological hypotheses. These models predict causal interactions, protein structures and even simulate the effects of targeted genetic or pharmacological interventions. For instance, AlphaFold and similar AI systems demonstrate how computational predictions can effectively precede laboratory validation, dramatically shortening the cycle from data collection to biological discovery.5 AI is a critical translator, converting dense, molecular-level information into meaningful clinical insights and actionable therapeutic strategies, thereby bridging the gap between complex multi-omics data and tangible patient benefits.

Table 2. Standard Omic workflow


A Case Study in Low-Throughput Translation: γ-Secretase in AD

Consider γ-secretase inhibition in Alzheimer’s disease. Decades of biochemistry showed that γ-secretase generates amyloid-β peptides that aggregate into neurotoxic plaques; blocking the enzyme seemed like a slam-dunk therapeutic strategy. Yet Lilly’s semagacestat—a potent γ-secretase inhibitor—failed spectacularly in Phase III trials: cognition worsened faster than in the placebo group, and adverse events spiked. One explanation is that amyloid processing is only one facet of a vast neurodegenerative network; γ-secretase also cleaves Notch receptors and other crucial substrates. If researchers had had a multi-omic, systems-level view of neuronal biology—linking genomic risk alleles, transcriptomic stress responses, proteomic pathway crosstalk and metabolic dysfunction—they might have predicted these liabilities before thousands of patients were exposed. The lesson is clear: single-node interventions based on incomplete models can backfire.

Challenges That Remain

Although early precision medicine successes emerged from single-omic approaches, multi-omics strategies—despite their promise—face critical implementation hurdles. Foremost is data quality and standardization: unlike genomic sequencing’s unified formats, multi-omics suffers from inconsistent sample collection, processing methods and metadata curation, limiting cross-study comparability. A further obstacle is interpretability; predictive models often function as “black boxes,” failing to provide the transparent mechanistic insights regulators and clinicians require for trust and actionable decisions. Lastly, the scalability of experimental validation remains constrained, with wet-lab confirmation via phenotypic screening, organoid systems and CRISPR perturbations lagging far behind computationally generated hypotheses.

These bottlenecks—poor data standardization, opaque models and limited validation—represent significant barriers to realizing multi-omics’ clinical potential. Even so, research continues to chip away at them, pairing new predictive models with more efficient and standardized wet-lab data collection. The AI/ML approaches helping to close these gaps are explored in the next section.

Table 3. Current Omic Bottlenecks and Pain Points

2. AI & Machine Learning Make Multi-Omics Possible

AI tools are rapidly reshaping the landscape of biomedical research by solving longstanding challenges in data analysis, interpretation and utilization. Traditional methods in biology and medicine frequently face bottlenecks related to scale, accuracy and speed, limiting discovery and clinical translation. Today’s AI-powered tools offer unprecedented precision, automation and analytical depth, poised to resolve critical choke points throughout the multi-omics workflow—from initial raw signal cleanup to sophisticated drug design. Below, we highlight specific examples of the impact these AI tools are already making, along with a selection of emerging use cases. (For a deeper dive into our philosophy—more data isn’t better data; curated data is better data—see our prior AI white paper.)

For instance, one of AI’s transformative capabilities lies in converting biological sequences into accurate three-dimensional protein structures. Traditionally, researchers relied on experimental techniques like wet-lab protein crystallography, a process that could take months per protein and left most proteins structurally unresolved. AI-based solutions, exemplified by AlphaFold, have dramatically changed this reality. AlphaFold has generated readily accessible structural models for nearly 200 million proteins, empowering vaccine developers and enzyme engineers to rapidly obtain atomic-level detail in seconds instead of months.6
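
For readers who want to try this, the snippet below retrieves a predicted structure from the public AlphaFold Database by UniProt accession. The URL pattern and model version reflect the database’s published convention at the time of writing and may change; the accession used is just an example.

```python
# A small sketch of retrieving a predicted structure from the public
# AlphaFold Database by UniProt accession. The URL pattern and model
# version reflect the database's published convention at the time of
# writing and may change; the accession is just an example.
import urllib.request

accession = "P69905"  # human hemoglobin subunit alpha, as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as resp:
    pdb_text = resp.read().decode()

print(pdb_text.splitlines()[0])              # header of the predicted model
with open(f"{accession}_alphafold.pdb", "w") as fh:
    fh.write(pdb_text)
```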

AI also significantly improves the quality and usability of genomic data. While advanced sequencing technologies such as long-read sequencers deliver critical insights, they often come with higher error rates compared to short-read counterparts. AI-driven models, including Google’s DeepVariant, address this issue by effectively “cleaning” raw genomic reads and boosting variant-calling accuracy to near-clinical standards.7 Such tools dramatically reduce the manual quality control burden and time—shaving weeks off the analysis pipeline and enabling faster translation from genomic discovery to clinical action. Additionally, AI facilitates the annotation and interpretation of complex single-cell datasets, a process that is notoriously labor-intensive and prone to subjectivity. Traditional manual annotation of million-cell datasets is both slow and variable across annotators. AI-driven solutions, such as the open-source popV ensemble, systematically assign cell-type annotations along with confidence scores. This automated process highlights only the ambiguous 10–15% of cells for expert review, significantly accelerating workflows and ensuring higher consistency and reproducibility across analyses.8
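
The triage logic itself is simple to express. The sketch below is not the popV API; it is an illustrative pandas pass over ensemble annotations, auto-accepting high-confidence calls and flagging the rest for expert review (the column names and 0.85 cutoff are assumptions).

```python
# Illustrative triage pass (not the popV API): auto-accept high-confidence
# ensemble cell-type calls and flag ambiguous cells for expert review.
# Column names and the 0.85 cutoff are assumptions.
import pandas as pd

cells = pd.DataFrame({
    "cell_id": ["c1", "c2", "c3", "c4"],
    "predicted_type": ["T cell", "B cell", "monocyte", "T cell"],
    "consensus_score": [0.98, 0.91, 0.62, 0.74],  # ensemble agreement
})

CUTOFF = 0.85
cells["needs_review"] = cells["consensus_score"] < CUTOFF
accepted = cells[~cells["needs_review"]]
flagged = cells[cells["needs_review"]]
print(f"auto-accepted {len(accepted)} cells; {len(flagged)} flagged for review")
```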

AI excels at integrating multi-omic data streams—such as DNA, RNA, and digital pathology—to create comprehensive predictive models. While individual biomarkers often fail to capture the complexity of diseases, AI-based multi-omic fusion models achieve remarkable accuracy. Recent pan-cancer studies employing deep-learning techniques have successfully combined diverse data types into unified survival risk scores. These AI-derived scores have consistently outperformed traditional stage-based predictions, delivering superior prognostic accuracy across studies involving more than 15,000 patients.9
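
A minimal version of such late fusion can be sketched with a classical Cox model: concatenate per-modality features into one design matrix and fit survival outcomes. The data below are synthetic and the feature names are placeholders; published fusion models typically learn per-modality embeddings with deep networks before a step like this.

```python
# Minimal late-fusion sketch: concatenate per-modality features into one
# design matrix and fit a Cox proportional-hazards model on synthetic data.
# Feature names are placeholders for learned per-modality embeddings.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "dna_burden": rng.normal(size=n),   # e.g., mutation burden
    "rna_score": rng.normal(size=n),    # e.g., expression signature
    "path_score": rng.normal(size=n),   # e.g., digital-pathology feature
})
true_risk = 0.8 * df["dna_burden"] + 0.5 * df["rna_score"]
df["duration"] = rng.exponential(scale=np.exp(-true_risk))  # survival times
df["event"] = (rng.random(n) < 0.7).astype(int)             # 1 = observed

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
df["risk_score"] = cph.predict_partial_hazard(df)           # fused risk score
print(cph.summary[["coef", "p"]])
```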

Collectively, these advancements demonstrate AI’s potential to transform biomedical research, delivering faster, more precise and clinically relevant insights at unprecedented scales.

Table 4: Real-World Challenges and Examples of AI-enablement

  • Too much data, not enough insight: AI spots hidden warning-sign patterns that clinicians would never have time to sift out manually. Example: a machine-learning screen of newborn blood samples uncovered a 4-gene “early-warning” fingerprint for sepsis, flagging babies days before symptoms appeared.10
  • Different hospitals use incompatible equipment: AI lines up images or lab results from many sites so they can be compared as if they came from one scanner or one lab. Example: an AI harmonization tool lets researchers combine breast-MRI databases into one study, boosting the accuracy of tumor-detection software across hospitals.11
  • Important signals are buried in noise: AI cleans and sharpens data, filtering out scanner glitches or stray measurements. Example: in lung-cancer screening, an AI system that de-noised CT scans spotted malignant nodules earlier and with fewer false alarms than expert radiologists.12
  • Key test results are missing: AI predicts likely values or tells clinicians which single test would add the most value, cutting down on repeat blood draws. Example: a study showed that AI imputation could reliably fill in missing lab results in electronic health records for stroke and heart-failure patients, improving risk models without extra testing.13
  • Translating big data into clinical outcomes is difficult: AI converts continuous streams from wearables into medically meaningful alerts. Example: the 400,000-participant Apple Heart Study used an AI algorithm in a smartwatch to flag atrial-fibrillation episodes with 84% accuracy, prompting users to seek timely care.14

3. Factors Driving Growth in Multi-Omics & AI

Three forces—capital, capability and clinical pull-through—are reinforcing one another and accelerating adoption of multi-omics platforms and solutions in today’s market environment.

I) Table 5: Plentiful capital for big data bioscience

  • Capital for AI platforms is abundant: Investors see multi-omics + AI as the next big moment for biotech. Example: global multi-omics platform revenue is expected to nearly double, from ≈$2.7bn (2024) to $5bn (2029).15
  • Big rounds for data-centric start-ups: Larger war chests let companies build both wet-lab and compute infrastructure. Example: the 20 biggest biotech start-ups raised $2.9bn in Q1 2024, many with AI/multi-omics pitches.16
  • The generative-AI boom spills into life sciences: General-purpose GenAI tools lower the barrier to sophisticated modeling. Example: venture funding for GenAI hit $45bn in 2024, up roughly 2× year over year.17
  • Strategic pharma partnerships: Pharma licenses data access and co-develops AI platforms instead of building in-house. Example: UK Biobank’s new AI-ready proteomics program launched with 14 pharma partners.

II) Capabilities for data generation and analysis continue to improve

The cost and economics of generating multi-omics data are changing rapidly: the price of whole-genome sequencing, once counted in the thousands of dollars, is now approaching the USD 200 threshold promised by the latest high-throughput instruments, effectively removing cost as the principal barrier to large-scale genomic studies.18 Parallel progress in proteomics has reduced mid-plex panel costs to well under USD 20 per sample, broadening access to routine protein profiling. At the same time, data resources have expanded in both depth and breadth. The UK Biobank has released metabolomic measurements for approximately 121,000 participants—and complementary panels quantifying roughly 3,000 plasma proteins—thereby creating an unprecedented reference for population-scale multi-omics analyses.19,20 These volumes would be unmanageable without a concurrent maturation of cloud-computing infrastructure. On-demand GPUs and browser-based “auto-ML” notebooks now allow investigators to execute multi-omic workflows that once required institutional high-performance clusters, placing advanced analytics within reach of modestly resourced laboratories. Finally, the regulatory climate is becoming markedly more receptive. Recent FDA guidance on the use of real-world evidence and tissue-agnostic companion diagnostics explicitly acknowledges integrated molecular signatures as acceptable decision-making inputs, thereby creating a clearer path from multi-omic discovery to clinical implementation.

III) Table 6: Early efforts point to big possible successes in the space

4. Luma Group’s Position and Vision

Scientific progress hinges on more than just data—it depends on the ability to make sense of it to improve outcomes for patients. At Luma Group, we invest in companies that are redefining how data is used to shape the future of drug discovery and development. While modern research tools can now produce unprecedented volumes of biological information—from single-cell sequencing to proteomic and metabolomic profiling—sheer quantity doesn’t guarantee clarity. The true advantage lies in the ability to connect disparate data streams, uncover hidden patterns and translate them into actionable insights.

For complex diseases, a holistic understanding of interrelated datasets is crucial to deciphering disease biology. Ultimately, these data function as an interconnected system, and forward-thinking companies increasingly rely on AI and ML to analyze these enormous datasets, revealing the complexities of many unmet medical needs and opening the door to new breakthroughs. We believe we are transitioning out of the genomics era and into the multi-omics era—one where integrated datasets and advanced analytical tools will transform the way we discover, develop and deliver the next generation of therapeutics.

Luma Group continues to champion this new era of innovation by focusing on companies that harness large multi-omics datasets alongside advanced AI/ML approaches. One of our earliest investments, Rome Therapeutics, built its discovery engine around the often-overlooked repeatome, the large portion of the human genome that does not code for protein. By mining large patient datasets, the Rome team pinpointed a key link between the repeatome and pathways involved in autoimmunity, ultimately uncovering LINE-1 as a novel target with broad therapeutic potential across multiple autoimmune indications.

We invested in Curve Bio because they applied similar multi-omics analysis to diagnostic applications. Their AI/ML-powered platform sifts through massive datasets to detect subtle changes in the methylation patterns of cell-free DNA—changes that strongly correlate with disease progression, particularly in liver tissue. These insights exceed standard-of-care detection and other methods in both sensitivity and specificity, offering significant promise for earlier and more accurate diagnoses.
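
As a purely illustrative sketch (not Curve’s proprietary platform), the snippet below summarizes cell-free DNA methylation as per-region beta values and fits a simple disease-versus-healthy classifier on synthetic data.

```python
# Purely illustrative sketch (not Curve's proprietary method): summarize
# cell-free DNA methylation as per-region beta values, then fit a simple
# disease-vs-healthy classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_regions = 200, 50
labels = rng.integers(0, 2, n_samples)        # 1 = liver disease, 0 = healthy

# Beta value per region = methylated reads / total reads, bounded in [0, 1].
beta = rng.beta(2, 5, size=(n_samples, n_regions))
beta[labels == 1, :5] += 0.25                 # simulated hypermethylation
beta = beta.clip(0, 1)

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, beta, labels, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```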

Our investment in Character Biosciences represents an archetype of Luma’s multi-omics investment strategy. As we recently detailed in a separate piece, Character Bio integrates ocular imaging, genetic profiles, metabolomics and patient natural histories to study dry AMD. The company uncovered novel biological pathways and therapeutic targets by applying AI/ML techniques to these massive datasets. With two lead assets set to enter the clinic within the next 12 months, Character Bio is on course to become one of the first to deliver approved therapies built on a multi-omics foundation, powered by advanced AI/ML analysis.

Our fund is optimistic that innovation within our sector will continue to grow as we leverage large multi-omics datasets with advanced AI/ML. At the heart of this growth are new, innovative tools and approaches that will lower the cost of generating these extensive datasets beyond just single-omics. We have begun to see a shift in how large consortiums are expanding their “-omics” footprint beyond just genomics. One prominent initiative that has embraced the power of multi-omics is the UK Biobank. This program has enrolled over 500,000 volunteers who will donate information—including biological samples, physical measurements, body and brain imaging data, bone density data, activity tracking and lifestyle questionnaire data—over the span of 30 years. Beyond genomics, the Biobank collects proteomic, metabolic, MRI imaging, natural history and other key datasets to holistically understand how these historically disparate data types interact. Their goal is to translate these insights into novel findings that can inform the development of new therapeutics and diagnostics.

Over the last decade, we have seen other private and public initiatives, such as the All of Us initiative and Project Baseline, set out to gather similarly large multi-metric data for the same purpose. The key to capitalizing on these datasets lies in understanding subtle and often hidden connections within them—an approach made possible through AI/ML methods that can identify insights too complex or minute for human intuition alone. In our portfolio companies, we have observed how crucial AI/ML approaches are for extracting valuable information and maximizing the potential of these datasets. This trend—integrating AI/ML with the collection of massive datasets through new and innovative tools—will likely continue and, in doing so, provide patients with a new generation of therapeutics and diagnostics.

5. Conclusion and Outlook

We have witnessed how impactful single-omics analyses, particularly genomics, have been in understanding disease pathology and progression, leading to dozens of approved drugs. However, single-dimensional “-omic” data inherently has limitations given the complexity of diseases and disorders we aim to treat. The initial wave of omics-based medicine was primarily driven by advances in genomic sequencing technologies and substantial reductions in sequencing costs, fueling significant innovation over the past two decades. Now, we see a similar technological advancement unfolding in other “-omics” domains—including proteomics, metabolomics, glycomics and beyond—with costs starting to decline in a manner comparable to genomics, setting the stage for further innovation. Yet, aggregating, managing and analyzing massive multi-omics datasets consisting of billions of data points poses unique challenges, necessitating more sophisticated artificial intelligence and machine learning methods. Both private and public sector initiatives are already emerging to address these challenges.

In the coming decade, we anticipate a new wave of innovation in multi-omics medicine, potentially surpassing the transformative impact initially driven by genomics. Luma Group aims to actively invest in and nurture this exciting frontier, positioning itself at the cusp of this transformative era in multi-omics medicine. If you’re building in the space, please reach out to us.


  1. Argelaguet, R., et al. (2021). Multi-omics integration approaches for disease modeling. Nature Reviews Genetics, 22(6), 345–362. ↩︎
  2. https://www.science.org/doi/abs/10.1126/science.abl5197 ↩︎
  3. Lotfollahi, M., et al. (2022). Mapping single-cell data to reference atlases by transfer learning. Nature Biotechnology, 40(1), 121–130. ↩︎
  4. Jumper, J., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589. ↩︎
  5. Jumper, J., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589. ↩︎
  6. https://www.nature.com/articles/d41586-022-02083-2 ↩︎
  7. https://pmc.ncbi.nlm.nih.gov/articles/PMC11466455/ ↩︎
  8. https://www.nature.com/articles/s41588-024-01993-3 ↩︎
  9. https://www.nature.com/articles/s43018-024-00891-1 ↩︎
  10. https://www.niaid.nih.gov/news-events/gene-signature-at-birth-predicts-neonatal-sepsis-before-signs-appear ↩︎
  11. https://pmc.ncbi.nlm.nih.gov/articles/PMC8508003/ ↩︎
  12. https://news.northwestern.edu/stories/2019/05/artificial-intelligence-system-spots-lung-cancer-before-radiologists/ ↩︎
  13. https://www.nature.com/articles/s41746-021-00518-0 ↩︎
  14. https://www.nejm.org/doi/full/10.1056/NEJMoa1901183 ↩︎
  15. https://www.bccresearch.com/market-research/biotechnology/multiomics-market.html?srsltid=AfmBOordStMDYvNp3BPq_s_wZgT3nGfnzBzGpzJrUp4Wd1VNObRVqVz1 ↩︎
  16. https://www.drugdiscoverytrends.com/20-biotech-startups-attracted-almost-3b-in-q1-2024-funding/ ↩︎
  17. https://www.mintz.com/insights-center/viewpoints/2166/2025-03-10-state-funding-market-ai-companies-2024-2025-outlook ↩︎
  18. https://www.biostate.ai/blog/genome-sequencing-cost-future-predictions ↩︎
  19. https://www.nature.com/articles/s41597-023-01949-y ↩︎
  20. https://www.ukbiobank.ac.uk/enable-your-research/about-our-data/past-data-releases ↩︎

Precision Therapy: Applying Proven Technologies to Inflammatory and Immunological Disease Treatment

The convergence of various factors, including advancements in OMIC data, repurposing of precision oncology technologies and high demand from patients and pharmaceutical companies, has positioned inflammation and immunology sectors for explosive growth, driving the development of safer and more effective precision therapies.

March 2024

Inflammation and Immunology Disease Landscape

Inflammation and immunology (I&I) diseases affect approximately 35 million Americans, over 10% of the population. Despite this prevalence, current treatments remain largely ineffective at managing these conditions.

Unsurprisingly, this lack of effective treatments has led to significant demand for new and innovative therapies that are safe and effective. However, biotech and pharma R&D focus to date has largely been on more severe indications (e.g., oncology), which, despite smaller patient populations, have seen rapid market growth. Precision therapy methods, harnessing established and de-risked technologies derived from precision oncology, hold the promise of catalyzing a comparable revolution in the treatment of I&I diseases. This presents an opportunity for enhanced disease management and, ultimately, superior patient outcomes.

The lack of innovation in I&I drug development has been a critical factor contributing to the slower growth of the I&I market. Interestingly, despite a patient population twice the size of oncology’s, I&I diseases generate only 60% of the market value (Figure 1).

Figure 1: Comparison of I&I disease and oncology markets.

Source: SEER, NIH, IQVIA, Global Data, Evaluate Pharma.

This discrepancy cannot be attributed to differences in treatable disease subtypes or drug treatment costs, but instead to the limited treatment options available for I&I diseases. Limited options have resulted in redundant use cases and lower overall market penetration.

In comparison, oncology has more than twice the number of marketed drugs, which are more diverse in their mechanisms of action and use cases. For example, the top 16 selling oncology drugs generated over $50 billion in sales in 2022 alone according to Global Data, accounting for approximately 50% of the total sales, and 13 out of 16 of these therapies have novel biological mechanisms of action. On the other hand, the I&I market is dominated by only five drugs that comprise around 50% of the market according to Global Data and have only three unique biological mechanisms of action. This dynamic highlights the need for innovation and the potential for market expansion in the space.

Historical safety issues in I&I drug development have been a primary reason for the market’s lagging growth. Many promising drugs that demonstrated efficacy in preclinical and clinical trials failed to reach the market due to safety concerns associated with chronic use. A prime example is CD20-targeting antibodies like Rituxan (rituximab): while they have shown promising efficacy in the clinic for multiple I&I indications, the class harbors severe safety risks (opportunistic infection, anaphylaxis, acute coronary syndrome and even death) with chronic use. This “safety Achilles heel” results from target redundancy between diseased and healthy cells/tissues, where many tractable drug targets overlap with those expressed in healthy states.

This lack of innovation in addressing safety concerns has hampered the progress of I&I drug development, with only Stelara (an anti-IL-12/IL-23 antibody used for the treatment of Crohn’s disease, ulcerative colitis, plaque psoriasis and psoriatic arthritis) among the top five selling I&I drugs having entered the market in the past 15 years. This stagnation has significantly impacted the growth and potential of the I&I market. At the same time, it highlights an opportunity to drive market growth with novel approaches.

Looking forward, I&I drug development can learn from the similar challenges experienced in oncology drug development in the 1990s and early 2000s, which were overcome by shifting to a personalized medicine approach.

Chemotherapy drugs (i.e., antineoplastics) were the legacy standard of care in oncology and have broad mechanisms of action, systemically killing rapidly dividing cells. While this successfully targets cancer, rapidly dividing cells also exist in several non-cancerous tissues, such as hair follicles and the lining of the stomach, resulting in the severe side-effect profile ubiquitous in the class. This non-specific targeting created an immense treatment burden on patients and spurred the improvements in specificity, efficacy and safety seen over the last decade. New scientific insights from collecting and analyzing massive patient datasets enabled this shift, paired with innovative technologies that allowed therapies to target tumor cells while selectively sparing healthy cells. This shift to precision approaches ushered in the era of precision oncology, unlocking enormous market growth that quadrupled the market in less than 15 years (Figure 2) and vastly improved patient outcomes (Figure 3). I&I drug development can leverage similar strategies to address safety concerns and drive innovation, paving the way for its growth and success in the market.

Figure 2: U.S. oncology sales have increased dramatically over the past 10 years with approvals of immunotherapies (e.g., Keytruda) and other targeted agents (e.g., Tagrisso) driving new sales volume.

Source: Global Data, Evaluate Pharma.

Figure 3: The growth in the U.S. oncology market was enabled by the dramatic improvements in patient care outcomes supporting their clinical success. This is particularly evident in front line EGFR NSCLC.

Note: Front line standard of care prior to TKI approvals assumed to be platinum-based chemotherapy. PFS: Progression-free survival, ORR: Overall response rate. Source: FDA label, SEER, clinical trials.

To overcome the innovation bottleneck and thrive in the market, I&I drug development can take cues from the success of oncology by making two fundamental shifts:

  1. Embracing OMICs Large Datasets: One crucial missing component in developing precision I&I therapies has been novel biological insight derived from analyzing massive patient datasets. Until recently, researchers lacked the proper tools, such as single-cell sequencing, advanced fluorescence-activated cell sorting and advanced mass spectrometry, to accurately decipher the immune system’s complexities. By leveraging these technologies and analyzing extensive patient data, I&I drug developers can gain valuable insights into disease mechanisms and identify new targets for precision therapies.
  2. Repurposing Precision Oncology Technologies: The technologies that revolutionized precision oncology, such as heterobifunctional molecules, bispecific antibodies and cell therapy, can be repurposed for developing precision I&I drugs. In oncology, these innovative technologies have already shown success in targeting tumor cells while sparing healthy cells, and they can also be applied to I&I diseases. By repurposing these proven technologies, I&I drug developers can accelerate the development of safe and effective therapies, overcoming the stagnation in the I&I market.

Investors and companies that focus early on analyzing patient data and repurposing precision oncology technologies will be well positioned to meet the demand for new therapies as I&I drug development transitions into its era of precision therapies. That success could yield significant new therapeutics for large unmet medical needs, well placed to gain strong market traction.

I&I Opportunity

Multiple converging factors have positioned the I&I field for rapid growth, potentially surpassing the oncology market. This historically underserved market now has the necessary pieces to overcome historical bottlenecks in I&I drug development and usher in a new era of precision therapies.

One essential advancement is the availability of massive OMIC data, which can be used to address and advance therapeutic development in I&I diseases across several axes. New decoding technologies, like single-cell sequencing, provide novel biological insights for developing precision I&I treatments. This wealth of data has enabled better patient stratification, allowing researchers to characterize patients more accurately and identify responders while excluding non-responders who may pose safety risks. Additionally, this data has revealed new target proteins and cell types that could overcome historical safety issues by increasing drug specificity and expanding the repertoire of potential targets for I&I drug development.

Moreover, I&I drug development can leverage cutting-edge precision oncology technologies that have emerged from the advancements made in the last two decades. These clinically validated technologies, such as adoptive cell therapy and bispecific antibodies, can be repurposed to address safety concerns in I&I drug development. In addition, these modular technologies can be more easily adapted for cost-effective and rapid I&I drug development. For example, companies have repurposed oncology bispecific antibody technology to target immune cells involved in atopic dermatitis while sparing healthy immune cells.

The demand for I&I therapies in the market remains enormous, with many diseases still lacking effective treatments. Many large pharmaceutical and biotechnology players have identified this gap in the market and have been exploring opportunities to access this value. Given their lean and waning I&I drug pipelines, they have looked outward, leading to several lucrative biotech acquisitions. For instance, companies like Arena Pharmaceuticals ($6.7B in 2021), Pandion Therapeutics ($1.85B in 2021) and Momenta Pharmaceuticals ($6.5B in 2020) have been acquired by pharmaceutical giants Pfizer, Merck and J&J, respectively, showcasing the appetite for new precision I&I therapies and the potential for continued acquisitions in the future (Figure 4).

Figure 4: Select M&A and licensing I&I deal activity from large cap pharma/biotech since 2020.

Target | Acquirer | Deal Type | Upfront | Total
Telavant | Roche | M&A | $7.1B | $7.1B
Arena Pharmaceuticals | Pfizer | M&A | $6.7B | $6.7B
Momenta Pharmaceuticals | J&J | M&A | $6.5B | $6.5B
ChemoCentryx | Amgen | M&A | $3.7B | $3.7B
Chinook | Novartis | M&A | $3.5B | $3.5B
DICE Therapeutics | Eli Lilly | M&A | $2.4B | $2.4B
Pandion Therapeutics | Merck | M&A | $1.85B | $1.85B
Gracell Biotechnologies | AstraZeneca | M&A | $1B | $1.2B
Nimbus Therapeutics | Takeda | License | $4B | $6B
EVOQ Therapeutics | Gilead | License | Not disclosed | $658.5M

Source: Company Filings, Pitchbook.

Overall, the convergence of various factors, including advancements in OMIC data, repurposing of precision oncology technologies and high demand from patients and pharmaceutical companies, has positioned I&I for explosive growth, driving the development of safer and more effective precision therapies.

Luma is strategically positioned as a pioneering investor, ready to capitalize on the burgeoning I&I market. We have extensive expertise and active engagement in this market’s two primary value drivers. Additionally, we have an established presence in the emerging field of precision I&I-focused biotech, with active investments in this rapidly growing sector.

Value Driver #1: Luma’s expertise in identifying essential OMICs technology is central to our investment approach and ingrained in our DNA. Our fund has strategically invested in core technologies fueling the new wave of precision I&I therapies. As a result, we possess a keen understanding of the power of OMICs datasets and how to leverage them for drug discovery. This knowledge places Luma in a prime position to identify companies and technologies that are highly differentiated and poised for success.

Value Driver #2: Luma’s proficiency in repurposable precision oncology technologies is supported by a deep understanding and a proven track record of founding, funding and investing in multiple precision oncology companies. As a result, we are well-versed in cutting-edge technologies like bivalent small molecules, heterobifunctional molecules, bispecific antibodies and others that can be rapidly repurposed to develop precision I&I therapies (Figure 5). This expertise positions us at the forefront of the intersection of precision oncology and I&I, unlocking new possibilities for innovative treatments.

Figure 5: Precision therapeutics modalities with potential for translation into I&I disease.

Source: Nature, Cell.

Committed to catalyzing the new precision I&I market, our fund is already actively engaged and primed to capitalize on the new wave of I&I disease market growth. For example, in 2023, we invested in ROME Therapeutics – a company pioneering a new class of precision I&I therapies called LINE-1 reverse transcriptase inhibitors (NRTIs) – demonstrating our proactive approach to identifying and seizing first-mover opportunities.

ROME’s groundbreaking approach targets the uncharted territories of the human genome, often referred to as the ‘dark’ or noncoding regions. Utilizing cutting-edge sequencing and analytical methods, the team has made remarkable discoveries. Specifically, they have identified genomic regions intricately linked to the progression of multiple type 1 interferon-driven I&I diseases, such as lupus, also referred to as interferonopathies.

Their research has led to the identification of LINE-1 reverse transcriptase (RT) as a new therapeutic target. This target operates upstream of already validated targets, offering a unique approach to treatment (Figure 6). Significantly, it minimizes the risk of infection from opportunistic pathogens, given it is not an immunosuppressant. This promising development is rapidly progressing, and ROME is on track to introduce its first drug to clinical trials in 2024. ROME is applying its approach to advance earlier-stage programs across a spectrum of autoimmune diseases as well as neurodegenerative diseases and cancer.

Figure 6: Rome Therapeutics’ mechanism of action versus approved SLE treatments.

Source: Company materials.

Additionally, we recognize the value in and actively seek investments leveraging precision oncology technologies for I&I therapies, further showcasing our commitment to driving innovation and capturing market opportunities.

Luma is well-prepared to lead the I&I investment landscape, leveraging our expertise in OMICs technology and data, repurposable precision oncology technologies and active participation in pioneering I&I companies.

Conclusion

The I&I market is a rapidly expanding investment sector, but a lack of groundbreaking therapies has hindered its growth. Despite this challenge, the foundational elements are now coming together to propel the development of innovative precision therapies and unlock significant market growth. Luma possesses the expertise, experience and enthusiasm to be a pioneering investor in this promising market. Therefore, we are the ideal investment partner to capitalize on emerging opportunities in the precision I&I market, offering the best growth potential.

