At Spring, we built a high-throughput screening platform that sheds light on hundreds of cell behaviors across thousands of conditions — all in one screen — to discover drugs for aging and age-related diseases.
Fed by an automated lab generating terabytes of data, our platform unlocks valuable biological signals from a single powerful assay by combining high-content imaging and proteomics.
At the center is software we’ve called MegaMap, a tool built to give scientists the superpowers that lie at the intersection of human intuition and machine learning.
MegaMap is fueled by a suite of machine learning models that measure hundreds of high-content imaging and proteomics features. Combined, they open up new possibilities to build the most comprehensive cellular models of complex diseases in the world — with deeper understanding of drugs’ on-target actions, off-target effects, and safety signals.
With an explorable interface that unifies cellular function, morphology, metabolomics, proteomics, spatial interactions, and more, we built MegaMap to help scientists ask the hardest questions and get fast answers.
MegaMap gives scientists the superpowers needed to resolve these hard queries, decode unprecedented amounts of high-content imaging and proteomics data, and assess the behavior of thousands of drugs using primary human samples.
They can map their hypotheses to specific, functional understanding, and learn from unbiased hypotheses surfaced by the machine across hundreds of cell properties.
Powered by a toolbelt full of computational gadgets
As drug developers, our goal is to build rapid understanding of these drugs’ actions, unexpected effects, and safety signals. MegaMap uses a packed toolbelt of novel computational tools to help us do so, all while learning from the terabytes of biological data generated by each experiment.
The first such tool is our automatic cell type classifier, which lets us run assays on heterogeneous sets of primary human immune cells:
In addition to the cell classification tool above, we deploy deep learning models at single-cell resolution to build cell-by-cell phenotypic profiles covering an array of valuable phenotypes and functions, a few of which you see here:
By applying these novel computational models to every cell in a population, we turn unstructured images into high-content, structured data, letting scientists easily query across biologically-relevant cellular functions.
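The shape of that image-to-table step can be sketched as follows. This is a minimal illustration, not Spring's actual pipeline: the class names are hypothetical, and the per-cell scores stand in for the output of a real segmentation-plus-classification model.

```python
import numpy as np

# Hypothetical PBMC classes for illustration only.
CELL_TYPES = ["T cell", "B cell", "NK cell", "monocyte"]

def classify_cells(logits: np.ndarray) -> list[dict]:
    """Turn per-cell model outputs into structured, queryable rows.

    logits: (n_cells, n_types) array of unnormalized class scores,
    as a stand-in for what a trained classifier would emit per cell.
    """
    # Softmax over classes to get per-cell probabilities.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    rows = []
    for i, p in enumerate(probs):
        k = int(p.argmax())
        rows.append({
            "cell_id": i,
            "cell_type": CELL_TYPES[k],
            "confidence": float(p[k]),
        })
    return rows

# Two toy cells: one strongly T-cell-like, one monocyte-like.
rows = classify_cells(np.array([[4.0, 0.1, 0.2, 0.1],
                                [0.1, 0.0, 0.2, 3.5]]))
```

Each row is the kind of structured record a scientist can then filter, sort, and join against other assay readouts.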
Seeing things that humans might not
Cellular phenotypes vary along easily interpretable dimensions such as size, shape, and color. But cells and their functions also manifest emergent patterns that are much harder for humans to even describe, let alone find on their own amongst hundreds of millions of cells.
We believe there are valuable biological signals in these hard-to-discover patterns. MegaMap and its toolbelt unlock these signals previously hidden in phenotypic screens — similar to the way DNA sequencing technology unlocked a trove of previously-hidden biological data.
Single-cell embeddings offer a sneak peek at this power: they can find similar cells among a pool of thousands of different samples, without anybody defining what “similar cell” means.
Below, nine matches for this phenotypic profile from Spring's ever-growing collection of high-content imaging data.
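A similarity search like this reduces to a nearest-neighbor query in embedding space. Here is a minimal sketch using cosine similarity; the function name and toy four-cell pool are illustrative, not part of MegaMap's actual implementation.

```python
import numpy as np

def top_matches(query: np.ndarray, pool: np.ndarray, k: int = 9) -> np.ndarray:
    """Return indices of the k pool embeddings most similar to `query`.

    Similarity is cosine: nobody defines what "similar cell" means;
    distance in the learned embedding space does the work.
    """
    q = query / np.linalg.norm(query)
    p = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    sims = p @ q                    # cosine similarity to every cell
    return np.argsort(-sims)[:k]    # best matches first

# Toy pool of four 3-dimensional "embeddings".
pool = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.9, 0.1, 0.0]])
query = np.array([2.0, 0.0, 0.0])
matches = top_matches(query, pool, k=2)
```

At production scale one would swap the brute-force matrix product for an approximate nearest-neighbor index, but the query is conceptually the same.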
These single-cell embeddings are the engine behind many columns you see in MegaMap. Using them, we’ve built deep learning models to capture a smorgasbord of cellular functions like immune cell interactions, subtypes of cell death, and cytokine expression for our scientists’ perusal, relying on the machine to distinguish cellular phenotypes that humans may not otherwise understand.
We think of MegaMap as kind of like Star Trek’s tricorder — but for pointing at primary cell samples to decode complex biology, discover connections, and screen drugs.
Science led by humans
We build state-of-the-art tools so scientists can wield the superpowers that exist at the intersection of human scientific expertise and new computational technology.
From small details in MegaMap’s user interface to our strategic decisions, Spring’s work is guided by the goal of empowering scientists. In order for technology to radically improve drug discovery, it's not enough for a computational tool to be superhuman at interpreting massive amounts of high-dimensional data — it must unite this understanding with scientists’ intuition.
Much of MegaMap centers on this idea, including its cornerstone ability to combine human “phenotype curation” with machine learning results to uncover new targets in a high-dimensional screen.
These features that our scientists curate in MegaMap are powered by the state-of-the-art machine learning tech described above. But the real output of our work — the advancement of targets and drugs — comes from a combination of technology with human scientists’ conviction.
This humble combination is the key to discovering and developing therapies for aging.
Giving scientists superpowers in the battle against aging
The diseases of aging present a notoriously complex biological problem — they are driven by biology that develops slowly over time and is not (yet!) well understood.
This is also an enormous opportunity. Those working in this space deserve the very best tools. We’re here to put the world’s best technology in their hands, starting with MegaMap and its gadgets above.
If this excites you, we’d love to hear from you. And we’re hiring.
- We use this system to study multiple cell types from many organs (and always primary samples), but our interest in immune aging has us focused on peripheral blood mononuclear cells (PBMCs), a heterogeneous immune cell population containing T cells, B cells, natural killer cells, monocytes, and more.
- An embedding can be thought of as a ‘fingerprint’ of a given cell: a vector of numbers that captures an opaque latent representation of the cell’s phenotypic characteristics.
For those not familiar: in high-content immunofluorescence imaging, every field of view contains from zero to hundreds of cells (depending on magnification, cell type, etc). And for every field of view, we capture up to six fluorescent channels, each of which is meant to light up a different slice of cell biology.
When we visualize this kind of data as humans, it’s common for Hoechst — which stains the cell’s nucleus — to be visualized in blue, while Phalloidin — which stains the cell’s actin cytoskeleton — could be represented in green. These choices are more conventions than anything, though, as any channel can be lit up in any color.
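Compositing those channels into one viewable image amounts to assigning each channel a display color and blending additively. The sketch below assumes normalized (0–1) intensity images and uses the Hoechst-to-blue and Phalloidin-to-green conventions mentioned above; the color table and function are illustrative, not a specific tool's API.

```python
import numpy as np

# Map each fluorescent channel to a display color (RGB, 0-1).
# These assignments follow convention only; any channel can take any color.
CHANNEL_COLORS = {
    "Hoechst":    (0.0, 0.0, 1.0),   # nucleus -> blue
    "Phalloidin": (0.0, 1.0, 0.0),   # actin cytoskeleton -> green
}

def composite(channels: dict[str, np.ndarray]) -> np.ndarray:
    """Blend single-channel intensity images into one RGB image."""
    h, w = next(iter(channels.values())).shape
    rgb = np.zeros((h, w, 3))
    for name, img in channels.items():
        color = np.array(CHANNEL_COLORS[name])
        # Additive blending: each channel lights up its assigned color.
        rgb += img[..., None] * color
    return rgb.clip(0.0, 1.0)

# Toy 2x2 field of view: full-strength Hoechst, half-strength Phalloidin.
img = composite({"Hoechst": np.ones((2, 2)),
                 "Phalloidin": 0.5 * np.ones((2, 2))})
```

Swapping entries in the color table re-renders the same underlying data in a different palette, which is exactly why the blue/green choices are conventions rather than properties of the biology.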