Special Report on Drug Screening: Phenotypic finery
Microscopy meets the molecular in the search for phenotypic impacts in drug screening
By Randall C Willis
Late one night, a police officer approaches a man frantically searching the ground under a streetlight.
“Did you lose something?” the officer asks.
“Yes, my car keys,” the man replies.
The officer assists in the search but after a few fruitless minutes, begins to wonder.
“Are you sure you lost them here?” the officer asks.
“No, I lost them in the park,” the man replies, not giving up.
“The park’s a block away,” the officer retorts. “Why are you looking here?”
The man finally looks up: “Because this is where the light is.”
The ability to characterize human disease down to a malfunctioning enzyme or receptor, or to a single mutated gene, has led to an era of aggressively targeted therapeutics. This era has seen significant successes, but it has also seen numerous failures, where early in vitro and preclinical success has failed to translate to humans.
It is not that the therapeutic is missing its assigned target. The problem is often that it is hitting not only its target but also one or more unanticipated targets, or that its intended target has pleiotropic effects (affecting two or more phenotypic traits).
When you look for what you seek, you may find it, but you may miss other equally important facets of the equation.
To address this limitation of defined molecular endpoints, there is growing interest in the exploration of less-defined phenotypic impacts in drug screening.
Beyond the streetlight
Sam Cooper, co-founder of Phenomic AI, highlights the difference by comparing BRAF mutation-driven cancer vs. fibrosis.
“When you know the mutation, you can create a drug unique to that mutated form,” he says. “But if you look at fibrosis, the condition is probably not that genetic.”
“When you don’t understand the genetics of it and you can’t nail it down to a single protein, there are probably a lot of proteins going wrong at the same time,” he continues. “That is where the phenotypic approach works really well.”
Understanding what is happening in a cell, tissue or organism during disease evolution, or in a drug screen, means seeing as much as possible as openly as possible. And it means having as much physiological context as possible for the model being examined.
“With phenotypic screens, you can see if the compound is having the preferred effect regardless of what the target is,” explains Cooper’s colleague and co-founder Oren Kraus. “You can also probe toxicity and other facets directly in one assay.”
Although phenotypic analysis has been a mainstay of drug development since the earliest days of medicine—think Galen surveying patient symptoms, or pathologists hunched over histology slides—it is only recently that the methods and technologies have started to achieve a throughput to rival biochemical assays.
“I used to be a bench researcher and ran a core facility for a number of years on the academic side,” says Brendan Brinkman, now senior marketing manager at Olympus Life Sciences. “I have really seen the trajectory of the way things have advanced in the last 10 to 15 years; some of it is breathtaking.”
“We’ve had real increases in sensitivity and speed on the microscopy side, but also a more sophisticated understanding of the context-dependency of physiological responses to drugs,” he continues.
It is well understood, he explains, that cells in 3D culture or in tissues behave very differently from cells in 2D culture. Concomitant with that understanding has been the development of optical technologies to unveil what’s happening in those 3D contexts and the ability to translate the increasingly complex data arising from those experiments into meaningful information.
Brinkman is quick to point out the synergistic importance of molecular techniques, however—not just highlighting the multiplexing technologies for which his industry is known, but also the ability to manipulate cells and tissues with gene-editing tools such as CRISPR.
“This convergence of fundamental technologies is really leading to a significant increase in the potential for phenotypic screens, which can now begin to couple in things that were only available in a purely molecular type of analysis previously,” Brinkman enthuses. “Where you might extract cell populations and analyze the kinds of molecules that were in those cell populations, even on a single-cell level, we can now look at those cells and tissues in their context while we’re treating them with various drugs. That’s a real game-changer for us.”
Reflecting the importance of context, Purdue University’s Sherry Voytik-Harbin and colleagues described their efforts to develop a 3D tumor invasion model for high-throughput, high-content phenotypic drug screening. In a 96-well format, they suspended small clumps of pancreatic ductal adenocarcinoma (PDAC) tissue within a matrix of cancer-associated fibroblasts.
The researchers then monitored the impact of gemcitabine on tumor invasion using multiplex assays defining cell parameters such as number, proliferation and metabolic activity.
“Observations that gemcitabine is effective at inhibiting proliferation while not fully eradicating the tumor or hindering invasion is consistent with its mechanisms of action as targeting DNA synthesis,” the authors noted. “Additionally, these results align with those from PDAC xenograft models, which show gemcitabine substantially hinders tumor growth and proliferation but does not induce significant apoptosis or reduction of distant metastases and invasion-related markers.”
Perhaps more importantly, however, the researchers were keen to note that these results highlighted the need to extend screening assays beyond simple assessments of cell viability or cytotoxicity, to quantify a variety of phenotypic parameters.
“This type of [high-content] analysis using a 3D phenotypic model opens the door for deeper mechanistic understanding of drugs and more predictive results earlier in the development process,” they suggested.
Another challenge in monitoring cell and tissue behavior in a physiologically relevant manner is understanding that these systems are dynamic; cellular responses to environmental or contextual perturbation evolve with time. Thus, there is always a concern over what Olympus’ Brinkman describes as a stroboscopic effect.
“If you have a drug response curve that you want to have meaningful data out of, sometimes you have to be looking at that response over a course of time,” he says.
Not only does this provide you with a sense of tissue response continuity, but it also ensures that the event wasn’t an artifact of when you chose to look. He offers examples such as cellular migration, degradation or apoptosis, or a time-course of on-off molecular interactions.
“Those kinds of things are really only possible in a live-cell context or at the very least, in a context where you can see the cells interacting with each other in a very clear way,” he says.
Late last year, David Drubin and colleagues at the University of California, Berkeley and the Howard Hughes Medical Institute examined precisely this question, undertaking what they called 4D cell biology to study clathrin-mediated endocytosis (CME) in stem cell-derived intestinal organoids.
To monitor vesicle migration within the organoids, the researchers fluorescently tagged two CME proteins. Furthermore, to correct for tissue-induced aberrations, they used adaptive optics with lattice light-sheet microscopy.
“The large 3D field of view allowed us to image through the organoid’s epithelial cell layer and to capture Cltr and Dnm2 dynamics on apical, basal and lateral membranes with a frame rate of 2.85 s/frame per channel,” the authors wrote. “The field of view allowed the simultaneous observation of multiple cells at a time.”
The sheer volume of data, however, required that they also develop a package of image and data analysis tools: open-source and freely available pyLattice.
Under untreated conditions, the clathrin-coated vesicles formed on each of the three membrane domains, much as was seen in other cell systems.
When the researchers treated the organoids with the actin-stabilizing drug jasplakinolide, however, they found very different results from those previously published. The researchers offered several possible explanations for the differences but came to no definitive conclusions.
Aside from this interesting result, the researchers were upbeat about this first step into 4D high-content analysis.
“This study shows how a confluence of technologies from stem cell biology, 3D organoid culture, advanced microscopy and big data image analytics can open the door for tissue-scale 4D quantitative cell biology,” they concluded.
Not to be outdone, Samantha Peel and colleagues at AstraZeneca and Emulate performed similar experiments to develop an end-to-end high-content workflow using organ chips. In particular, they used a liver-chip model, which could be useful in predicting drug-induced liver injury.
Using confocal fluorescence microscopy, the researchers could not only distinguish hepatocyte and endothelial cells at single-cell resolution, but also validate stains for parameters such as morphology, proliferation, apoptosis and mitochondrial structure.
The researchers probed the liver chip with benzbromarone, which is known to cause hepatotoxicity via a mitochondrial mechanism, and noted a significant treatment effect, including changes to mitochondrial structure. A similar analysis of the apoptosis-inducing staurosporine produced increases in an apoptosis marker.
For the researchers, the results speak to something deeper than organ chips simply doing the job for which they were designed.
“Although biomarkers of cellular injury in response to drug exposure can be measured, we have demonstrated here that high-content imaging offers a complementary approach of capturing cell phenotype changes in response to a hepatotoxic compound, thereby enhancing the application of these systems for mechanistic studies,” they wrote. “We have described morphology, proliferation and apoptosis endpoints as examples, but with the rapid advancement of sophisticated image analysis algorithms and use of machine learning, the potential phenotypes that could be analyzed can extend to any available antibodies and markers.”
“Moreover, this approach is not limited by the small number of cells within a chip,” they continued. “By imaging in three dimensions, we can separate the cell layers and read cellular phenotype at the single-cell level, allowing quantification of heterogeneity and an understanding of phenotype which would not be apparent in population-averaged measures.”
Yet even the most elaborate 3D model or organ-on-a-chip system is still quite targeted. Yes, you’re screening within a more physiologically relevant context, but the results are still defined by the cells and tissues you’ve chosen to screen.
A bigger picture
To address this challenge and to open drug screening efforts to more serendipitous discovery—whether positive or negative—researchers turn to model organisms, such as rodents and non-human primates. But even here, there have been historical limits.
“In this case, ethical concerns arise, and, from a practical point of view, these animals are expensive and their handling is labor-intensive,” Martin Gijs and colleagues at École Polytechnique Fédérale de Lausanne suggested recently. “Thus, these studies cannot involve a high number of specimens, which prevents any high-throughput experimentation.”
These challenges were part of the group’s rationale for performing their systemic screening efforts in the nematode Caenorhabditis elegans.
From their perspective, the worm offered several attractive features as a model system.
“C. elegans is the only organism you can study at the speed of in-vitro cell culture, if you have the right systems,” offers Adela Ben-Yakar, founder of Newormics and researcher at the University of Texas at Austin. Building that “right system” has been the mission of Ben-Yakar and colleagues for several years now.
“My journey started about 15 years ago,” she recounts. “We were using lasers to perform axotomies, severing individual axons, and for the first time, we could see nerve regeneration in C. elegans.”
“Of course, it was a long and very laborious process,” she continues. “You had to immobilize the C. elegans, then find where they were immobilized, then perform the imaging.”
An engineer by training, and given that the worms were cultured in liquid medium, Ben-Yakar realized that microfluidics might be the solution.
“There were a couple of other groups who were doing this, and we were all creating these really fancy and complex microfluidic platforms that enabled immobilization of C. elegans for high-resolution imaging at higher throughput than what we did manually,” she remembers. “But these were still a very small number of animals.”
Furthermore, the devices were so complicated, she recalls, that it would take a long time for even an engineering student to master them.
As she indicated in a review earlier this year, the optimal microfluidic system needed to meet several criteria.
Using these criteria as a template, Ben-Yakar’s group developed the vivoChip, a 96-well format microfluidic platform imaged on an inverted microscope.
Late last year, Ben-Yakar and colleagues described their efforts to apply the vivoChip to a screen of compounds targeting amyloid precursor protein-induced neurodegeneration.
In earlier work, the group developed a C. elegans model that carried a single copy of human APP, and they were able to identify neuroprotective small molecule ligands—norbenzomorphans—of the sigma 2 receptor, aka transmembrane protein 97.
They extended this screening effort on a larger set of norbenzomorphans, seeking to identify compounds that produce dose-dependent neuroprotection. Because any neurological effects would occur early in worm development, the researchers focused their attention on changes in the cellular morphology of specific neurons.
“To enable imaging conditions with a good signal-to-background ratio and high contrast, we designed a unique tapered 3D channel with nearly a constant aspect ratio, which facilitates maintaining the lateral orientation of the animals as they are pushed into the channels by an on/off pressure cycle,” Ben-Yakar and colleagues wrote.
They then imaged the worms at 12 different heights in 5-μm increments to ensure they could resolve the neurons in the best focal plane, resulting in a total of 4,608 images over a 96-well plate, each well immobilizing 40 worms.
Of the nine norbenzomorphans they screened over a range of doses, two ligands were highly neuroprotective, decreasing neurodegeneration in SC_APP mutants to wildtype control levels. The researchers noted that these results fall in line with other efforts demonstrating the efficacy of a related norbenzomorphan both in C. elegans and a transgenic APP mouse model.
As Ben-Yakar suggests, however, theirs is not the only lab looking to perform high-content analysis of C. elegans via microfluidics and microscopy.
Gijs and colleagues, for example, recently described their chip-based screening platform and automated image analysis. Lower in throughput than the vivoChip, the Swiss platform divides larger channels into small corrals, each of which holds a handful of freely swimming worms, grown from larvae.
As a proof of concept, the researchers monitored the response of worms to different concentrations of doxycycline, taking images every 20 minutes for 52 hours. Specifically, they examined the antibiotic’s impact on growth using bright-field imaging and on oxidative stress using GFP expression and fluorescence imaging.
“Worms that were treated with doxycycline showed slower growth and increased GFP expression compared with untreated worms,” the authors noted. “These results indicate that a higher mitochondrial stress may affect and delay the development process.”
They then performed a motility assay in real time using a stereomicroscope with dark-field illumination, monitoring the worms’ dose-response to the anesthetic tetramisole. Interestingly, the researchers noted subtle differences in the response times of different worms experiencing the same treatment, highlighting the importance of single-worm resolution analysis.
The implications of the experiments were multiple, according to the authors.
“We demonstrated the opportunity to link a phenotypic effect (such as the arrest of growth during development) to an underlying molecular mechanism (such as the increased mitochondrial stress), which is very powerful when searching the biomolecular pathway causing a disease,” they concluded.
“By taking advantage of the multiplexing capability of our microfluidic design, we also quantified the dose-response relation of an anesthetic with real-time, large field-of-view, darkfield imaging of the entire chip at once, which highly increases the throughput of a step that is performed on any potential drug during the development pipeline, to test its effect and its toxicology,” the authors added.
From an optical perspective, imaging any object in three dimensions can be tricky.
“If you want to look at something at a high resolution, you really need to go in sections,” Ben-Yakar says. “But again, the nice thing is that C. elegans is very transparent and there is minimal scattering, so you can image it with the conventional imaging tools.”
Compare this to organoids or spheroids, she continues, where there is much more light scatter, and it can be difficult to image deeply enough to study the entire organoid. This challenge is further aggravated by tissue movement in the growth media.
Recognizing the biological power of 3D tissues, however, Ben-Yakar has set her group to the task of developing microfluidic tools to enable better, higher-resolution imaging of organoids and spheroids.
Likewise, C. elegans are not the only transparent model systems being studied.
Although adult zebrafish are opaque to light, their embryos are transparent, allowing researchers to perform drug screens either for efficacy in disease models or for toxicity issues.
Earlier this year, Jochen Gehrig and colleagues at Acquifer and University Children’s Hospital Heidelberg described their development of a smart imaging workflow using a zebrafish model of human cystic kidney disease.
Rather than use microfluidic channels, the zebrafish platform relied on microplate wells filled with molded agarose that produced consistent positioning and dorsal imaging of the embryonic pronephroi. And to improve throughput, the researchers developed a two-step imaging procedure that initially centered on the embryos at lower resolution before zooming in on the regions of interest. This allowed them to acquire high-resolution image stacks for more than 20,000 embryos.
“Once the image-based readout was established, the automated pipeline enabled the screening and profiling of a library of 1,280 compounds in thousands of embryos in about one year,” the authors wrote. “As the work relies on a simplified module for advanced feedback microscopy workflows and accessible open-source image processing techniques, we believe this pipeline can serve as an example and template for biomedical research labs with limited resources that aim to conduct large-scale phenotypic scoring in a whole organism model.”
Clearly, the throughput and timelines with zebrafish are not yet as good as those with C. elegans, but progress is being made.
As more and larger model systems are explored, however, technical advances are not required simply on the optics and husbandry side of things. As suggested earlier, expansion of these technology areas has led to exponential growth in data, requiring novel analytical resources.
Deluged in data
“Automated microscopy enables capturing multiple features including real time to more adequately assess the full response to drug treatment,” OcellO’s Leo Price and colleagues recently noted in a review of 3D cell-based drug screens. “The greater morphological complexity of tissues cultured in 3D makes this type of high-content analysis particularly valuable, retrieving rich information that would be overlooked by single endpoint assays.”
Capturing multiple xy images in a 3D experiment, sometimes over multiple image channels, can dramatically increase data volumes over a comparable 2D experiment, the authors suggested. To cope with this, many software platforms compress 3D stacks down to 2D.
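The scale of that increase is easy to underestimate. A back-of-the-envelope calculation, using illustrative plate and camera parameters that are assumptions rather than figures from any of the studies cited here, shows how adding a z dimension multiplies raw data volume:

```python
# Back-of-the-envelope data volume for a plate-based imaging screen.
# All numbers below are illustrative assumptions, not from any cited study.

def screen_bytes(wells, fields, channels, z_planes, width, height, bytes_per_pixel=2):
    """Raw image data for one plate, in bytes (16-bit camera pixels by default)."""
    return wells * fields * channels * z_planes * width * height * bytes_per_pixel

# A 2D screen: one focal plane per field of view.
flat = screen_bytes(wells=96, fields=4, channels=3, z_planes=1,
                    width=2048, height=2048)

# The same screen acquired as a 3D stack: 40 optical sections per field.
stack = screen_bytes(wells=96, fields=4, channels=3, z_planes=40,
                     width=2048, height=2048)

print(f"2D plate: {flat / 1e9:.1f} GB")   # ~9.7 GB
print(f"3D plate: {stack / 1e9:.1f} GB")  # ~386.5 GB, a 40-fold increase
```

A single 3D plate at these assumed settings already approaches half a terabyte, before time-lapse frames multiply it further.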
“A lot of the screening tools grew up in the 2D world,” explains Olympus’ Brinkman. “Even when you have confocal-type technology, which is just looking at optical sections, those 3D volumes of data might be compressed down into a maximum-intensity projection; we call it 2.5D data.”
“It loses all of the information about spatial distribution, it changes the accuracy of cell counts, and ultimately, it changes things like IC50 responses,” he continues. “You get different drug dosage results, specifically depending on how you measure it, whether you’re measuring in 3D or 2D.”
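The information loss Brinkman describes is easy to demonstrate. In this toy sketch (synthetic data, not from any cited experiment), two cells sitting at different depths in a z-stack collapse into a single spot under a maximum-intensity projection, so any count derived from the 2.5D image is wrong:

```python
import numpy as np

# A toy 3D stack: 8 z-planes of a 16x16 field, with two "cells"
# at the same xy position but different depths.
stack = np.zeros((8, 16, 16))
stack[1, 4, 4] = 1.0   # cell A near the coverslip
stack[6, 4, 4] = 1.0   # cell B directly above it, five planes deeper

# Maximum-intensity projection: collapse z by keeping the brightest pixel.
mip = stack.max(axis=0)

# In 3D the two objects are distinct; after projection they merge into one,
# so per-object counts (and anything derived from them) are off by one.
print(int((stack > 0).sum()))  # 2 bright voxels in the 3D stack
print(int((mip > 0).sum()))    # 1 bright pixel in the projection
```

True 3D segmentation, of the kind OcellO and Olympus built software for, avoids this by resolving objects along z before counting.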
For this reason, Price and colleagues explained, OcellO developed its own in-house software to permit true 3D phenotypic analysis and single-cell segmentation.
Commercially, a similar effort at Olympus resulted in NoviSight, which the company launched in the United States last September.
“It wasn’t so easy to automatically identify objects in three dimensions and quantify their volume across not only a single field of view or single image data set, but also across a whole experiment within a microplate,” Brinkman explains. “You’re doing segmentation analysis across all of these different structures—nuclei, mitochondria, whole cells or even whole organoids/spheroids—and yet that kind of analysis was extremely difficult.”
“What we set out to do was to find a way to improve the object recognition with specific kinds of 3D algorithms, and create a graphic user interface that made that really accessible and deliver results that were informed by our expertise in the optical field to make sure that the accuracy was as high as possible,” he adds.
And much as Newormics’ Ben-Yakar talked about developing screening systems that could be used without a degree in engineering, so too is Brinkman adamant about data analysis tools not requiring a degree in computer sciences.
“As we develop these tools, they have to be usable,” he notes. “There is a fundamental difference between academic research, which is often on the cutting- or bleeding-edge of new technologies, and the throughput and statistical significance and reproducibility of screening assays.”
Part of that usability was simply making the graphical user interface (GUI) of NoviSight as intuitive as possible so that it didn’t require years of training to use.
“To give you a very concrete example, if we take a bunch of volume images from a 96-well plate, the way that the GUI works is you can click on a well, you can review the data on that well, you can do your segmentation analysis based on various characteristics that you’re looking for, and then you get a scatterplot, which looks like a flow cytometry scatterplot,” he explains. “You can click on an individual point in that scatterplot and it will pull up the image data from that.”
“It’s all linked graphically, so you can move freely through the software and go from a specific data point back to the image, or go from the image to a specific data point to understand not only what one snapshot of information is telling you, but also where that individual cell or organelle was inside the tissue and what well and what experiment,” he adds. “And you can look at it in three dimensions; there’s a 3D viewer, so you can turn it around and look at how the segmentation analysis worked.”
Brinkman suggests that this is where things like artificial intelligence are going to come into play.
“The nice thing about high-content is that the experiments themselves have a lot of data in them,” enthuses Phenomic AI’s Kraus. “If you can skip the part of humans having to annotate, you can really leverage the AI a lot better.”
Kraus got his start working at the University of Toronto, developing analytical tools to study protein localization from yeast high-content imaging studies. It was during this time that he met Cooper, who was performing similar deep learning with mammalian cells at Imperial College London. Quickly, the two realized that they had an interesting technique for high-content screening.
In 2017, Kraus and colleagues described their effort to improve automated analysis of the yeast image data with a deep convolutional neural network they called DeepLoc.
“Machine-learning models that have been used to cluster or classify individual cells using selected features, although useful, often fail to classify complex phenotypes,” explained the authors. “In contrast, DeepLoc updates its parameters by training directly on image data, thereby enabling it to learn patterns optimized for the classification task.”
This resulted in a 47-percent improvement over the comparator algorithm. Furthermore, they explained, DeepLoc overcame challenges such as segmentation by training directly on “bounding boxes” around single cells.
“The lack of dependence on accurate segmentation and the large variety of patterns that can be learned from the large training set enabled DeepLoc to accurately classify cells in challenging datasets where cell morphology is abnormal, such as yeast cells treated with pheromone,” they explained. “Furthermore, this feature enabled DeepLoc to analyze images generated through a highly divergent microscopy screen performed by another laboratory with limited additional training.”
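The bounding-box idea is simpler than it sounds: instead of tracing each cell's outline pixel by pixel, a fixed-size crop is cut around an approximate cell center and fed to the classifier. The sketch below is a minimal illustration of that cropping step, not DeepLoc's actual pipeline; the field image and detector coordinates are made up:

```python
import numpy as np

def crop_boxes(image, centers, size=8):
    """Cut fixed-size bounding boxes around approximate cell centers.
    Unlike per-pixel segmentation, no cell outline is needed; a rough
    center estimate is enough to produce a training example."""
    half = size // 2
    padded = np.pad(image, half)  # pad so edge cells still yield full boxes
    boxes = []
    for (r, c) in centers:
        # (r, c) in original coords maps to (r + half, c + half) in padded coords,
        # so this slice is centered on the cell.
        boxes.append(padded[r:r + size, c:c + size])
    return np.stack(boxes)

field = np.random.default_rng(0).random((64, 64))   # stand-in microscopy field
centers = [(10, 12), (40, 41), (55, 20)]            # e.g. from a coarse blob detector
boxes = crop_boxes(field, centers, size=8)
print(boxes.shape)  # (3, 8, 8): one crop per cell, ready for a classifier
```

Because the crop tolerates imprecise centers and odd cell shapes, the same preprocessing carries over to abnormal morphologies, such as the pheromone-treated yeast the authors describe, without retuning a segmentation step.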
DeepLoc became the focus of efforts to spin out their expertise into Phenomic AI.
Expanding on this effort, Kraus says the company is developing a machine-learning technique based on unsupervised learning.
“If we don’t know exactly which conditions are relevant that we want to classify against, we train the model to predict every single treatment,” he explains. “Even though we know those treatments ahead of time, the model learns a really good clustering of the data that way.”
He highlights a SMAD4 marker that localized to the nucleus under only one set of conditions, whereas it localized to the cytoplasm under all other conditions.
“So, that’s the only one that really shows up as a unique cluster,” he says.
Regardless of one’s comfort level with artificial intelligence and machine learning, the nature, diversity and scale of the data sets arising from high-throughput, high-content imaging screens are quickly making it harder to do meaningful analysis without them.
“We’re seeing a lot of more complex data sets coming through—3D spheroids, organ-on-a-chip—things we really can’t analyze with the pre-deep learning approach,” Kraus says, particularly when looking at more complex phenotypes like time-course data and really trying to understand the biology in a more automated and accurate way.
“Before a lot of the deep-learning techniques, the workflow was basically segmenting every single cell in the image and then extracting a lot of measurements like size, area, shape, density,” he explains. “The actual segmentation part was really pretty challenging and needs to be retuned for every single screen. If that part is not accurate, then the numbers don’t really mean anything.”
And that process is made all the more challenging in complex co-culture 3D assays, he presses, where it can be quite difficult to define a single-cell boundary with so much going on.
“A lot of the AI [models] overcome that by training directly on the pixel data, so you don’t have the complication of having to convert the images into numbers or features,” he says. “The model does that itself directly from the images.”
Another way to avoid segmentation, according to Kraus, is by training the system on full fields of view rather than single cells.
“You can just extrapolate those directly from experimental layout,” he explains. “If you know what treatment is in every single well, you kind of get the label information for free.”
As well, he adds, skipping segmentation means not having to worry if cell lines or protocols are changed.
“The devil is in the details,” Brinkman weighs in. “People think that artificial intelligence will immediately spit out an answer, but the training data sets are really important. It’s an important tie-back to the understanding that you must have consistency in the quality of the data that’s going into the system.”
“You have to have consistent image quality, acquisition conditions, cell culture,” he remarks. “There’s a whole assay manual that comes out of NIH with guidance about how to culture your cells and make sure that the cell culture passage timing is reproducible because if you passage at a different time or different passage numbers, then you can get different results. All of these things definitely play a role in whether you can have confidence in the assay results.”
Aside from its own in-house projects—developing fibroblast-targeting immunotherapies against cancer and fibrosis—Phenomic AI is looking to partner with other pharma companies to enter new areas of biology, says Cooper.
A big part of that goal, he notes with earnest emphasis, is the company’s ongoing efforts to hire people with pharma experience.
Bright fields ahead
So, where is the intersection of phenotypic and molecular analysis going next? Brinkman laughs.
“I have to be a little careful about that,” he chuckles. “I am a strategy guy and an inventor, so I can’t say too much explicitly.”
“Clearly, we need to continue to pursue technologies that allow us to image cells and tissues in a physiologically relevant way,” he continues. “We need to do that faster. We need to do it more accurately.”
And, he adds, we need to start looking at the possibilities of manipulating cells and tissues genetically, not only through drug screening applications, but also using things like CRISPR technology that can allow us to really test hypotheses.
“The combination of cell manipulation technology with the ability to view cells and tissues in a physiological context, those kinds of technologies are really going to be the future of drug discovery,” he offers.
“We’re getting to the convergence of being able to recapitulate certain physiological responses, not only in microtissues or organs-on-a-chip or bodies-on-a-chip, and that’s coming together with the ability to analyze the data in more meaningful ways and more reproducible ways,” Brinkman reflects. “It is a really exciting time for the field.”