Cell Biology Special Report:
Feeling out phenotypes
July 2015
It was bad enough that Uncle Vernon
decided that the best time to hold the family reunion in Alabama was mid-July, when the humidity blows past 215 percent and the thermometer never drops below
110°F, but now your camera doesn’t seem to work.
You’ve crowded all your sweaty relatives into the
shadiest spot you could find, but even that relief isn’t enough to ease the spirits as you fiddle with the shutter button and electronic menu.
As you continue to fiddle, however, your irritating cousin connects your current difficulties to a recent incident
involving an over-enthusiastic barbecue, and suddenly the camera snaps a photo.
As your mom tries to determine who
has fainted in the heat, you scan your camera, unsure why it suddenly worked. Looking at the image, you see people smiling, presumably at your
cousin’s little jab. And then it hits you.
The camera has a smile sensor.
It wouldn’t take a photo when everyone was miserable. But in the presence of humor, that misery changed its appearance from a frown into a
smile. However temporarily, the phenotype of your family changed.
And the same basic technology that took that
photo is also available in the research setting where, in its ultimate expression, only experimental conditions that provide a desired change move
forward.
Phenotype renaissance
The ultimate goal of
any medical treatment is to change a clinical outcome for a patient, whether it is the shrinkage of a tumor, the amelioration of pain, the clearing of skin
blemishes or a multitude of other goals. Historically, these gross morphological and pathological changes were the sole basis of intervention as doctors
would give a patient medicine and see if they got better or worse.
As our understanding of disease pathology at
the organ, cellular and subcellular level improved with the introduction of new technologies, however, our approach to health became much more targeted. The
gross, blurry picture of health became much more focused and refined.
Whole new avenues of drug development opened
up with the advent of the omics technologies, where previously qualitative analyses became much more quantitative, and disease states were described in much
more specific terms of gene sequence, metabolite levels and protein expression and modification.
Unfortunately, despite some amazing medical successes arising from these reductionist approaches, success has not been a given; many new therapies have acted precisely as expected on their targets, yet surprises have arisen.
“I think everyone got really
excited about reductionist approaches and particularly genomics and the idea that one gene would equal one target would equal one small molecule,”
opines Merrilyn Datta, chief commercial officer at tissue informatics company Definiens.
“The fact is that with complex, multifactorial disease, if there’s any heterogeneity in the target mechanism, you run into problems.”
Her solution is to reincorporate some of those early medical principles with the newer methodologies, noting,
“At the end of the day, we may not know every mechanism of action, but we need to stop a phenotype [editor’s note: phenotype, in this
context, meaning disease].”
“I think the scientific interest has always been there to
understand the phenotype side,” adds Philip Lee, director of global marketing for cell culture at EMD Millipore. “But from a technology and market perspective, I do tend to agree that there is a realization that you can’t keep
going ahead with the pure omics technologies without catching up with some of the less-well-developed fields around cellular phenotypes and how to translate
the systems and the information into actual behavior.”
Both Datta and Jacob Tesdorpf, director of high-content instruments and applications at PerkinElmer, highlight the importance of cellular context by looking at immuno-oncology (see also our special report, Body, Heal Thyself, in the June 2015 issue of DDNews).
“Immuno-oncology is all about context in the tissue,” Datta says. “How many immune cells have made it to
what part of the tumor? You can’t do that without looking at the phenotype.”
“Inflammatory cells
can really work in conflicting ways,” Tesdorpf explains. “They can be either tumor-supporting or tumor-killing. And you really need to understand
the balance of these different types of cells quite well to get a good prognosis, and maybe even for therapy.”
The predominant tool for analyzing the immune cell makeup has been flow cytometry, he continues.
“Everybody knows his CD4, his CD8, whatever surface markers that have been used for many years now to classify different types of immune
cells,” he says. “And that works very nicely if you try to look at these cells in blood. But once you come to a solid tumor, flow cytometry
fails, and even if you try to mash up the tumor and isolate the cells from the tumor, you lose the context.”
This is where technologies such as PerkinElmer’s Opal platform and its multispectral imaging and analysis software step forward, he says,
allowing researchers to multiplex immunology markers in a single assay on a single slide and really get at the picture of how the immune response is
distributed in the tumor environment. (These platforms were recently described in detail in a review published by PerkinElmer’s Clifford Hoyt in
Methods.)
“The cells can compensate in so many different ways that if you’re not really
looking at the manifestation of cancer and the phenotype of that cancer, it is really tricky to know if you’re hitting the right things,” Datta
continues, suggesting Definiens has seen an “explosion” in the number of conversations about tumor microenvironment. “What numbers of these
types of cells are in the microenvironment under different conditions? Or what’s the relative area covered by immune cells versus non-immune cells in a
microenvironment?”
“Once you do it computationally, the way you can manipulate the data and find out
what matters is much stronger,” she states. “The human eye could maybe count some of the cells, but we can’t do it in the same way that a
computer can just massively quantify all different aspects.”
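To make Datta’s point concrete, here is a minimal sketch, assuming a per-cell table exported from an image-analysis pipeline; the column names and cell-type labels are hypothetical illustrations, not Definiens outputs.

```python
# Count immune cells and their relative area in a region of interest,
# given one row per segmented cell with a type label and an area.
import pandas as pd

def summarize_microenvironment(cells: pd.DataFrame) -> dict:
    """cells: one row per segmented cell, columns 'cell_type' and 'area_um2'."""
    immune = cells[cells["cell_type"].isin(["T_cell", "B_cell", "macrophage"])]
    total_area = cells["area_um2"].sum()
    return {
        "n_cells": len(cells),
        "n_immune": len(immune),
        "immune_fraction": len(immune) / len(cells),
        "immune_area_fraction": immune["area_um2"].sum() / total_area,
    }

# Toy data standing in for a real segmentation export:
cells = pd.DataFrame({
    "cell_type": ["T_cell", "tumor", "tumor", "macrophage", "tumor"],
    "area_um2":  [120.0, 310.0, 280.0, 200.0, 295.0],
})
print(summarize_microenvironment(cells))
```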
To address these issues, researchers have
attempted to combine the best features of both reductionist omics technologies and more contextual cellular, tissue and model-organism approaches.
Pass a tissue
As was described at length in last year’s
special report on cell biology, titled Life Moves On (July 2014
issue of DDNews), a significant amount of movement into phenotype analysis has come from the cell imaging and high-content screening side of the
lab, where advances in microscopy and image analysis have both expanded and enriched cell and tissue analysis.
This, in fact, is the sole mission of Definiens, which recently hosted an international symposium dedicated to tissue analysis, a process the company has branded tissue phenomics.
“With tissue phenomics, we ‘datafy’ our tissue with image
analysis,” the University of St. Andrews’ Peter Caie told the
symposium. “And image analysis…can look at morphometry, can look at biomarker distribution, it can look at many different things. And we
build up a complex hierarchical image-based signature, which we use to stratify patients in either a prognostic or predictive manner.”
“We really want to move away from probabilities or population probability statistics and move toward personalized,
informed clinical decision-making,” he added.
And key to this datafication, says Datta, is the ability to
correlate that information with other data types such as clinical outcomes or genomics.
Caie’s research is a perfect case in point; he used image analysis to
correlate morphological features with survival in patients with mid-stage colorectal cancer, in the hope of distinguishing those patients who would respond
well to surgical resection from those who would relapse and die.
As he described in the Journal of
Translational Medicine, his group identified three factors that seemed to correlate with poor prognosis: tumor budding, lymphatic vessel density and lymphatic vessel invasion.
“The ability to quantify prognostically relevant histopathological features, in a robust and
routine manner through automated image analysis, will not only standardize the practice and negate observer variability but will free up a
pathologist’s valuable time,” the study authors wrote. “We believe that as digital pathology becomes more commonplace within the clinic,
automated quantification of histopathological features, as demonstrated here, will become an invaluable tool in the pathologist’s repertoire to
stratify high-risk cancer patients.”
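For readers curious how image-derived features like these are tied to survival, here is a hedged sketch using the open-source lifelines package; the data are entirely synthetic, and the column names merely echo the features reported in the study rather than its actual data.

```python
# Fit a Cox proportional-hazards model relating image features to survival.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "tumor_budding":  rng.poisson(5, n),          # buds per field (synthetic)
    "lymph_density":  rng.normal(1.0, 0.3, n),    # vessels per mm^2 (synthetic)
    "lymph_invasion": rng.integers(0, 2, n),      # invasion present? (synthetic)
})
# Synthetic survival times loosely driven by the features, for illustration only.
hazard = 0.02 * np.exp(0.15 * df["tumor_budding"] + 0.8 * df["lymph_invasion"])
df["survival_months"] = rng.exponential(1.0 / hazard)
df["event_observed"] = rng.integers(0, 2, n)      # 1 = death observed

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event_observed")
cph.print_summary()  # hazard ratio per feature, with confidence intervals
```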
“The reality is that having the ability of multiplexing gives you
the ability to control for certain effects,” offers Tesdorpf. “It gives you a much broader understanding of what is going on in your sample. And
ultimately this will provide more robust data.”
And as with Caie’s patient-stratification work, some
researchers are investing in personalized therapeutic screening, according to Tesdorpf.
“They take tumor
biopsies and expand the tumor cells from that patient biopsy and then try to basically treat the cells in vitro to find which drugs are most
efficient for this tumor at this stage for this patient,” he explains.
“There are a lot of companies
out there trying to look at either circulating tumor cells, cell-free DNA or DNA from tumor samples to get a good understanding of what genetic
rearrangements and modifications are present in the tumor,” he expands on his point. “And then, where there are known drugs that are specifically
effective against one of these mutations, direct therapy in that direction.
“What I think is also important,
especially if you think about individualized screening, is the software has improved even more in providing a number of different tools to really describe
the phenotype, first from a feature perspective—what kinds of features can you extract and use as descriptors—and then from a
clustering, machine-learning point, where you can then use these tools to come up with a robust descriptor and say, if a cell looks like this, that’s a
good sign; if a cell looks like this, it’s a bad sign.”
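A minimal sketch of the extract-then-classify workflow Tesdorpf describes, using scikit-learn; the features and labels below are random stand-ins, not output from any vendor’s software.

```python
# Train a classifier on per-cell feature vectors to label phenotypes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12))                 # 12 hypothetical image features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # toy "good/bad sign" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("most informative features:", np.argsort(clf.feature_importances_)[::-1][:3])
```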
More than pictures
Of course, cells are not static entities, and there is more to their phenotype than how they appear.
“If you take something as simple as toxicity or cell death, it’s not simply a binary response,” says Millipore’s Lee. “There are signals
that are changing over time that influence the end result, and the ability to study those dynamics is going to be a very large and growing field for cell
biology.”
He also offers the example of stem cell differentiation, where the path that a certain cell takes
is dependent on multiple events that occur over a long period of time. Thus, he argues, you want a system that can dynamically track how the colony develops
and allows you to modify the environment on the fly.
“It’s a good illustration of the importance of
being able to modify your experimental conditions as the cells themselves are making decisions and changing over time,” he concludes.
For Millipore, one of the answers to this challenge came with the coordination of microfluidics with cellular imaging in
their CellASIC ONIX platform, which allows researchers to vary experimental conditions over time.
“Using our
microfluidic control technology with perfusions and solution flow, you can change the different drugs or inhibitors or stimuli that the cells are exposed to
while you’re collecting data from them in a live-cell context,” Lee explains. “That is a good first step toward normalizing and giving
experimenters control over the cellular environment aspect, which traditionally has been very difficult and causes variation between results in different
labs with different experiments.”
According to Lee, the company is constantly learning new ways that their
platforms can be applied in the laboratory, highlighting the importance of end users in the development of any new technology.
“We definitely learned a lot about working with the right types of customers to understand applications of
technology,” he says of the CellASIC ONIX platform’s early days. “The only way to get to that level of specialization is to work closely
with customers at the leading edge, developing the cells, developing the assays, understanding what matters and what doesn’t matter.”
Michelle Visagie and colleagues at the University of Pretoria recently described their efforts to study the apoptotic properties of an antimitotic estradiol in breast cell lines using Roche’s
xCELLigence platform. Rather than rely on cellular imaging, xCELLigence uses changes in electrical impedance that occur as cells attach to and proliferate
across microelectrodes that run across the bottom of porous multiwell plates, thus allowing researchers to perform label-independent experiments.
Publishing in Cell & Bioscience, the researchers showed that the compound blocked the cell cycle in three different
breast cancer cell lines, and they confirmed apoptotic induction using flow cytometry. Interestingly, despite concentration-dependent inhibition of cell
proliferation in all cell types, one cell line was able to recover after 24 hours of exposure to the test compound.
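As an illustration of how such impedance traces can be analyzed, here is a small sketch with invented numbers that normalizes a cell-index trace to the time of compound addition and flags recovery; it is a generic approach, not the xCELLigence software’s own method.

```python
# Normalize an impedance "cell index" trace to dosing time and test for recovery.
import numpy as np

def normalized_cell_index(ci: np.ndarray, t: np.ndarray, t_dose: float) -> np.ndarray:
    """Divide the trace by its value at the time of compound addition."""
    return ci / ci[np.searchsorted(t, t_dose)]

t = np.arange(0, 72, 1.0)                          # hours
ci = 1.0 + 0.05 * t                                # untreated-like growth trace
ci[t >= 24] *= np.exp(-0.02 * (t[t >= 24] - 24))   # slowdown after dosing at 24 h

nci = normalized_cell_index(ci, t, t_dose=24.0)
recovered = nci[-1] > nci[np.searchsorted(t, 48.0)]  # index rising again late?
print("recovery after exposure:", bool(recovered))
```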
Cells, of course, don’t have to move to undergo physiological changes in response to disease progression or therapeutic intervention. In many
cases, the changes take place within the cells themselves, whether in the form of altered gene expression or metabolism.
This was highlighted recently by the work of Bayer Pharma’s Patrick Steigemann and
colleagues, who examined how cells in different parts of a tumor respond to growth in hypoxic or even anoxic conditions. Among the tools they used, which
included 3D spheroid cultures and cell imaging, was Seahorse Bioscience’s XFe extracellular flux analyzer, which monitored oxygen consumption
and glycolysis to give a sense of metabolic processing (see sidebar article below, How’d they do that?).
At the recent American Association for Cancer Research meeting in Philadelphia, Seahorse
introduced a test kit for its next-generation analyzer, the XFp, which facilitates real-time analysis of live cells to determine their baseline and stressed
metabolic phenotypes.
In announcing the launch, company chief scientific officer Dr. David Ferrick described the
system as “a way to map the metabolic phenotype of any cell, regardless of its energy and metabolic status, by integrating our real-time measures of
mitochondrial respiration and glycolysis into a single powerful test.”
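The underlying idea can be sketched in a few lines: classify a sample by where its oxygen consumption rate (OCR, mitochondrial respiration) and extracellular acidification rate (ECAR, glycolysis) fall, at baseline and under stress. The cutoffs and values below are invented for illustration, not Seahorse’s published thresholds.

```python
# Map a (OCR, ECAR) measurement onto a simple metabolic-phenotype quadrant.
def metabolic_phenotype(ocr: float, ecar: float,
                        ocr_cut: float = 50.0, ecar_cut: float = 20.0) -> str:
    if ocr >= ocr_cut and ecar >= ecar_cut:
        return "energetic"    # high respiration and high glycolysis
    if ocr >= ocr_cut:
        return "aerobic"      # respiration-dominant
    if ecar >= ecar_cut:
        return "glycolytic"   # glycolysis-dominant
    return "quiescent"

baseline = metabolic_phenotype(ocr=60.0, ecar=10.0)
stressed = metabolic_phenotype(ocr=90.0, ecar=35.0)
print(f"baseline: {baseline}, stressed: {stressed}")
```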
But whether the platform uses imaging
or any other analytical process, the end goal is the ability to make informed decisions about the next experiment or the next stage of treatment.
Informed decisions
Even when you manage to identify something
that is a potential hit, Tesdorpf cautions, you may not have a clue of what your target might be. And in the past, he says, target deconvolution from that
phase was very difficult.
“These days, target deconvolution is still a key bottleneck in a true phenotypic
drug discovery approach, but at least there are a couple of things you can do,” he enthuses. “There are clustering technologies where you
basically use information about drugs that have known targets and may produce similar phenotypes as a starting point to identify potential
targets.”
“There are siRNA technologies that you could use to pinpoint the target,” he adds,
tipping his hat to the omics strategies. “There is a whole lot of genomics information that you have that might help you to find the
target.”
Part of the challenge, from Datta’s perspective, is helping cell biologists think in
computational terms.
“Particularly for my generation of graduate students, obviously you learned statistics
and you learned to look at data and for structures in your data, but you don’t grow up doing a ton of programming,” she explains. “But I do
think the next generation of scientists—we always talk about digital natives these days—have grown up a little bit more with learning computer
programming in high school. So I think it’s going to change over time—the line will blur between bioinformatics and cell biology.”
Tesdorpf agrees.
“The people in our field are getting more and more
informatics-savvy,” he says. “There are lots of tools available both from the commercial end as well as from an open-source perspective that help
with that.”
The challenge as he sees it may be much more in terms of how researchers understand the data
they are accessing, as the connection between the feature and the outcome may not be obvious and yet still be highly informative.
“It’s easy if you can say a good cell is a big cell and you can measure the size of the cell, which easily
translates to a biological effect that is somehow expected or at least is understandable from the type of biology you’re trying to do,” he
explains. “But that might not be a sufficient way to discriminate between phenotypes and to make subtle differentiations. What happens with a lot of
the better classifiers is that you get a lot of features lumped into one sort of decision-making matrix that are not necessarily easily translated into
something that the biologist might relate to.”
He offers the example of cellular texture.
“Texture is a way to describe an intensity distribution. If I have all of the intensity in my cell concentrated into
one single spot, it’s a different kind of texture than if I have the same total amount of fluorescence distributed in some sort of ripples or something
like that. Texture is a very robust and convenient way to describe this sort of different distribution, but it is sometimes more difficult for a biologist to
accept that a very abstract value like texture gives them a great way of differentiating that.”
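One standard way to turn that abstract value into a number is the gray-level co-occurrence matrix (GLCM). The sketch below, which assumes a recent scikit-image (the function was spelled greycomatrix before v0.19), shows how Tesdorpf’s two cases, intensity concentrated in a single spot versus distributed in ripples, yield very different texture descriptors.

```python
# Compare GLCM texture descriptors for a "spot" versus a "ripples" pattern.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Two toy 8-bit "cells" with the same idea of concentrated vs. rippled signal.
spot = np.zeros((32, 32), dtype=np.uint8)
spot[14:18, 14:18] = 255
ripples = (127 + 120 * np.sin(np.arange(32) / 2.0)).astype(np.uint8)
ripples = np.tile(ripples, (32, 1))

for name, img in [("spot", spot), ("ripples", ripples)]:
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    print(name,
          "contrast:", float(graycoprops(glcm, "contrast")[0, 0]),
          "homogeneity:", float(graycoprops(glcm, "homogeneity")[0, 0]))
```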
Datta,
however, suggests that this lack of intuitive connection could potentially be a strength of the analytical systems.
“My favorite example is from Andy Beck,” she says, referencing the
director of the Molecular Epidemiology Research Laboratory at Beth Israel Deaconess Medical Center. “He did a hypothesis-free experiment where he
taught the Definiens software about 600 different features—which is a lot; it’s more than people had done prior to this point—and he
started doing correlations with different types of data.”
“Prior to this, everybody thought any kinds
of markers that we find that correlate with longevity in the cancer patient, or let’s say more positive clinical outcomes, are going to be within the
tumor,” she continues. “Doing this approach, the computer came out with some stromal markers. Everybody thought there can’t be markers in
the stroma, but sure enough, when you let the algorithms run and let the computer mine the data without our bias that it must be in the tumor, you do see
these other markers that you wouldn’t have detected before.
“It’s the irony that we think
we’re more creative than the computer, and maybe we are, but in our biases, we miss things.”
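The hypothesis-free approach Datta describes boils down to correlating every feature with outcome and correcting for multiple testing, letting stromal features compete with tumor features on equal footing. Here is a minimal sketch on synthetic data, where one planted feature survives a Bonferroni correction over 600 tests.

```python
# Screen 600 features against an outcome without a prior hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_patients, n_features = 300, 600
X = rng.normal(size=(n_patients, n_features))
outcome = X[:, 412] * 0.5 + rng.normal(size=n_patients)  # feature 412 matters

pvals = np.array([stats.pearsonr(X[:, j], outcome)[1] for j in range(n_features)])
hits = np.where(pvals < 0.05 / n_features)[0]  # Bonferroni over 600 tests
print("features surviving correction:", hits)
```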
Millipore’s Lee, meanwhile, goes back to the heterogeneity issue inherent in cellular systems.
“Whether you’re thinking about phenotypes or cell experiments, one of the biggest challenges that cell biologists inherently know in their
training is that not every cell is the same,” he presses. “Even if you have a well with 2,000 cells in it and you collect billions of bits of
data from it, the fact is that certain cells are doing different things than other cells.
“And if you
can’t really tease that apart, you’re going to end up with a mass of information where it’s really hard to interpret what’s going
on.”
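One common way to tease such a well apart, sketched below on simulated data, is to fit a mixture model to the per-cell measurements rather than reporting the well average; this is a generic illustration, not Millipore’s analysis pipeline.

```python
# Recover two subpopulations hidden behind a single well-average value.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# 2,000 cells in one well: 70% respond (low signal), 30% do not (high signal).
per_cell = np.concatenate([rng.normal(1.0, 0.2, 1400),
                           rng.normal(3.0, 0.4, 600)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=4).fit(per_cell)
print("well mean (hides the split):", float(per_cell.mean()))
print("subpopulation means:", gmm.means_.ravel())
print("subpopulation weights:", gmm.weights_)
```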
This is where the lessons learned from omics technologies come into play.
A nod to omics
There is definitely a bit of rivalry between the
omics- and phenotype-driven camps, with those in the latter often looking on in awe at what the omics have accomplished.
Carlo Bifulco, medical director of the Providence Oregon Regional Laboratory, expressed his frustrations to the tissue phenomics symposium last autumn.
“Where are we after
14 years?” he asked. “We’ve made progress, with some very exciting things today. But the progress hasn’t been as fast as I would like
it to be.”
“I just want to compare this to how things have changed in genomics, and the huge
investments getting made in genomics, and how we are a little lagging behind,” he explained, highlighting that between 2001 and 2013, the cost of
sequencing an entire genome dropped from about $100 million to less than $10,000.
“I definitely
don’t feel like we’ve had that kind of growth on our side.”
At the same time, technology
developers are quick to acknowledge the groundwork that omics has established in preparing the scientific community for what was to come.
“Genomics drove the concept that we could deal with Big Data, we could deal with a whole genome,” offers Datta.
“Now we’re kind of importing that mindset over to cell biology.”
“Had genomics not driven
the whole idea of a bioinformatics pipe and really driven the idea that we’re going to be connected to the Internet and have all kinds of data storage
in labs, I don’t know that we would have gotten so far so quickly in terms of our thinking of using bioinformatics for images in big
pharma.”
No one is arguing that phenotype should replace omics technologies; rather, there is room for the approaches to inform each other. In fact, from Tesdorpf’s perspective, one has limited options without the other.
“Even if you are using a phenotypic approach, ultimately you will have a targeted drug,” he explains. “The
pharma company discovering that drug will have to identify the target to greatly enhance the chances of getting approval at the FDA.”
How’d they do that?
One of the
key challenges of treating cancer, at least in solid tumors, is less that the cancer cells are proliferating uncontrollably than that the tumor itself
is quite heterogeneous. Some cells—particularly those near the angiogenic vasculature—may be growing rapidly, while those closer to the tumor
core can be more dormant, coping with hypoxic or even anoxic conditions.
This may be one reason why cytostatic
chemotherapies only provide limited relief from the cancer, according to Bayer Pharma’s Patrick Steigemann and colleagues in a recent paper published
in Experimental Cell Research.
“Dormant cancer cells could potentially lead to disease relapse
after cytostatic-based chemotherapy,” they wrote. “Therefore, targeting this cell population could be of interest to enhance cytostatic-based
chemotherapy.”
To test this theory, the researchers grew multicellular tumor spheroids (MCTSs) as a model
for solid tumors in 384-well clear-bottomed plates and probed their viability against two bioactive molecule libraries using fluorescent dyes and high-content imaging.
Initial probing with two known cytostatic agents—cisplatin and paclitaxel—showed that
indeed cell death only occurred in the outer spheroid region, reflective of the actively proliferating cancer cells, and that the core cells, once cleared of
the dead cells, were still viable.
Screening the two libraries (1,120 compounds), the researchers identified nine
compounds that selectively killed the inner core of the MCTS while leaving the outer ring of cells viable. Further validating the use of 3D culture, none of
these compounds induced cell death in 2D cultures under similar conditions.
As two of the nine compounds were
well-known inhibitors of the respiratory chain, the researchers examined whether the other compounds giving a similar phenotype impacted cellular
respiration. They monitored both oxygen consumption and lactate production using the electro-optical XFe extracellular flux analyzer, and
determined they could classify all of the hits as respiratory chain inhibitors.
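The classification logic can be reduced to a simple rule of thumb: a hit whose oxygen consumption collapses relative to vehicle behaves like a respiratory chain inhibitor. The threshold and the measurements below are invented for illustration and are not from the Bayer study.

```python
# Flag compounds whose treated OCR falls far below the vehicle control.
def is_respiratory_chain_inhibitor(ocr_treated: float, ocr_vehicle: float,
                                   max_residual_fraction: float = 0.3) -> bool:
    """True if OCR under treatment is below the allowed residual fraction."""
    return ocr_treated / ocr_vehicle < max_residual_fraction

hits = {"compound_A": (12.0, 80.0),   # (treated OCR, vehicle OCR), invented
        "compound_B": (70.0, 80.0)}
for name, (treated, vehicle) in hits.items():
    print(name, "->", is_respiratory_chain_inhibitor(treated, vehicle))
```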
Repeating the experiments on
several cancer cell lines—colorectal adenocarcinoma, epithelial prostate cancer and colon cancer liver-metastases—they noted the same core death
phenotype.
“Given that cancer cells are predominantly glycolytic (Warburg effect) and the observation that
the targeted cells are located in tumor areas with lower oxygen supply, the identification of respiratory chain inhibitors to selectively target cells in
MCTS core regions was rather surprising,” the authors wrote, suggesting a level of complexity not previously noted. “However, MCTS slices stained
with pimonidazole, a marker for hypoxia, show that the target dormant cells are not hypoxic but rather are located in regions of intermediate oxygen supply,
with anoxia only in the innermost spheroid regions.”
The findings open the door to new combinations
of chemotherapeutic agents that include cytostatics with respiratory chain inhibitors to ensure that not only are the actively proliferating cells destroyed,
but also the dormant inner cells that might otherwise trigger relapse.
Synthetic biology reveals how gene overexpression induces cell reprogramming
TOKYO—As the Tokyo Institute of Technology (Tokyo Tech) notes in a recent news release, in iPS technology, gene overexpression can induce
reprogramming of a cell from a differentiated state to a stem cell state. However, the mechanism of reprogramming via gene overexpression remains unclear in
spite of the reproducibility of iPS technology.
Daisuke Kiga, Kana Ishimatsu and colleagues at Tokyo Tech
and RIKEN Advanced Science Institute now say they have devised a theoretical expression of cell reprogramming, and proved the idea by using synthetic-biology
experiments where simplified genetic circuits were constructed in living cells.
According to Tokyo Tech, the
artificial genetic circuit consists of a bistable basal switch and tunable over-producing system. Modulated induction of over-expression temporarily creates
a monostable system “and thus easily controls the inner state of those cells with the circuit. When cells with one of the basal two steady states are
modulated, their cell-inner states around the watershed of basal bistable system are affected by the potential landscape of the genetic circuit. In addition
to the effect, fluctuation of the bio-reaction divides the cell populations into two.”
This cell culture
experiment, according to the researchers, demonstrates that the fine and subtle manipulation of the initial cell states, through the regulation of gene overexpression levels, results in the generation of a programmable bimodal distribution from a monomodal distribution. The team's mathematical analysis
further suggests that the reprogramming strategy can be applied to various types of natural gene networks.
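The circuit’s logic can be sketched with a one-variable toy model: a self-activating gene is bistable on its own, and adding a tunable basal production term (standing in for the induced overexpression) temporarily collapses it to a single steady state. The equation and parameters below are illustrative, not the published model.

```python
# Simulate a self-activating gene with and without tunable overexpression.
import numpy as np
from scipy.integrate import odeint

def dxdt(x, t, alpha):
    # alpha: basal overexpression; Hill self-activation; first-order decay.
    return alpha + 4.0 * x**2 / (1.0 + x**2) - x

t = np.linspace(0, 50, 500)
for x0 in (0.1, 3.5):
    low  = odeint(dxdt, x0, t, args=(0.0,))[-1, 0]  # bistable: fate depends on x0
    high = odeint(dxdt, x0, t, args=(2.0,))[-1, 0]  # monostable: both converge
    print(f"x0={x0}: no induction -> {low:.2f}, induced -> {high:.2f}")
```

Without induction the two initial conditions settle into different steady states; with induction both converge to a single high state, mirroring the press release’s description of a temporarily monostable system.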
The
original paper is titled “General applicability of synthetic gene-overexpression for cell-type ratio control via reprogramming” and appeared in
ACS Synthetic Biology. The work was supported by the Department of Computational Intelligence and Systems Science and the Earth-Life Science
Institute at Tokyo Tech, as well as RIKEN.
For more on stem cell issues, see the special report in our August
issue of DDNews.
An
alliance to transform cell-based screening in drug discovery
ALBUQUERQUE, N.M. & BASEL,
Switzerland—Genedata and IntelliCyt recently announced an alliance that they say “brings together the market-leading strengths of each company to
transform screening of suspension cells.” The technology alliance integrates the highly scalable data analytics and workflow capabilities of Genedata
Screener software with the high-throughput (HT) flow cytometry capabilities of the IntelliCyt iQue Screener system. The integrated solution is said to enable
drug discovery researchers to efficiently screen large libraries against suspension cells and multiplex beads in phenotypic drug discovery, antibody
screening, immunology and biomarker research—what the partners say is “an industry first.”
With
the novel HT technology from IntelliCyt built into the iQue Screener instrument, throughput in flow cytometry experiments is reportedly increased tenfold
to twentyfold over other plate-based flow cytometry approaches. The IntelliCyt iQue Screener system allows analysis of more than 10,000 cells or beads per
second, each with six to 15 parameters, resulting in 60,000 to 150,000 data points per second from a given sample.
“Advanced computational analysis solutions are required to handle the quantity and complexity of data obtained in high-throughput flow cytometry
experiments in large-scale screening applications,” according to Janette Phi, chief business officer at IntelliCyt. “The combined solution of
Genedata Screener software and the IntelliCyt iQue Screener platform addresses this unmet industry need. Our solution provides a powerful option for our
customers for robust and accurate data analysis across all plates from high-throughput flow cytometry data generated by iQue.”
In addition, the Genedata/IntelliCyt solution supports the FCS3 data file standard.
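For readers who want to poke at such files directly, here is a minimal hedged sketch of reading an FCS file in Python with the open-source fcsparser package; the package choice is an assumption of this sketch, not part of either vendor’s offering, and the file name is hypothetical.

```python
# Read an FCS flow cytometry file into a metadata dict and an events table.
import fcsparser

meta, events = fcsparser.parse("example_well_A01.fcs")  # hypothetical file
print("parameters per event:", len(events.columns))
print("events recorded:", len(events))
print(events.columns.tolist()[:6])  # e.g., scatter and fluorescence channels
```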
“The alliance between Genedata and IntelliCyt underscores our commitment to support all emerging technologies that drive value for our
customers,” notes Dr. Othmar Pfannes, CEO of Genedata. “It’s especially exciting to see a screening technology mature and scale to high
throughput and that Genedata Screener easily supports yet another technology that empowers researchers to quickly interpret flow cytometry results and
optimize result quality.”
Single-cell epigenetics comes to the Fluidigm
C1 system
SOUTH SAN FRANCISCO, Calif.—Fluidigm Corp. recently announced the availability of
Single-Cell ATAC-seq, an epigenetics application for the C1 system that allows researchers to explore the regulatory systems that drive cellular function.
This application is freely available to researchers on Script Hub, a new web portal within Fluidigm’s C1 Open App program.
As described in a paper published in Nature entitled “Single-cell chromatin accessibility reveals principles
of regulatory variation,” researchers have used ATAC-seq to identify single-cell DNA accessibility profiles from diverse cell types. Understanding the
accessible regions of the genome will reveal the role of DNA-binding proteins, nucleosomes, and chromatin compaction in regulating gene expression. Until
now, Fluidigm says, researchers needed at least 500 cells to identify accessible chromatin regions, which misrepresented the heterogeneity present in the
biological system.
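At its core, a single-cell ATAC-seq experiment yields a cells-by-peaks matrix of fragment counts; the simulated sketch below shows the basic per-cell accessibility summary such a matrix supports. The dimensions and counts are invented, not Fluidigm data.

```python
# Summarize per-cell chromatin accessibility from a cells-by-peaks count matrix.
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_peaks = 96, 50000                  # e.g., one 96-chamber run vs. a peak set
counts = rng.poisson(0.02, size=(n_cells, n_peaks))  # toy sparse fragment counts

accessible = counts > 0                       # binarize: is the peak open in that cell?
frac_open = accessible.mean(axis=1)           # per-cell fraction of accessible peaks
print("median fraction of peaks accessible per cell:", float(np.median(frac_open)))
```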
“We believe scATAC-seq will enable the interrogation of the epigenomic landscape of small
or rare biological samples allowing for detailed, and potentially de novo, reconstruction of cellular differentiation or disease at the fundamental
unit of investigation—the single cell,” according to Dr. William J. Greenleaf, principal investigator and an assistant professor in the
Department of Genetics at Stanford University.
Adds Candia L. Brown, Fluidigm’s single-cell genomics
business director of product marketing: “We designed the C1 to be very flexible. The Open App program is an ecosystem between method development
laboratories and our cell biology users. We created this program to give our users maximum flexibility to expand their capability and experimental strategy
over time and to showcase creativity. We’re thrilled to see the C1 community come together and pioneer the next frontier of single-cell biology:
single-cell epigenetics.”