Living organisms, tissues, cells and molecules are highly dynamic. The importance of their continuous and long-term observation has been recognized for over a century but has been limited by technological hurdles. Improvements in imaging technologies, genetics, protein engineering and data analysis have more recently allowed us to answer long-standing questions in biology using quantitative continuous long-term imaging. This requires a multidisciplinary collaboration between scientists of various backgrounds: biologists asking relevant questions, imaging specialists and engineers developing hardware, and informaticians and mathematicians developing software for data acquisition, analysis and computational modeling. Despite recent improvements, there are still obstacles to be addressed before this technology can achieve its full potential. This Commentary aims to provide an overview of currently available technologies for quantitative continuous long-term single-cell imaging, their limitations and what is required to bring this field to the next level. We provide a historical perspective on the development of this technology and discuss key issues in time-lapse imaging: keeping cells alive, using labels, reporters and biosensors, and hardware and software requirements. We highlight crucial and often non-obvious problems for researchers venturing into the field and hope to inspire experts in the field and from related disciplines to contribute to future solutions.
Organisms are composed of vast numbers of different molecules and cells that interact spatially and temporally. Researchers have access to a wide array of molecular and cellular assays to unravel these interactions (see examples in Fig. 1). However, most techniques are not sensitive enough to detect rare or transient events. Moreover, these assays are typically designed to look at cells or molecules at single time points and the complexity of highly dynamic processes can hardly be captured by such snapshot analyses (Kokkaliaris et al., 2012; Landecker, 2009; Schroeder, 2008; Schroeder, 2011).
The necessity for continuous observation of biological processes has long been recognized and can be traced back to the works of pioneers such as Antonie van Leeuwenhoek (17th century). van Leeuwenhoek used microscopes to look at capillaries in living rabbit ears where he observed blood flow and suggested that blood circulation is a closed system (Dunn and Jones, 2004; Frischknecht et al., 2009). The development of photography, microcinematography, fluorescence microscopy, cell and tissue culture and other techniques in the 19th and 20th centuries then made it possible to observe molecules in living cells and organisms continuously over long periods. This has been reflected by an almost exponential rise in studies using time-lapse imaging in the recent scientific literature, which spans most of the natural science fields. However, and although time-lapse imaging is conceptually simple, researchers today still face some of the same technical hurdles as they did in the early 1900s when trying to continuously monitor living specimens over extended periods of time.
The development of time-lapse microscopy was mainly fuelled by the experimental needs of embryologists and developmental biologists, who pioneered lineage tracing and fate mapping experiments. Interestingly, the embryologists and stem cell researchers of today still require such techniques to answer long-standing biological questions. For instance, time-lapse imaging has been successfully used to identify the asymmetric self-renewing division mechanism of muscle satellite cells (Kuang et al., 2007), to prove the existence of hemogenic endothelium (Bertrand et al., 2010; Eilken et al., 2009; Kissa and Herbomel, 2010), to demonstrate lineage instruction by cytokines (Rieger et al., 2009), to investigate the mechanisms of germ layer formation during gastrulation (Burtscher and Lickert, 2009), to study neural stem cells and neurogenesis (Asami et al., 2011; Costa et al., 2008; Costa et al., 2009; Costa et al., 2011), and to follow immune cell dynamics (Henrickson et al., 2008; Junt et al., 2007; Sung et al., 2012), to name a few.
Time-lapse microscopy is a multidisciplinary technique that has been developed through the interactions between mechanical engineers, chemists, physicists, biologists and, more recently, computer scientists. The aim of this Commentary is to highlight some of the crucial considerations when designing time-lapse imaging experiments for specific questions. Excellent reviews and online resources already exist on transmitted light microscopy, fluorescence microscopy, confocal and multiphoton microscopy and live-cell imaging (Herman, 2002; Lichtman and Conchello, 2005; Piston, 1999; Stephens and Allan, 2003; Frigault et al., 2009), and a basic knowledge of these techniques is assumed. For these reasons, we aim here to complement the literature by focusing on probing cellular processes using long-term continuous single-cell imaging, with an emphasis on stem cell biology. Nevertheless, the concepts discussed here can be applied to any cell population.
Unfortunately, and in contrast to what users venturing into the field of long-term time-lapse microscopy would like to hear, there is no single best way to do it. Depending on the specific question at hand and the available materials (biological and hardware), the multitude of parameters involved will typically have to be adjusted to find the best compromise for each project (Schroeder, 2011). We will thus focus on discussing potential problems at the different steps involved in time-lapse microscopy, including those the user will realize only long after having committed to this approach, and their potential solutions. No single approach suits all experiments, and users have to be prepared to optimize the combination of settings for each new question and cell type. Even in laboratories that specialize in these approaches, and that continuously incorporate the latest hardware and software, each new project requires several rounds of optimizing settings and strategy.
Time-lapse microscopy – a historical perspective
The need for continuous observation of life
In the late 19th century, embryology pioneers such as Charles Whitman, Edmund Wilson and E. G. Conklin looked at how early development proceeded from a single fertilized egg to a complex organism (Stent and Weisblat, 1985; Stern and Fraser, 2001). These scientists used simple organisms, such as freshwater leeches (Clepsine sp.), nematodes (Caenorhabditis elegans) and marine filter feeders (tunicates), because their development was rapid, spanning only a few hours. More complex organisms take longer to develop and were difficult to keep alive under the microscope for long enough. Moreover, it was easy to recreate the native environment of these organisms using fresh or sea water (Whitman, 1878). Microscopes were fitted with a drawing tube that projected the image onto paper, which facilitated manual drawing of the image by the observer (Paddock, 2001). This method was limited by the observer's bias and artistic skills. Nevertheless, the seminal studies by those scientists were the first lineage-tracing experiments. Around the same time, and a few years before the invention of cinematography by the Lumière brothers, Étienne-Jules Marey was using his chronophotographic gun to study physiology, and more specifically animal movement such as the flight of birds (Landecker, 2006). He founded the Marey Institute with the goal of manufacturing and standardizing instruments, in an attempt to make physiology an exact science. Although there have been many technical improvements since (Box 1), most core components of microscopes, and the problems associated with long-term live imaging, have remained surprisingly similar over the century.
Box 1. Microcinematography – time-lapse imaging from its analog birth to its digital present
The first reported microcinematography device was assembled by Marey and Lucien Bull in 1891 (Talbot, 1913). Several improvements were implemented during the next decade by Bull, James Williamson, Edmund J. Spitta and Antoine Pizon. These modifications aimed at reducing vibrations (using heavy oak tables), improving contrast (by the use of contrast agents) and keeping the specimens alive by reducing heat from illumination sources (through the use of lamps, shutters and filters). They also used perfusion to refresh culture medium. Amazingly, these technical hurdles are still at the heart of running a successful time-lapse imaging experiment today.
The first published time-lapse microscopy movies were produced at the Marey Institute by Pizon (Pizon, 1905), who studied colony formation in the tunicate Botryllus. His first movie comprised 775 images taken at three images per hour, thus spanning nearly 11 days. In 1908, Ries studied the fertilization and early development of sea urchins (Ries, 1909). Almost simultaneously, Chevroton and Vlès were achieving the same at the French marine biology station of Roscoff (Chevroton and Vlès, 1909). Unfortunately, none of these early time-lapse microscopy movies have survived to this day. The earliest movies still available are at the Pasteur Institute in Paris and were taken by Jean Comandon, who mounted a camera on a microscope to study syphilis-causing bacteria (Spirochaetes) (Breithaupt, 2002; Comandon, 1909; Frischknecht et al., 2009). To view Comandon's first movie, see supplementary movie 1 in Roux et al. (Roux et al., 2004).
At the time of Pizon, Ries and Comandon, 35- or 16-mm film cameras were used for microcinematography. Exposure times had to be carefully monitored, film loss in failed experiments was expensive and there were long waiting times before data could be analyzed. Ways to transform optical images into electrical signals had been explored since 1873, but it was only in 1934 that Zworykin invented the first practical video camera (Inoué and Spring, 1997). 35- or 16-mm cameras remained the method of choice until the 1980s, when they were largely replaced by monochrome video cameras (Fink, 2011). Today, microscopes are equipped with digital cameras (Hiraoka et al., 1987).
How to keep your cells healthy for continuous imaging
In order to perform long-term time-lapse imaging of mammalian cells, the most critical requirement is to keep cells alive and normal, that is, to recreate their native environment as closely as possible. Thanks to the seminal work of Harry Eagle and others half a century ago (see below), most biologists have a good knowledge of what cells require when they are placed in culture: an appropriate culture medium supplying nutrients to the cells, temperature and pH control, oxygen and CO2 supply, and maintenance of osmolarity. For time-lapse imaging, it is also important to reduce evaporation and phototoxicity to a minimum. In this section, we will discuss various means of maintaining optimal conditions long-term under the imaging setup.
Culture media, CO2 and buffering systems
Nearly all media are bicarbonate-based and require CO2 at concentrations higher than atmospheric conditions to maintain a physiological pH. The role of CO2 in cell culture extends beyond pH maintenance, as it is also required for many cellular processes, including glucose metabolism and proper functioning of ion and acid–base transporters (Bonarius et al., 1995; Kanaan et al., 2007). Most cell cultures today are maintained in humidified 37°C incubators that are supplied with 5% CO2. However, constant gas exchange in incubation chambers can lead to quick evaporation of culture medium. Removal of the cells from the incubator results in rapid changes in pH that can adversely affect the cells, even in the presence of bicarbonate. To avoid this, other buffering agents can be used. The most common is HEPES [4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid] at 25 mM but, as demonstrated by Eagle, many other buffers can be used and are non-toxic to most cell lines (Eagle, 1971). For primary cells, however, each investigator should carefully test these buffers for side effects on their favorite cell types. Phenol Red is typically included to monitor pH changes in culture medium but should be avoided in fluorescence time-lapse microscopy owing to its fluorescent properties in the blue–yellow spectrum.
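The dependence of medium pH on CO2 can be made concrete with the Henderson–Hasselbalch equation for the bicarbonate buffer system. Below is a minimal sketch, assuming the textbook pKa of ~6.1 and CO2 solubility of ~0.03 mM/mmHg at 37°C; the bicarbonate concentration used is illustrative, not a recommendation for any particular medium:

```python
import math

def medium_ph(hco3_mM, co2_percent, pKa=6.1, solubility=0.03, atm_mmHg=760):
    """Henderson-Hasselbalch estimate of bicarbonate-buffered medium pH.
    solubility: CO2 solubility coefficient in mM per mmHg at 37 degrees C."""
    pco2 = atm_mmHg * co2_percent / 100.0   # CO2 partial pressure (mmHg)
    dissolved_co2 = solubility * pco2       # dissolved CO2 (mM)
    return pKa + math.log10(hco3_mM / dissolved_co2)

# 24 mM bicarbonate (blood-like) in a 5% CO2 incubator vs room air (~0.04% CO2)
print(round(medium_ph(24, 5.0), 2))    # ~7.4, physiological
print(round(medium_ph(24, 0.04), 2))   # ~9.5, strongly alkaline
```

The ~2 pH-unit swing is why a bicarbonate-buffered dish alkalinizes within minutes of leaving the incubator.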
While running a time-lapse experiment, the easiest way to maintain pH is to use a microscope equipped with an incubator in which both temperature and CO2 can be adjusted and monitored. A simple alternative is to saturate the culture medium with CO2 and use sealed incubation chambers. Although not optimal for long-term imaging (which, depending on cell type, is typically longer than 4 days), this approach can be used when a few cells are analyzed to generate data from single cells, such as in a rare stem cell population. It must also be noted that most tissue culture plastic vessels are gas permeable and that some leakiness is to be expected. Because a large volume of medium is required, ways to limit cell movement (thus limiting the surface that needs to be covered by the microscope) need to be devised, such as creating extracellular matrix protein islands onto which cells adhere (Ravin et al., 2008), placing agarose, hydrogel or silicone microwells (Gilbert et al., 2010; Lutolf et al., 2009a; Dykstra et al., 2006) in a larger well, or using microfluidic devices (Lecault et al., 2011; Tay et al., 2010; Taylor et al., 2009). The use of large volumes of medium avoids evaporation issues and the need for a perfusion apparatus to refresh the medium continuously or periodically, but can contribute to increased noise in fluorescence imaging. Evaporation changes the osmolarity and the concentration of medium components and is a particular problem for very small wells. It is also an issue for large incubation chambers, which cannot be humidified without risking damage to oxidation-sensitive microscope parts. Again, the use of sealed chambers, large volumes of medium or covering the medium with water-vapor-impermeable liquids (e.g. mineral oil) can be considered. Adding fluorocarbon oil to the medium to provide oxygen to the cells while removing excess CO2 (Sluder et al., 2007) should be thoroughly tested on the relevant cells before use.
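The impact of evaporation on osmolarity follows directly from conservation of solutes: water leaves, solutes stay, so osmolarity scales with the inverse of the remaining volume. A back-of-the-envelope sketch with illustrative volumes shows why small wells are hit hardest:

```python
def osmolarity_after_evaporation(initial_mOsm, initial_volume_uL, evaporated_uL):
    """Solutes remain while water evaporates, so osmolarity
    scales as initial_volume / remaining_volume."""
    remaining = initial_volume_uL - evaporated_uL
    if remaining <= 0:
        raise ValueError("well has dried out")
    return initial_mOsm * initial_volume_uL / remaining

# Losing 20 uL overnight barely affects a 2 mL well but is drastic in a 50 uL microwell:
print(round(osmolarity_after_evaporation(320, 2000, 20), 1))  # ~323.2 mOsm
print(round(osmolarity_after_evaporation(320, 50, 20), 1))    # ~533.3 mOsm
```

The same absolute water loss that is negligible in a large well pushes a microwell far outside the physiological range (~280–320 mOsm).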
Commercial media, such as Leibovitz's L-15 medium, are also available that allow cell survival at atmospheric CO2 levels. However, in our hands these media allow long-term primary bone marrow cell survival in atmospheric air but without sustaining cell proliferation, even when supplemented with HEPES and/or sodium bicarbonate (unpublished observations).
Phototoxicity
Phototoxicity is a major concern in long-term imaging. Although seemingly obvious, it is important to point out that low-quality images of healthy cells are preferable to high-quality images of dying cells (Schroeder, 2011).
Phototoxicity is mainly caused by short-wavelength light that reacts with cellular components or experimentally added dyes (Pattison and Davies, 2006). This results in the production of reactive oxygen species and free radicals, and DNA damage (Dixit and Cyr, 2003; Godley et al., 2005; Grzelak et al., 2001). The most straightforward way to circumvent these issues is to minimize illumination. Transmitted light images can be recorded at high temporal resolution to allow tracking of cells. Fluorescence illumination should be used only when necessary, at low frequency and with low exposure times. Decreasing excitation intensity and extending exposure times can also help to reduce toxicity. The use of UV-excitable dyes should be avoided whenever possible, and red or far-red dyes should be preferred over green, yellow and blue ones. Furthermore, sensitive cameras and fast shutters should be used. Shutter-free systems (e.g. with fast switchable LED excitation) can drastically reduce toxicity and improve temporal resolution, but off-the-shelf commercial systems suitable for the typical biology user are not yet available.
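The advice above amounts to minimizing the cumulative excitation dose delivered over the experiment. A back-of-the-envelope comparison (all numbers are illustrative, with intensity in arbitrary units) shows why sparse fluorescence imaging still dominates the light budget of a run that also includes frequent transmitted-light snapshots:

```python
def total_light_dose(intensity, exposure_s, interval_min, duration_h):
    """Relative cumulative excitation dose: one image per interval,
    dose per image = intensity x exposure time (arbitrary units)."""
    n_images = int(duration_h * 60 / interval_min)
    return intensity * exposure_s * n_images

# Frequent, dim transmitted-light imaging vs sparse, bright fluorescence imaging
# over a 96-hour experiment (values illustrative):
bf = total_light_dose(intensity=1, exposure_s=0.02, interval_min=2, duration_h=96)
fluo = total_light_dose(intensity=100, exposure_s=0.5, interval_min=30, duration_h=96)
print(bf, fluo)   # fluorescence dose exceeds brightfield by orders of magnitude
```

Halving the excitation intensity while doubling the exposure time leaves this simple dose estimate unchanged, yet in practice often reduces toxicity, because phototoxicity scales non-linearly with instantaneous intensity.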
Mimicking the in vivo microenvironment
One important caveat of in vitro imaging is the near impossibility of fully recreating the in vivo microenvironment of the cell type studied. Cells in normal tissues require a combination of heterotypic cell interactions, paracrine and endocrine signals, adhesive cues from the extracellular matrix, signals from blood vessels and neural stimulation, among others, most of which are not fully understood. Many cell types do not behave normally when sorted to homogeneity and require interactions with other cell types. This can prove problematic when trying to generate single-cell data. The use of cell-type-specific reporters or genetically encoded labels can greatly facilitate the discrimination of stem cells and their progeny from the surrounding supporting cells. It is also possible to isolate the cells of interest from a mouse line expressing a fluorescent protein under a ubiquitous promoter and to co-culture them on non-fluorescent feeder cells.
Recently, the bioengineering of artificial microenvironments recapitulating in vivo cellular niches has much improved (Discher et al., 2009; Gobaa et al., 2011; Lin et al., 2012; Lutolf et al., 2009b). These strategies are very promising but are typically not commercially available and remain difficult for typical users to integrate with long-term continuous imaging.
In the following section, we will describe in more detail various approaches to label cells for continuous single-cell imaging, as well as ways to probe cellular processes non-invasively in time-lapse microscopy.
Tracking, probing and manipulating cellular processes continuously
Concurrent with the nearly exponential increase in the use of time-lapse imaging over the last two decades, many techniques have been developed or adapted to visualize, analyze and manipulate living cells and molecules non-invasively. In this section, we will discuss the main approaches used to visualize cells, molecules and organelles continuously in living cells, as well as methods used to analyze and manipulate molecular machineries in real time.
There is a wide variety of commercially available dyes that stain plasma membranes and thus allow the experimental tracking of cells; PKH26, PKH67, CFSE and wheat germ agglutinin are good examples. The fluorescent carbocyanine dyes also come in different colors. These stable dyes are easy to use but are diluted at each cell division, making them sub-optimal for long-term imaging of proliferating cells. Depending on the initial cell size, they are typically diluted below detectable levels after four to eight cell divisions. There is also a variety of organelle-specific fluorescent dyes that can be used for live-cell imaging and to assess the subcellular localization of a protein. These dyes are specific for the endoplasmic reticulum (glibenclamide), Golgi (ceramide), lysosomes (dyes that become fluorescent in acidic conditions) and mitochondria, but they are also gradually diluted with cell divisions. As some of these dyes can affect cell behavior and metabolism, any potential effects on specific cell types have to be carefully excluded before their use.
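Because such dyes are partitioned roughly equally between daughter cells, their signal halves at each symmetric division, which is where the "four to eight divisions" estimate comes from. A minimal sketch (intensities in arbitrary units; the 100-fold starting margin is illustrative):

```python
def divisions_until_lost(initial_intensity, detection_limit):
    """Number of symmetric divisions before a membrane dye signal,
    halved at each division, falls below the detection limit."""
    n = 0
    intensity = initial_intensity
    while intensity >= detection_limit:
        intensity /= 2.0
        n += 1
    return n

# A dye starting 100-fold above the detection limit survives about 7 divisions:
print(divisions_until_lost(100.0, 1.0))  # 7
```

Equivalently, the dye lasts roughly log2(initial/limit) divisions, so even a very bright label cannot track more than a handful of generations.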
Labeling cell nuclei can also facilitate manual and automated tracking of single cells. The DNA-staining agent Hoechst 33342 can be used; however, since it requires excitation in the near-UV, it should be avoided. Other commercially available dyes, such as SYTO dyes and DRAQ5, are preferable, but in our hands have also proven cytotoxic in primary skeletal and marrow cells (mesenchymal and hematopoietic stem/progenitor cells) and embryonic stem cells (unpublished observations). For these reasons, the use of nuclear fluorescent proteins is preferable.
It is also possible to take advantage of the characteristics of certain cell types to label and track them. For instance, fluorescent-label-conjugated plant lectins or acetylated low-density lipoproteins can be used to visualize endothelial cells and/or macrophages (Eilken et al., 2009). Furthermore, another useful dye for time-lapse microscopy is calcein acetoxymethyl ester (calcein-AM). This non-fluorescent membrane-permeable compound is modified in living cells by intracellular esterases, resulting in a green fluorescent product that becomes trapped in the cytoplasm. Calcein-AM is thus useful not only as a tracking dye but also to monitor cell viability during live imaging.
Finally, a simple, inexpensive and versatile way to detect cell surface proteins on live cells is to use antibodies, as in conventional flow cytometry or immunofluorescence staining (Eilken et al., 2009). However, because antibodies can either block or activate their targets, their use must be carefully controlled to ensure that they do not influence cellular behavior (Eilken et al., 2010).
Fluorescent proteins and fusions
Green fluorescent protein (GFP) from Aequorea victoria was first purified and characterized by Shimomura and colleagues (Shimomura et al., 1962), cloned some 30 years later by Prasher and colleagues (Prasher et al., 1992) and then expressed in heterologous organisms (Chalfie et al., 1994). In 1987, the laboratory of Martin Evans in Cambridge created the first genetically modified mouse line (Kuehn et al., 1987), leading to innumerable reporter mouse models that have been of great use in live-cell imaging. The derivation of GFP variants of different colors (Ai et al., 2007; Cubitt et al., 1995; Heim and Tsien, 1996; Zhang et al., 2002) allows simultaneous labeling of proteins, cells and even whole organisms with genetically encoded fluorescent tags. Most of these proteins have been extensively modified using targeted mutagenesis to optimize their color, brightness, maturation temperature and speed, and pH sensitivity, among other properties (Kremers et al., 2011). A recent review (Newman et al., 2011) provides a non-exhaustive list of 76 fluorescent proteins covering the entire visible spectrum (near-UV, cyan, green, yellow, orange, red and far-red). Photoconvertible and photoactivatable fluorescent proteins are also available (Bancaud et al., 2010). An exhaustive discussion of all these fluorescent proteins and their properties is beyond the scope of this Commentary, but excellent recent reviews exist on the subject (see the above-mentioned reviews and Lippincott-Schwartz and Patterson, 2003; Müller-Taubenberger and Anderson, 2007; Zhang et al., 2002). We will here discuss a number of ways in which fluorescent proteins can be used in time-lapse microscopy.
The choice of an appropriate fluorescent protein usually depends primarily on its excitation and emission spectra, but other important parameters include its brightness (molar extinction coefficient and quantum yield; the efficiency of a fluorophore to absorb and emit light, respectively), half-life, sensitivity to photobleaching, maturation speed, tendency to form aggregates (which might increase cytotoxicity) and usefulness in fusion proteins (whether it renders its fusion partner non-functional, affects its subcellular localization or modifies its half-life). There are now a wide variety of mouse lines available that express various fluorescent proteins either ubiquitously or in specific cell types. Alternatively, fluorescent proteins can be delivered through, for example, viral transduction. These genetically encoded labels are inherited by all progeny and are thus superior to vital dyes for long-term imaging of proliferating cells.
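Intrinsic brightness, as mentioned above, is simply the product of the molar extinction coefficient and the quantum yield, and is often quoted relative to EGFP. A sketch using commonly cited approximate literature values (EGFP: ~56,000 M⁻¹cm⁻¹, QY ~0.60; mCherry: ~72,000 M⁻¹cm⁻¹, QY ~0.22 — treat these as illustrative, since published values vary):

```python
def relative_brightness(extinction_M_cm, quantum_yield,
                        ref_extinction=56000, ref_qy=0.60):
    """Intrinsic brightness = molar extinction coefficient x quantum yield,
    expressed relative to commonly cited EGFP values."""
    return (extinction_M_cm * quantum_yield) / (ref_extinction * ref_qy)

# mCherry: high extinction but low quantum yield -> roughly half of EGFP's brightness
print(round(relative_brightness(72000, 0.22), 2))  # ~0.47
```

Note that this captures only intrinsic brightness; observed brightness in cells also depends on maturation efficiency, expression level and the match between the protein's spectra and the filter set.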
One important caveat in the use of the currently existing fluorescent proteins is their broad excitation and emission spectra. This usually causes significant bleed-through of fluorescent signals in channels that could be used for additional reporters or dyes. Although combinations of five or six fluorescent proteins are possible (e.g. Kremers et al., 2011), this is difficult to apply to long-term imaging (and remaining bleed-through can require spectral linear unmixing). To simultaneously image multiple reporters in single cells using a more conventional technology, fluorescent proteins can be targeted to distinct subcellular localizations. For instance, fluorescent proteins can be targeted to the nucleus, nuclear membrane, Golgi, mitochondria and chromatin by the addition of peptide sequences (see Table 1; Fig. 2). This allows the experimental discrimination between different reporters on the basis of their location rather than their spectral properties. This approach can be useful to minimize phototoxicity, since two reporters can be viewed simultaneously in the same channel, thus reducing repeated light exposure.
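Where bleed-through cannot be avoided, spectral linear unmixing recovers per-fluorophore abundances by inverting a mixing matrix measured from single-labeled control samples. A minimal sketch for a hypothetical two-channel, two-fluorophore system (the matrix entries are illustrative, not real filter-set values):

```python
import numpy as np

# Hypothetical mixing matrix: columns = fluorophores, rows = detection channels.
# Entry [i, j] is the fraction of fluorophore j's emission collected in channel i,
# measured from single-labeled control samples.
M = np.array([[0.9, 0.2],    # channel 1: mostly fluorophore A, some B bleed-through
              [0.1, 0.8]])   # channel 2: mostly fluorophore B, some A bleed-through

def unmix(measured, mixing=M):
    """Least-squares estimate of true fluorophore abundances from
    background-subtracted per-channel intensities."""
    abundances, *_ = np.linalg.lstsq(mixing, measured, rcond=None)
    return abundances

# A pixel containing 100 units of A and 20 units of B is measured with bleed-through...
measured = M @ np.array([100.0, 20.0])
# ...and unmixing recovers the original abundances:
print(unmix(measured))  # ~[100., 20.]
```

With more channels than fluorophores the same least-squares call gives an overdetermined (noise-averaging) solution, which is why unmixing is usually done on multi-channel acquisitions.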
Many cellular processes rely on intracellular biochemical changes (for instance, kinase activity during signaling pathway activation) that cannot be monitored using simple fluorescent protein reporters. Several biosensors have been created to probe these processes. Biosensors use fluorescent proteins and typically fall into three categories: those based on intensity, localization (or translocation) or Förster resonance energy transfer (FRET). Biosensors already exist that monitor a number of cellular processes: promoter or protein dynamics; lipid dynamics [e.g. PtdIns(3,4,5)P3, diacylglycerol (DAG) and phosphatidylserine]; halide, zinc and Ca2+ ion fluctuations; intracellular pH; redox status; cAMP and cGMP concentrations; nitric oxide (NO) production; presence of reactive oxygen species; ATP concentration and ATP:ADP ratio; glutamate and sugar concentrations; membrane potential; small G-protein activation; kinase activation and activity; phosphatase and protease activity; O-glycosylation; histone acetylation and methylation; cell cycle status; actin cytoskeleton dynamics; and mechanical strain. An exhaustive description of all these biosensors is beyond the scope of this Commentary, but the interested reader should consult recent reviews (e.g. Newman et al., 2011; Endele and Schroeder, 2012; Aoki et al., 2012, and references therein). A potential pitfall associated with the use of some biosensors is that they can saturate binding sites for their endogenous targets and thus affect the cellular processes under study (Haugh, 2012). In addition, FRET-based biosensors are difficult to deliver to primary cells by lentiviral transduction owing to the propensity for recombination between the two FRET partners (typically CFP and YFP) during reverse transcription (Aoki et al., 2012).
Once a biological question that requires long-term continuous imaging has been posed and the relevant cells, reporters and cell culture methods have been identified, the next obvious requirement is the choice of the appropriate image acquisition hardware. Frigault et al. provide a flow chart to help choose the appropriate imaging modality for various experimental needs (Frigault et al., 2009), and this will not be discussed here. Similarly, we have already discussed various incubation apparatuses to keep cells alive on the imaging setup. In the next section we will focus on microscope components and computer requirements for long-term continuous imaging.
Imaging of your favorite cells – hardware requirements
Various techniques have been described for in vivo three-dimensional imaging of living cells, including magnetic resonance imaging, confocal and multiphoton microscopy and optoacoustic tomography. However, these techniques are currently all limited either by their poor sensitivity, lack of useful contrast agents, low temporal or spatial resolution, insufficient tissue penetration, or the difficulty in keeping animals alive and immobilized long-term on the imaging setup (Schroeder, 2008; Schroeder, 2011). Therefore, short-term imaging over a few hours and in small volumes is possible, but for long-term continuous imaging of single cells, in vitro techniques are still required. Confocal and multiphoton microscopes can be used for in vitro imaging, but they are expensive and generally more phototoxic than epifluorescence microscopes, and should be used only when three-dimensional information is absolutely required. In this case, multiphoton excitation, large pinhole sizes, low laser powers, low resolution and high-speed resonance scanners should be used. As an alternative for imaging thick samples or fast-moving molecules, single plane illumination microscopy (SPIM) can also be used (Huisken et al., 2004). In many cases, however, wide-field epifluorescence microscopy is sufficient. Current limitations of these techniques and some proposed solutions are summarized in Fig. 3.
As a general rule, the light path of the microscope should be kept as simple as possible. If possible, differential interference contrast (DIC) prisms, phase-contrast objectives, mirrors, beamsplitters, Optovar lenses and superfluous filters should be removed. Objectives with low magnification dramatically increase the observed area and can help increase temporal resolution. Objectives with high numerical aperture should be used for weak fluorescent signals. High-end objectives are chromatically and spherically corrected for up to four wavelengths and are also plan-corrected (ensuring the whole field of view is in focus at the same time), in addition to having large numerical apertures. However, highly corrected objectives usually favor optical correction at the expense of light transmission (LoBiondo et al., 2011). Thus, objectives that are specifically designed for high fluorescence transmission, which allow shorter exposure times and thus reduced phototoxicity, are often preferable and cheaper, in particular when imaging only one or two fluorescent channels.
For illumination, many options exist and choosing the right light source will affect experimental outcome. For fluorescence, the most widely used light sources are mercury and metal-halide arc lamps. These lamps have a similar spectral output, with emission peaks at 365, 405, 436, 546 and 579 nm, but the energy output of metal-halide lamps is slightly lower than that of mercury lamps. However, metal-halide bulbs usually last longer than mercury bulbs. Xenon lamps are also sometimes used; their spectral characteristics are more homogeneous across the visible spectrum, but their relative energy output is much lower than that of mercury or metal-halide lamps (Herman, 2002). Several manufacturers now offer LED illumination devices. LEDs are smaller, produce less heat and can rapidly be switched on and off, thus eliminating the need for shutters, which are slow and extremely prone to mechanical malfunction (note that temporal resolution is usually limited by mechanical parts such as shutters, filter wheels and the stage), and allowing faster image acquisition and reduced phototoxicity. However, LED systems from various manufacturers are not comparable. In addition, they currently all suffer from ‘teething’ problems and often require self-developed software for useful hardware control. Novel LEDs with improved brightness and spectral properties are beginning to appear on the market and might make LED illumination the best choice for epifluorescent imaging in the near future, but currently they require constant re-evaluation.
Although of obvious importance, choosing optimal filter sets for the fluorochromes used is often neglected. The choice of optimal excitation and emission filters and dichroic mirror combinations is crucial for improved signal detection and ensuing decreased exposure and phototoxicity. For single-color imaging, short- or long-pass emission filters can be preferable. For multiple colors, separate filters optimal for each fluorochrome should be used and images acquired sequentially. If the signals are strong enough, multi-bandpass filters can be used to speed up image acquisition.
As for objectives, illumination sources and filters, there is no single camera that fulfills all experimental requirements, and camera types need to be carefully assessed for individual needs. Modern microscope cameras are solid-state sensors that are either complementary metal-oxide-semiconductor (CMOS) sensors or charge-coupled devices (CCDs). CCD cameras (including intensified, electron-bombardment and electron-multiplication variants; I-CCD, EB-CCD and EM-CCD, respectively) are the most widely used because, until recently, they were more sensitive and had the lowest noise levels. However, recent scientific-grade CMOS cameras are now available that out-compete CCD-type cameras. Important properties for long-term time-lapse microscopy include exposure mode (for CCD cameras), spectral sensitivity or quantum efficiency, noise, chip size, pixel or photodiode size (and, combining the latter two, total number of pixels), full-well capacity (for CCD cameras, if quantification of fluorescence is required), dynamic range, bit depth and binning capacity. A detailed description of the various types of cameras and their properties can be found in recent reviews (Joubert and Sharma, 2011; Salmon and Waters, 2011).
Color cameras typically have lower resolution and sensitivity owing to their design, so unless multiple wavelengths must be viewed at the same time, monochrome cameras should be used and channel images taken sequentially (or simultaneously using multiple cameras on microscopes with beam-splitter light paths). The quantum efficiency of a camera refers to the percentage of photons that are detected (transformed into photoelectrons) by a single photodiode and is wavelength dependent. Back-thinned CCD or CMOS chips have higher quantum efficiencies because the image can be focused on the chip on the opposite side from the electronics, but the thinning process is expensive and can cause variability across the chip. All cameras generate background noise, which can come from, for example, thermal disturbances on the chip and can be lowered by cooling the chip. The chip size determines the area of the field of view that is imaged by the chip. A larger chip thus means that fewer positions need to be imaged to cover a whole sample, allowing higher data throughput and temporal resolution.
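The interplay of the camera properties listed above can be made concrete with a back-of-the-envelope signal-to-noise calculation. The Python sketch below uses purely illustrative values for quantum efficiency, read noise and dark current (`snr` is a hypothetical helper, not any vendor's API) and shows why binning helps: the signal of several pixels is summed before the read noise is paid once.

```python
import math

def snr(photons, qe=0.7, read_noise_e=6.0, dark_e=0.5):
    """Estimate the signal-to-noise ratio of a single exposure.

    photons      -- photons arriving at one pixel during the exposure
    qe           -- quantum efficiency (fraction of photons converted
                    to photoelectrons); illustrative value
    read_noise_e -- read noise in electrons RMS; illustrative value
    dark_e       -- accumulated dark-current electrons; illustrative value
    """
    signal = photons * qe                 # detected photoelectrons
    shot_noise_sq = signal                # Poisson (shot) noise variance
    noise = math.sqrt(shot_noise_sq + dark_e + read_noise_e ** 2)
    return signal / noise

# 2x2 binning sums the signal of four pixels before a single readout,
# so the signal quadruples while read noise is incurred only once:
unbinned = snr(400)
binned = snr(4 * 400)
```

Under these illustrative numbers, 2×2 binning more than doubles the signal-to-noise ratio at the cost of spatial resolution, which is why binning is often the first parameter adjusted when exposure times must be kept short.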
Automatic hardware focus systems have recently reached sufficiently robust technological standards and can be helpful if the imaging setup is in a room with frequent temperature variations.
Acquisition and analysis of imaging data – software requirements
All current computer programs for image acquisition and analysis have their intrinsic strengths and weaknesses (Box 2 and Eliceiri et al., 2012; Schroeder, 2011; Walter et al., 2010), and these often only become apparent after using them under experimental conditions. It is therefore crucial to test them for the specific task at hand before (financially) committing to one solution.
Most commercial imaging hardware setups are sold with proprietary software that can be used for simple time-lapse experiments. These include AxioVision (Zeiss), NIS-Elements (Nikon) and cellSens (Olympus). Other commercial packages include, among many others, MetaMorph (Molecular Devices) and Volocity (Perkin-Elmer). The advantage of these packages is that they are designed for the specific hardware used and thus should not have any compatibility issues with hardware components. Their main disadvantage is their lack of flexibility: their use greatly limits custom changes to the imaging hardware, such as the use of novel illumination sources. Laboratories that self-assemble various hardware components for specific needs and frequently change their experimental designs are therefore limited by these packages, and researchers typically have to write their own code to fulfill their needs.
One open-source package that can be used to control most microscopy hardware components is μManager (Edelstein et al., 2010), which offers greater flexibility than typical commercial software packages. It runs as an ImageJ plug-in (see below) that can perform most basic operations and is compatible with most commercial microscopes, cameras, stages, shutters, filter wheels and illumination sources. An open-source package that uses drivers from μManager for more complex acquisition protocols is YouScope (Lang et al., 2012).
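At its core, any acquisition protocol, whether written against μManager or a vendor package, is a multi-position, multi-channel loop repeated at a fixed time interval. The Python sketch below is deliberately hardware-agnostic: `move_to`, `set_channel` and `snap` are hypothetical callables standing in for whatever hardware-control layer is actually used, not a real API.

```python
import time

def run_timelapse(positions, channels, n_timepoints, interval_s,
                  move_to, set_channel, snap):
    """Generic multi-position, multi-channel time-lapse loop.

    move_to, set_channel and snap are hypothetical placeholders for
    the stage, filter/illumination and camera control of the actual
    hardware layer. Returns images keyed by (timepoint, position,
    channel).
    """
    images = {}
    for t in range(n_timepoints):
        start = time.monotonic()
        for p in positions:
            move_to(p)                      # drive stage to next position
            for c in channels:
                set_channel(c)              # switch filters/illumination
                images[(t, p, c)] = snap()  # acquire one frame
        # sleep only for the remainder of the interval
        elapsed = time.monotonic() - start
        if t < n_timepoints - 1 and elapsed < interval_s:
            time.sleep(interval_s - elapsed)
    return images
```

Using a monotonic clock and sleeping only for the remainder of the interval keeps timepoints evenly spaced even when stage movement and exposure take variable amounts of time.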
Time-lapse imaging typically generates massive amounts of data. Although the whole human genome can be encoded in a few gigabytes of data, even a single imaging experiment can yield a thousand- to a million-fold more data. Adequate data storage, backup and access systems typically overwhelm current IT service departments at most institutions and dedicated solutions have to be found that enable not only data acquisition, but also their timely analysis.
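The scale of the storage problem is easy to estimate. The short Python sketch below computes the uncompressed size of a time-lapse experiment; all parameter values are illustrative.

```python
def experiment_size_gb(positions, channels, timepoints,
                       width=2048, height=2048, bytes_per_pixel=2):
    """Uncompressed size of a time-lapse experiment in gigabytes.

    Defaults assume a 4-megapixel 16-bit camera; all parameters are
    illustrative.
    """
    frame_bytes = width * height * bytes_per_pixel   # one image
    total = frame_bytes * positions * channels * timepoints
    return total / 1e9

# e.g. 100 stage positions, 3 channels, one image every 3 minutes
# for 5 days (20 acquisitions per hour):
size = experiment_size_gb(100, 3, timepoints=5 * 24 * 20)
```

With these assumptions, a single 5-day experiment already produces roughly 6 terabytes of raw images, which makes the demands on storage, backup and access infrastructure concrete.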
Image analysis, quantification and display
After acquisition, the (often enormous amounts of) image data have to be analyzed in a statistically meaningful way. Although obvious, the time required for this step is usually drastically underestimated.
For general image analysis functions such as visualization, co-localization and basic quantification, many commercial programs are available, including Imaris (Bitplane), Volocity (Perkin-Elmer), ZEN (Zeiss), MetaMorph (Molecular Devices) and Amira (VSG). There is also an increasing number of open-source software applications, each with its own particular strengths. Again, as many packages have been developed to answer a specific need, there is no software that will be perfect for every researcher, and even the smallest novel requirements will typically require programming skills.
One of the first open-source packages for the analysis of microscopy data was ImageJ, which was created over 25 years ago at the National Institutes of Health (NIH) (Schneider et al., 2012). The ImageJ development team now numbers over 500 collaborators and the available plug-ins exceed this number. However, ImageJ is also a victim of its own success: so many plug-ins are available (many of them redundant) that it becomes difficult to choose the appropriate one. Furthermore, many laboratories write their own plug-ins, which are not necessarily available when the software is installed, although packages such as Fiji (Schindelin et al., 2012) try to circumvent this issue by providing a standard set of plug-ins that is automatically updated from a central wiki-based website. Finally, ImageJ plug-ins are not always written in a comprehensible manner that allows other researchers to modify them when new technology becomes available. Nevertheless, many recent software packages still use ImageJ or its plug-ins to perform basic analysis tasks. Some, such as Icy (de Chaumont et al., 2012), also integrate μManager to allow control of motorized microscope components.
Other packages that can be useful for image analysis include (not exhaustively) BioImageXD (Kankaanpää et al., 2012), OMERO, Vaa3D, CellProfiler, Fluorender, ImageSurfer, 3D Slicer, Image Slicer, Reconstruct, OsiriX, IMOD, SIMI BioCell (Ravin et al., 2008), TimeLapseAnalyzer (Huth et al., 2011), TLM-Tracker (Klein et al., 2012) and TTT (Eilken et al., 2009; Rieger et al., 2009). All of these are specialized to accomplish specific tasks in image analysis and are optimized for images with very specific properties. Some, such as CellProfiler and Icy, allow a user without programming skills to create pipelines for batch-processing of images. However, none of these packages provides all the necessary tools in one package to address all requirements for generating single-cell data from continuous long-term time-lapse experiments.
A typical time-lapse imaging experiment that requires data from single cells usually involves single-cell tracking, signal quantification over time, morphometric measurements, genealogic tree generation (with the possibility of annotation for signal intensity), gating, statistical analysis, display and visualization flexibility, as well as annotation of metadata. Most laboratories today use multiple software packages but, owing to a lack of common interfaces, this usually requires tedious data conversions or even programming skills. For automated cell tracking and signal quantification, cell segmentation and auto-tracking algorithms are required. Although many of the software packages mentioned above offer this functionality, their everyday use is typically highly limited: under conditions where cells migrate or divide rapidly, or touch each other, automated tracking algorithms become too error-prone. Manual tracking, although time consuming, is still the only reliable option in most long-term imaging experiments.
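Why automated tracking fails when cells touch or divide can be illustrated with the simplest possible tracker: greedy nearest-neighbour linking of segmented cell centroids between consecutive frames. The Python sketch below is purely illustrative and is not the algorithm used by any of the packages named above.

```python
import math

def link_frames(prev_centroids, curr_centroids, max_dist):
    """Greedy nearest-neighbour linking of cell centroids between two
    consecutive frames -- the simplest form of auto-tracking.

    Returns {index in prev: index in curr}. Cells that find no match
    within max_dist are silently dropped, which is exactly where
    manual curation becomes necessary.
    """
    links, taken = {}, set()
    for i, (x0, y0) in enumerate(prev_centroids):
        best, best_d = None, max_dist
        for j, (x1, y1) in enumerate(curr_centroids):
            if j in taken:
                continue
            d = math.hypot(x1 - x0, y1 - y0)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            taken.add(best)
    return links
```

As soon as two cells come closer than the search radius, or merge into a single segmented object, this greedy assignment silently drops or swaps tracks; real trackers use more sophisticated cost functions, but the same failure modes persist and motivate manual verification.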
Presenting data derived from time-lapse imaging is also problematic as most scientific publications display still images in figures. Although journals now frequently publish movies as supplementary material, most readers still only carefully inspect the main figures in a publication. Displaying multidimensional data (such as multicolor staining in flow cytometry or immunofluorescence) can be problematic and, when a temporal dimension is added, the complexity of the data makes it difficult to analyze and display it in a reader-friendly manner.
Taken together, there is an increasing effort in the imaging community to develop integrated open-source, flexible and multipurpose packages for image acquisition, storage, management, analysis and visualization. This aspect probably will remain the major bottleneck in deriving meaningful data from multiparametric long-term continuous single-cell time-lapse microscopy for some time (Cardona and Tomancak, 2012; Carpenter et al., 2012; Myers, 2012).
Conclusions and perspectives
It has now been over 120 years since Étienne-Jules Marey and colleagues invented the first time-lapse apparatus. These pioneers were already addressing the technical hurdles we still face – sample viability, system stability, optimization of illumination and contrast. With the incremental development of microscopy techniques, cell purification and culture methods, genetic manipulation of whole organisms, protein engineering and computer sciences, time-lapse microscopy has now reached a point where it can become a standard approach to answer long-standing biological questions. However, despite the evolution of techniques and hardware, we still face some of the same technical hurdles as Marey and colleagues did more than a century ago. Perhaps most importantly, the rapid evolution of hardware components in the past decades allows a single laboratory to generate terabytes of data per day or more. However, we currently lack integrated bioinformatics tools that allow us to batch-process and analyze all of these data in a user-friendly and meaningful way. Although several software packages exist to perform basic analysis of microscopy data, none of them integrates all required aspects, such as time-dependent single-cell segmentation, morphometric analysis, reporter signal quantification, genealogic tree generation, statistical analysis, display and visualization options, and modeling, as well as the precise annotation that can be used for archiving and searching previous datasets. In order for time-lapse microscopy to reach its full potential, these tools will have to be generated through a close collaboration between biologists (the end users) and computer scientists. In some cases, it will even be necessary that the biologist and the computer scientist are the same person. Nevertheless, the field has reached an exciting point, as every aspect of this technology is being rapidly developed.
It should, however, be remembered that time-lapse imaging will remain a multidisciplinary technique that requires either extensive knowledge of all the fields it covers, or at least extensive collaboration between various groups in order to be successful.
The work of our laboratory is supported by the German research council (DFG). D.L.C. is a Canadian Institutes of Health Research postdoctoral fellow.
- © 2013. Published by The Company of Biologists Ltd