Journal of Cell Science partnership with Dryad


Models of signalling networks – what cell biologists can gain from them and give to them
Kevin A. Janes, Douglas A. Lauffenburger


Computational models of cell signalling are perceived by many biologists to be prohibitively complicated. Why do math when you can simply do another experiment? Here, we explain how conceptual models, which have been formulated mathematically, have provided insights that directly advance experimental cell biology. In the past several years, models have influenced the way we talk about signalling networks, how we monitor them, and what we conclude when we perturb them. These insights required wet-lab experiments but would not have arisen without explicit computational modelling and quantitative analysis. Today, the best modellers are cross-trained investigators in experimental biology who work closely with collaborators but also undertake experimental work in their own laboratories. Biologists would benefit by becoming conversant in core principles of modelling in order to identify when a computational model could be a useful complement to their experiments. Although the mathematical foundations of a model are useful to appreciate its strengths and weaknesses, they are not required to test or generate a worthwhile biological hypothesis computationally.


A decade ago, we welcomed the first signalling-network models that were strongly grounded in wet-lab experiments (Hoffmann et al., 2002; Schoeberl et al., 2002). Excellent models now exist for many canonical signalling circuits in a variety of biological settings. However, such models should not be viewed as an end product but rather as a tool for addressing systems-level challenges in cell biology (Janes and Lauffenburger, 2006). Have models fulfilled this role and have they provided biological insights that experimentalists should bother to care about? Here, we answer ‘Yes’ to both questions and predict that signalling-network models will soon become indispensable for modern research in the field. Fortunately, the current wealth of data-intensive methods has primed today's cell biologists to embrace modelling, even though they may lack formal training in the underlying mathematics.

In this Opinion, we propose that empirical cell biologists have much to gain from signalling-network models, and much to give by ensuring that these models stay in touch with reality. We begin with a brief primer on how computational models can be critically assessed from a biological standpoint. Then, we walk through a series of important insights about cell signalling that have stemmed from computational-systems modelling. We conclude with future perspectives on where signalling-network models are just beginning to have an impact and will continue to do so in the coming years.

Evaluating computational models non-computationally

Quantitative models have a rich history in the physical-chemical sciences and engineering, but such methods are underemphasised in the life sciences (Bialek and Botstein, 2004). Biology may not have quite the same theoretical foundations of chemistry or physics, but that does not necessarily preclude useful modelling. For example, engineers routinely build models in the face of unknown variables and incomplete information, using the models to help interpret their measurements and manipulations. Biologists have much in common with this perspective and could, thus, find clever ways to incorporate modelling into studies of signal transduction.

The practical hurdle to modelling for most biologists is the mathematics of model formulation and the associated computational algorithms used for analysis. Although this fundamental knowledge is good to have, it is not absolutely required to evaluate a computational model of cell signalling. Indeed, some of the most influential signalling-network models have come from established cell-biology labs (Albeck et al., 2008; Lee et al., 2003; Smith et al., 2002). A good modelling paper should read no differently than a good experimental paper – the tools change, but the spirit of the science should remain the same. There are simply a few key points to keep in mind when assessing quality and importance.

First is that computational models should be useful simplifications. It is said that biologists and theoreticians speak of two different things when using the term ‘model’ (Di Ventura et al., 2006; Endy and Brent, 2001). However, both are referring to useful simplifications of complexity; they just achieve the simplification in different ways. There are common and straightforward ways to evaluate the usefulness of a computational model and gauge whether it is providing a simplification that is biologically meaningful. A fair question to ask, for example, is whether a simplification is needed at all. A pathway or network may be so poorly defined or actively developing that a detailed computational model is premature. Alternatively, if the core signalling pathway is a processive enzymatic cascade without feedback loops, it is unlikely that a computational model of the pathway will reveal anything new (Huang and Ferrell, 1996). The converse is that simple ‘toy models’, which explore possible connectivities with arbitrary parameters, can be extremely useful when cleverly employed (Box 1). For instance, exhaustive computational searches through two- and three-protein toy models revealed that only a handful of signalling configurations can give rise to systems-level properties, such as perfect adaptation, switch-like behaviour or cell polarization (Chau et al., 2012; Ma et al., 2009; Shah and Sarkar, 2011). Computational modelling can thus provide a useful and efficient way to define the input–output properties of small signalling circuits (Brandman et al., 2005; Mangan and Alon, 2003; Tsai et al., 2008).

Another benefit of many computational models lies in their ability to make predictions. However, biologists should be aware that not all comparisons between model and experiment are a priori predictions as they might expect. At the least-stringent level, models can be ‘tuned’ through their parameters to fit experimental measurements as closely as possible (Fig. 1A). This can be useful for training model parameters that are otherwise difficult to measure; however, the comparison is not a prediction at all but rather a model calibration. Such calibrations should be clearly indicated in the figure caption, but one commonly finds this information missing, which gives the false impression that the model is making new predictions.

Fig. 1.

Not all model-measurement comparisons are created equal. (A) A starting model is first refined through the process of calibration. Here, model parameters are trained so that the model matches the calibration data (black circles) as closely as possible. There are a variety of parameter-estimation approaches that can be used for model training at this step. Regardless of the training method, model estimates of calibration data are not predictions. (B) Some data can be withheld during calibration (purple circle) and then reintroduced afterwards to obtain crossvalidated predictions. (C) Stringent model predictions relate to new data (red shapes) that are outside of the original training set.
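The calibration step in Fig. 1A can be made concrete with a minimal sketch. The one-parameter decay model, the "true" parameter value and the grid-search fitting procedure below are all hypothetical illustrations, not any published calibration method; the point is only that the fitted curve matches the calibration data by construction, so the agreement is not a prediction.

```python
import numpy as np

# Hypothetical one-parameter model: a signal that decays after stimulation,
# y(t) = exp(-k * t). The rate constant k is unknown and must be calibrated.
def model(t, k):
    return np.exp(-k * t)

# Synthetic 'calibration data' generated with a true k of 0.5 plus noise
# (in a real study these would be experimental measurements).
rng = np.random.default_rng(0)
t_obs = np.linspace(0, 10, 11)
y_obs = model(t_obs, 0.5) + rng.normal(0, 0.02, t_obs.size)

# Calibration: scan candidate parameter values and keep the one that
# minimises the sum of squared residuals against the calibration data.
candidates = np.linspace(0.01, 2.0, 200)
sse = [np.sum((model(t_obs, k) - y_obs) ** 2) for k in candidates]
k_fit = candidates[int(np.argmin(sse))]
print(f"calibrated k = {k_fit:.2f}")  # should land close to the true value
```

Any parameter-estimation routine could replace the grid search here; the key distinction is that the model's agreement with `y_obs` after fitting says nothing about its predictive power.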

If a training dataset includes multiple experimental conditions, one can achieve a type of prediction with the training data through a process called crossvalidation (Fig. 1B). During crossvalidation, one or more experimental conditions are withheld from the training set and the model is calibrated with the remaining data. Then, the calibrated model is challenged to predict the withheld condition(s), and the withholding-training-prediction process is iterated through the entire training set. Crossvalidated predictions are legitimate, but they can ultimately be weak predictions if, for example, the training set comprises seven different doses of the same cytokine (meaning that the crossvalidation model is calibrated on information from the other six doses). The most-stringent predictions are obtained through conditions that are clearly different from the training set (Fig. 1C), such as perturbations to the network or timed addition of stimuli (Chatterjee et al., 2010; Janes et al., 2008; Lee et al., 2012; Miller-Jensen et al., 2007; Schoeberl et al., 2009). Experimentalists are well-suited to assess whether data are explicitly in the model training, implied in the training or outside the training entirely, even if they do not understand how the training itself is performed.
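The leave-one-condition-out procedure, and why it yields weak predictions within a dose series, can be sketched as follows. The linear log-dose model and the synthetic seven-dose dataset are assumptions for illustration only.

```python
import numpy as np

# Hypothetical training set: seven doses of one cytokine (log_dose) and a
# measured downstream response that is roughly linear in log-dose.
rng = np.random.default_rng(1)
log_dose = np.linspace(0, 3, 7)
response = 2.0 * log_dose + 1.0 + rng.normal(0, 0.1, 7)

# Leave-one-condition-out crossvalidation: withhold each condition in turn,
# calibrate a line on the remaining six, then predict the withheld point.
predictions = np.empty_like(response)
for i in range(len(log_dose)):
    keep = np.arange(len(log_dose)) != i
    slope, intercept = np.polyfit(log_dose[keep], response[keep], 1)
    predictions[i] = slope * log_dose[i] + intercept

# Each prediction was made without seeing the withheld condition, but the
# neighbouring doses carry nearly the same information -- which is why
# crossvalidated predictions within a dose series are relatively weak tests.
rmse = np.sqrt(np.mean((predictions - response) ** 2))
print(f"crossvalidated RMSE = {rmse:.3f}")
```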

The last consideration is that there is no ‘one size fits all’ modelling approach, which can universally accommodate the breadth of applications related to cell signalling. Unlike technology platforms such as next-generation sequencing, there is no direct competition among computational methods, and a single dominant modelling approach will never emerge. The diversity of models can be daunting because it means that different mathematics and algorithms are involved for each application. On the plus side, it focuses the discussion on whether a modelling approach is best for a specific application rather than whether it is the best overall (Janes and Lauffenburger, 2006). This should favour biologists, who are poised to determine whether the application is compelling and in need of a useful simplification as described above.

Wiring diagrams matter more than the gauge of the wire

An important class of network models involves those that encode the elementary chemical reactions and transport processes underlying signal transduction (Aldridge et al., 2006). These chemical-kinetic models (also called physicochemical models, mass-action models or mechanistic models) are usually composed of dozens to hundreds of rate equations that describe how signalling molecules change as a function of others (Box 1). Each rate equation requires several rate parameters (also called kinetic constants), which capture how avidly an enzyme acts upon its substrate, how fast a protein moves from one location to another in the cell, and how quickly a protein is synthesised or degraded. Some rate parameters can be gleaned from earlier biochemical studies that quantified isolated reactions in a test tube. However, chemical-kinetic models will also have a substantial number of rate parameters that must be calibrated to a particular training dataset.

To a cell biologist, this may look like cheating. For example, you cannot take a microscope image and scale different regions unevenly to get the picture that you want (Rossner and Yamada, 2004). However, the analogy is flawed, because it assumes that virtually any picture (i.e. model output) can be achieved with the free parameters. We now know that this assumption is generally untrue, as most individual parameters are not leveraged to define the computed network response (Gutenkunst et al., 2007). Indeed, one routinely finds that most rate parameters can change over several orders of magnitude without substantially affecting the model output (Bentele et al., 2004; Chen et al., 2009; Nakakuki et al., 2010).
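Both ideas, rate equations and parameter insensitivity, can be illustrated with a toy two-tier cascade. This is a minimal sketch under assumed mass-action kinetics, not any of the cited published models; the specific rate constants and the saturated regime are chosen for illustration.

```python
# Minimal chemical-kinetic sketch: an upstream kinase A activates substrate B.
#   dA/dt = k_act*(1 - A) - k_deact*A     (activation vs deactivation of A)
#   dB/dt = k_cat*A*(1 - B) - k_off*B     (A-driven phosphorylation of B)
# A and B are fractions of each protein in the active state.
def simulate(k_act=1.0, k_deact=0.01, k_cat=1.0, k_off=0.1,
             dt=0.01, t_end=50.0):
    A = B = 0.0
    for _ in range(int(t_end / dt)):   # simple explicit Euler integration
        dA = k_act * (1 - A) - k_deact * A
        dB = k_cat * A * (1 - B) - k_off * B
        A += dt * dA
        B += dt * dB
    return B  # steady-state fraction of active B (the model 'output')

# 'Sloppiness' in miniature: vary one rate parameter over two orders of
# magnitude. In this saturated regime the output barely moves.
outputs = [simulate(k_deact=k) for k in (0.001, 0.01, 0.1)]
print([round(b, 3) for b in outputs])
```

The output is insensitive here because A stays nearly fully active across the whole scan; a parameter only gains leverage when it moves the system out of such a saturated regime, which is why sensitivity depends on context rather than on the parameter alone.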

If specific model parameters do not matter so much, then what does? Over the years, various chemical-kinetic models have converged upon a common answer: network wiring (Craciun et al., 2006; Feinberg, 1987; Feinberg, 1988). It turns out that the particular combination of feedbacks and crosstalk circuits profoundly influences the behaviour of a signalling network as a dynamic system (Altan-Bonnet and Germain, 2005; Lander et al., 2002; Santos et al., 2007). The systems-level functions of many biologically recurring network configurations have been studied in detail (Brandman et al., 2005; Chau et al., 2012; Mangan and Alon, 2003; Tsai et al., 2008). The thematic importance of wiring is now so recognised that it has been suggested as a tool for discriminating between competing models (Harrington et al., 2012; Kuepfer et al., 2007). Models have even proposed theoretical feedback regulators that have not yet been identified but must exist in order to reconcile the observed network dynamics with current knowledge (Nakakuki et al., 2010). The implication is that we should take great care in defining the core topology of a signalling network first; thereafter, we can simply home in on the handful of rate processes that exert the greatest leverage on system performance. Alternatively, there are other modelling formalisms that require only the topology and a qualitative sense of how proteins and pathways are logically related (Box 2).

Specific perturbations have complex consequences

Because of the network wirings of biology, there is really no such thing as a specific perturbation. Signalling networks are so interconnected that primary targets give rise to secondary effects and adaptation, which grow to dominate the system over time (Araujo et al., 2007; Fritsche-Guenther et al., 2011). This is terrible for biological intuition; for instance, inhibiting Raf or MEKs should never lead to hyperactivation of ERKs, but both do (Duncan et al., 2012; Hatzivassiliou et al., 2010; Poulikakos et al., 2010).

Despite the challenges, network-level experiments and modelling have made headway toward deciphering one type of adaptation: the secondary waves of autocrine signalling triggered by environmental stimuli. Models of the NF-κB pathway were used to show that the particular signalling dynamics induced by bacterial LPS are caused by autocrine synthesis and release of TNF (Covert et al., 2005; Werner et al., 2005). Modelling the host-cell response to virus infection also revealed a central role for NF-κB via autocrine TNF (Garmaroudi et al., 2010), indicating an innate anti-pathogen response. By statistically modelling a large TNF-signalling dataset in epithelial cells, we found that TNF sets off a contingent cascade of multiple autocrine factors, including TGFα and IL1-family ligands (Janes et al., 2006). Interestingly, although the individual TNF-induced factors are broadly conserved across epithelia, the interconnectedness and magnitude of autocrine signalling is highly lineage specific (Cosgrove et al., 2008; Janes et al., 2006). This suggests a mechanism whereby cell-specific responses to environmental cues are tuned by the precise signature of secondary autocrine factors (Amit et al., 2007; Miller-Jensen et al., 2007; Shvartsman et al., 2002a). The iterative, time-dependent, and spatial components of autocrine signalling have provided ample opportunities for network modelling and will continue to do so (Batsilas et al., 2003; Joslin et al., 2010; Shvartsman et al., 2002a; Shvartsman et al., 2002b).

Information flow in signalling networks – it is all relative

In metabolic networks, pathway activity can be gauged directly by material flux, whereby reactants are converted to products, which then become the reactants for the next biochemical conversion (Oberhardt et al., 2009). In signalling networks, however, the currency of information is not as obvious (Cheong et al., 2011; Toyoshima et al., 2012). Cells can, theoretically, perceive the absolute level of a post-translational modification, its stoichiometry with respect to the unmodified state, the change relative to baseline or, among other possibilities, the duration that a modification persists. Properly characterising information flow is important, as it may help to explain the surprising outcomes that result when signalling pathways are chronically disrupted by mutation (Berger et al., 2011; Soussi and Béroud, 2001). For example, inactivation of one copy of the gene encoding the tumour suppressor PTEN causes early prostate neoplasia, but loss of both copies drives senescence and suppresses tumorigenesis (Chen et al., 2005).

Early quantitative experiments suggested that biological functions were correlated more closely to relative fold changes in signalling activity than to absolute levels (Miller-Jensen et al., 2006; Sasagawa et al., 2005). Therefore, cells might rely on a pathway's minimum-to-maximum dynamic range to transmit information (Janes et al., 2008). The fold-change phenomenon was later studied more formally in a series of concurrent reports, which found this mode of signal processing in the Wnt–β-catenin and ERK2 pathways (Cohen-Saidon et al., 2009; Goentoro and Kirschner, 2009). An accompanying theoretical study also linked fold-change detection to a particular network wiring called ‘incoherent feed-forward loop’ (IFFL) (Goentoro et al., 2009). An IFFL is made up of a split pathway, with one branch activating and the other inhibiting a common downstream effector (Mangan and Alon, 2003). IFFL-containing network configurations can achieve perfect adaptation to a step-input stimulus (Box 1) (Ma et al., 2009). Goentoro and co-workers have shown that the extent of adaptation is a function of the relative step input rather than the absolute size of the step (Goentoro et al., 2009). IFFL motifs are abundant in gene-expression networks and occur in contexts such as Ras activation, suggesting that fold-change detection is a widely recurring theme (Ferrell, 2009; Milo et al., 2002). Collectively, these studies provide a dramatic simplification for large-scale data acquisition, because relative quantification of signalling is much more easily obtained than absolute quantification (Albeck et al., 2006; Bajikar and Janes, 2012). They also reinforce the added value of monitoring network responses to perturbations as opposed to simple measurements of the baseline signalling state (Irish et al., 2004; Irish et al., 2006; Janes et al., 2008).
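The scale invariance of the IFFL can be seen in a short simulation. The equations below are a standard textbook form of the motif (input u activates both a repressor x and the output y, with y production sensing u relative to x); they are a sketch of the concept, not the specific models in the cited papers.

```python
# Incoherent feed-forward loop (IFFL) sketch:
#   dx/dt = u - x        (repressor x tracks the input u)
#   dy/dt = u/x - y      (output y senses the input RELATIVE to the repressor)
# Before the step, the system sits at steady state x = u0, y = 1.
def peak_response(u0, fold, dt=0.001, t_end=20.0):
    u = u0 * fold              # step the input from u0 to fold*u0
    x, y = u0, 1.0             # pre-step steady state
    peak = y
    for _ in range(int(t_end / dt)):   # explicit Euler integration
        dx = u - x
        dy = u / x - y
        x += dt * dx
        y += dt * dy
        peak = max(peak, y)
    return peak

# The transient peak depends on the fold change, not the absolute input level,
# and the output adapts back to baseline afterwards.
print(round(peak_response(u0=1.0, fold=3.0), 3))
print(round(peak_response(u0=10.0, fold=3.0), 3))  # same peak despite 10x input
print(round(peak_response(u0=1.0, fold=1.0), 3))   # no fold change, no response
```

Rescaling both u and x by any constant leaves the equation for y unchanged, which is the algebraic reason a cell wired this way reads out fold changes rather than absolute levels.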

Synergy stops at pairs

Cells regularly receive multiple stimuli at the same time, providing an opportunity for synergistic signal processing when the right combinations come together. Mining for synergies exhaustively seems like an experimentally impractical challenge at first. To explore all possible combinations of 15 ligands, for example, would require 2^15 (equalling 32,768) experimental conditions, more conditions than genes in our genome (Fig. 2). Luckily, network-level modelling and experiments have combined to show that cells operate by a much simpler set of rules (Janes, 2010).

Fig. 2.

The combinatorial advantage of defining microenvironments through stimulus pairs. As the number of stimuli increases, the number of all possible combinations increases exponentially (red; note the logarithmic scale). By contrast, the number of stimulus pairs increases much less rapidly (green). For reference, the number of combinations is shown alongside the estimated number of posttranslational modifications (PTMs), mRNA species, genes and signalling genes in humans.

Synergy or antagonism between pairs of input stimuli is a recurring theme but one that is rare with respect to all possible pairwise combinations. Looking at cytokine secretion of macrophages treated with 22 different ligands, Natarajan and co-workers detected non-additive interactions in only ∼13% of the 231 possible stimulus pairs (Natarajan et al., 2006). Importantly, a number of independent studies have now shown that quantitative output synergies with higher-order input combinations are negligible (Chatterjee et al., 2010; Geva-Zatorsky et al., 2010; Hsueh et al., 2009). This finding is important, because it opens the door for ‘pairwise scanning’ of ligands to define cellular response capabilities, followed by linear or nonlinear modelling to predict the response to more-complicated inputs (Chatterjee et al., 2010; Geva-Zatorsky et al., 2010). Doing so allows for a dramatic reduction in the number of experimental conditions that need to be measured. In the 15-ligand example above, for instance, only ∼120 conditions (the single ligands plus all pairwise combinations) would need to be tested, with the remaining ∼32,500 inferred computationally (Fig. 2). Models of cell signalling can thus improve the efficiency of experimental designs when used prospectively before a study has started or while it is underway.
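The combinatorial bookkeeping behind pairwise scanning is simple to verify. This short calculation just restates the arithmetic in the text; it assumes, as above, that higher-order synergies are negligible so that singles and pairs suffice.

```python
from math import comb

# For n stimuli, exhaustive testing needs every subset of the stimuli, but
# pairwise scanning needs only the n single-ligand conditions plus the
# comb(n, 2) pairwise combinations.
def conditions(n):
    exhaustive = 2 ** n           # every possible combination of n stimuli
    pairwise = n + comb(n, 2)     # all singles plus all pairs
    return exhaustive, pairwise

print(conditions(15))   # -> (32768, 120): the 15-ligand example in the text
print(comb(22, 2))      # -> 231 stimulus pairs for the 22-ligand study
```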

Hidden dimensions in complex networks

Combining all of our knowledge about a signalling network yields a picture that seems irreducibly complex (Caron et al., 2010; Oda and Kitano, 2006; Oda et al., 2005). With ∼10^5 interactions among ∼10^4 proteins (Papin et al., 2005), it may seem remarkable that anything gets coordinated inside the cell. Nevertheless, several lines of evidence suggest that there are coherent threads of simplicity within these networks. For example, despite all the intricacies of autocrine signalling described earlier, the release of autocrine EGF-family ligands maps linearly to activation of Ras, phosphorylation of ERKs and the extent of proliferation (DeWitt et al., 2001; Joslin et al., 2010). One can search for simple input–output modules by taking this type of candidate approach, but the same goal can be achieved faster and more comprehensively by using statistical ‘data-driven’ models (Janes and Yaffe, 2006).

One common type of data-driven model identifies sets of measurements that are correlated and groups them together to identify combinations that accurately predict outputs of interest (Geladi and Kowalski, 1986). For signal transduction, these combinations point to ‘hidden dimensions’ within a network, where multiple signalling proteins may be coordinately regulated to execute a common function (Jensen and Janes, 2012). Such models have proved to be remarkably versatile for signalling networks, capturing adaptors, effectors, cell-fate control and cytokine-release profiles in different settings (Beyer and MacBeath, 2012; Cosgrove et al., 2010; Gordus et al., 2009; Janes et al., 2005; Janes et al., 2006; Janes et al., 2008; Kemp et al., 2007; Kumar et al., 2007a; Kumar et al., 2007b; Lau et al., 2011; Lee et al., 2012; Miller-Jensen et al., 2007; Tentner et al., 2012). Therefore, the question is no longer whether these model-based simplifications of signalling networks are effective but, rather, why they work so well as often as they do.

Recent theoretical work has suggested that the fundamental kinetics of cell signalling require only a few hidden dimensions to obtain a useful approximation, no matter how complicated the network (Dworkin et al., 2012). These hidden dimensions may derive from the vigorous degree of crosstalk that interconnects pathways, enabling a limited spectrum of measurements to contain surrogate information about the unmeasured network (Kirouac et al., 2012). For instance, one of our early models suggested a strong link between the stress kinase MK2 (also known as MAPKAPK2) and TNF-induced apoptosis (Janes et al., 2005). It was years later that we clarified the MK2 mechanism of action through posttranscriptional stabilization of TNF-induced IL1A, a pro-death autocrine cytokine that was uncovered after the original model had been built (Janes et al., 2006; Janes et al., 2008). More recently, the principle of hidden dimensions has been expanded to interlinked cell responses, where data-driven modelling helped to reveal a novel necrotic response to virus infection that occurred together with apoptosis (Jensen et al., 2013). Statistical models efficiently boil down complex signal–response datasets to hidden dimensions, which can then be unpackaged mechanistically with contemporary biological experiments.
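The idea that a crosstalk-ridden network collapses onto a few hidden dimensions can be sketched with synthetic data. The construction below (two latent factors read out by twenty noisy "signals") is a hypothetical illustration of low-rank structure, not a reproduction of any cited model, and it uses a plain singular value decomposition rather than the partial least squares methods used in the cited studies.

```python
import numpy as np

# If many measured signals are driven by a few shared underlying processes,
# the signalling dataset is effectively low rank. Here, 20 'signals' across
# 50 'conditions' are generated from only 2 latent factors plus noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=(50, 2))        # the 2 hidden dimensions
loadings = rng.normal(size=(2, 20))      # how each signal reads them out
data = latent @ loadings + 0.1 * rng.normal(size=(50, 20))

# The singular values reveal how many dimensions carry the variance.
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
print(f"variance captured by 2 dimensions: {explained[1]:.2f}")
```

In real datasets the latent factors are unknown, which is exactly why their biological identity (e.g. the MK2-IL1A link above) must be unpackaged experimentally after the model finds them.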

Clever correlation leads to causality

Modelling can also be used to bend some of the rules ingrained in signalling research. For example, biologists are commonly trained that correlation should not be mistaken for causation. However, that does not mean that correlation-based methods cannot be used cleverly to uncover new mechanisms (Vilela and Danuser, 2011). The trick is to observe signalling networks in a manner that makes spurious correlations unlikely (Fig. 3). Then, simple modelling or analysis can be used to aid hypothesis generation. In the data-driven approaches described above, for example, one often measures signalling from a stimulus or perturbation across a broad landscape of conditions that would cause spurious correlations to break down.

Fig. 3.

Separating meaningful correlations from spurious correlations. (A) When cells are treated with an acute stimulus (black circle), many pathways are activated concurrently (arrow), and the stimulus dominates the observed changes. (B) Using sensitive techniques that can detect fluctuations without an acute stimulus, subtler and potentially more meaningful correlations (assessed by the Pearson correlation coefficient, R) can be discerned. In the example here, Signal 3 is weakly correlated with the others (−0.2<R<0.2) and signals 4 and 5 are perfectly anti-correlated (R = −1) in B, but they all appear correlated (R>0.9) with an acute stimulus in A. This example exploits the differences in time scales between the slow changes induced by the stimulus and the fast fluctuations that happen spontaneously. All plots are shown on the same arbitrary scale.

An alternative method, co-opted from computational signal processing in engineering, is to look at time-delayed correlations between nodes within a network. These ‘cross-correlations’ can point to regulatory interactions, provided that there are no external driving forces to cause spurious correlations (Dunlop et al., 2008). For example, if the concentration of one signalling protein consistently drops shortly after a second protein is activated, then the resulting cross-correlation will suggest that the second protein inhibits the first. The cross-correlation would be weaker with no connection between the two proteins, even if there were a third protein that controlled both, because two reactions are harder to coordinate over time than one (Vilela and Danuser, 2011). Cross-correlation-based methods have been used with speckle microscopy and FRET sensors to unravel the signalling and mechanical events controlling F-actin assembly and cell protrusion (Ji et al., 2008; Machacek et al., 2009; Tkachenko et al., 2011). The analyses were able to clarify the coordination of focal adhesion assembly, Rho-family GTPase activation, and second-messenger signalling with a spatial resolution of microns and a temporal resolution of seconds. As a tool, cross-correlation should become more widespread with improved reporters that track multiple signalling events in living cells over time.
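The logic of lagged cross-correlation can be sketched on synthetic traces. The two "proteins", the shared driver and the fixed delay below are all assumptions for illustration; real analyses (e.g. on speckle microscopy or FRET time series) must additionally rule out external driving forces, as noted above.

```python
import numpy as np

# Synthetic traces: protein 2 activates, and protein 1 drops a fixed number
# of time steps later, as if protein 2 inhibited it.
rng = np.random.default_rng(3)
n, delay = 500, 10
driver = rng.normal(size=n + delay)
protein2 = driver[delay:]                            # putative inhibitor
protein1 = -driver[:n] + 0.2 * rng.normal(size=n)    # drops 'delay' steps later

# Correlate protein1 against time-shifted copies of protein2 and find the
# lag at which the anti-correlation peaks.
def corr_at_lag(lag):
    if lag >= 0:
        a, b = protein1[lag:], protein2[:n - lag]
    else:
        a, b = protein1[:n + lag], protein2[-lag:]
    return np.corrcoef(a, b)[0, 1]

lags = list(range(-20, 21))
cc = [corr_at_lag(lag) for lag in lags]
best = lags[int(np.argmin(cc))]     # lag of the most negative correlation
print(f"strongest anti-correlation at lag {best}")   # recovers the built-in delay
```

The sign of the peak correlation suggests inhibition versus activation, and the lag at which it occurs orders the two events in time, which is what makes the approach a hypothesis generator rather than mere curve comparison.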

Correlations can also be strategically spread out over many cells instead of many time points. For instance, correlated cell-to-cell fluctuations in protein expression were recently used to infer targets of PKA and Tor signalling in yeast (Stewart-Ornstein et al., 2012). This type of approach does not explicitly require single-cell measurements, because repeated samplings of small groups of cells can provide enough fluctuation to organise core biological functions (Janes et al., 2010). Importantly, a lack of correlation in this setting can be just as powerful as a positive or negative correlation. Weaker-than-expected correlations in FOXO signalling among breast epithelia showed that the FOXO target-gene network is intersected by RUNX1, another tumour suppressor that recently has been implicated in breast cancer (Banerji et al., 2012; Ellis et al., 2012; Janes, 2011; Wang et al., 2011). Past breakthroughs in biology have stemmed from the analysis of fluctuations (Luria and Delbrück, 1943), and so it would be exciting to see these principles applied more widely with the molecular tools of today.

Future perspectives

The best computational-systems work poses a compelling biological puzzle that requires a model to solve (Arkin and Schaffer, 2011). These approaches have already made a positive impact on how we think about cell biology, but where can models of signalling go from here? One immediate application lies at the intersection of signalling networks and targeted therapeutics. Network-modelling approaches are now being used to identify new drug targets, drug regimens and mechanisms of drug action (Kleiman et al., 2011; Lee et al., 2012; Schoeberl et al., 2009). ‘Clean’ molecularly targeted drugs lead to ‘messy’ system-wide adaptations (see above) (Chandarlapaty et al., 2011; Duncan et al., 2012; Gioeli et al., 2011), so we expect network models to be featured more prominently in the future for these purposes.

Longer term, we see potential for models to move ‘downward’ from signalling to gene expression and ‘outward’ from single-cell to multi-cell behaviour. How transcription-factor binding sites contribute to gene expression is complicated, but systematic analyses are beginning to suggest that promoter activity is largely a function of binding-site location and multiplicity (MacIsaac et al., 2010; Segal et al., 2008; Sharon et al., 2012). We thus expect that many new computational models will be developed that link signalling dynamics to transcriptional signatures (Cheng et al., 2011; Huang and Fraenkel, 2009). Likewise, as tools advance for studying single cells at the network level, we anticipate improved models of cell–cell communication, cell heterogeneity and multi-cell properties (Anderson et al., 2006; Feinerman et al., 2008; Jørgensen et al., 2009; Kirouac et al., 2009; Nir et al., 2010). An ambitious but worthwhile long-term goal should be to build faithful models of cell signalling in vivo, and work has already begun in this direction (Lau et al., 2012) (Box 3). Long term, such efforts are likely to require hybrid modelling approaches that combine different mathematical formalisms (Anderson et al., 2006; Bajikar and Janes, 2012; Hayenga et al., 2011). Cell biologists should be able to follow – and, ideally, participate in – these newer developments with the same mind-set as outlined above. Remember: the tools may change, but the thinking is the same.

Computational models of cell signalling were once viewed as incomprehensible abstractions that were detached from the biology they claimed to study. Over time, this perception has changed as computational scientists armed with quantitative datasets have brought theory to practice. Now it is time for empiricists to meet us in the middle and begin to view modelling as a legitimate means for studying signal transduction. Network models are not the answer to every question but, just like flow cytometry or immunoblotting or quantitative PCR, they should have their place in every cell biologist's toolkit.

Box 1. From toy models to chemical-kinetic models

The input–output schematics in the Figure illustrate how signalling networks can get complicated very quickly. Even with a simple input–output system of three proteins (A, B and C), as shown in Figure (i), where the different arrowheads indicate a positive or negative influence, there are 16,038 possible network wirings that connect the input to the output (Ma et al., 2009). Nevertheless, by making a few simplifying assumptions about each connection and the strength of its influence, one can profile the dynamic properties of all possible 3-protein networks computationally (top). Pioneering work by Ma and co-workers showed that among all such models, only two general configurations enabled perfect adaptation of the output to a step-input stimulus (Ma et al., 2009). Similar connectivity constraints have since been uncovered for other systems-level circuit properties (Chau et al., 2012; Shah and Sarkar, 2011). These computational efforts are conceptually distinct from detailed chemical-kinetic models, which seek to capture one biological wiring as accurately as possible (ii). Both model types use the same mathematics – ordinary differential equations (iii) – to capture how information is relayed between proteins.
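The 16,038 figure quoted above can be reproduced by brute-force enumeration. The sketch below follows the counting scheme of Ma and co-workers as described in the text: nine possible directed links among A, B and C (including self-loops), each absent, positive or negative, with a wiring counted only if a causal path connects the input A to the output C.

```python
from itertools import product

# Each of the 9 directed links (AA, AB, AC, BA, BB, BC, CA, CB, CC) can be
# absent (0), positive (+1) or negative (-1), giving 3**9 = 19,683 wirings.
count = 0
for w in product((0, 1, -1), repeat=9):
    ab, ac, bc = w[1], w[2], w[5]
    # Input-to-output connectivity: a direct A->C link, or an A->B->C relay.
    if ac != 0 or (ab != 0 and bc != 0):
        count += 1
print(count)   # -> 16038, matching the figure quoted from Ma et al., 2009
```

Enumerations like this are tractable only for toy models; the dynamic screening of each wiring (the hard part of the cited studies) then requires simulating the corresponding ordinary differential equations.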


Box 2. Formalised network wiring with discrete logical models

As shown in the wiring diagrams of the Figure, biologists often have a solid qualitative sense of how signalling pathways are configured: kinase A phosphorylates substrate C, such that when A is active (‘on’), C is phosphorylated (‘on’), and vice versa (left). Of course, the biology may be more complicated – two kinases (A, B) could act upon a substrate redundantly, meaning that A or B gives rise to phosphorylation of C (middle). Alternatively, substrates may not be fully engaged unless phosphorylated by two kinases, causing both A and B to be required (right). These wiring diagrams can be interconnected and simulated by network models that use discrete logic to propagate on–off states. Discrete logical models have recently emerged as tools for cell signalling. Saez-Rodriguez and co-workers built multiple models of hepatocyte signalling in response to stimuli and signalling inhibitors (Saez-Rodriguez et al., 2009; Saez-Rodriguez et al., 2011). Starting with a comprehensive literature-curated network, the authors refined the wiring to capture measured patterns of immediate-early signalling. Refinement involved extensive network ‘pruning’ to remove literature-derived connections that did not appear to hold true for hepatocytes. Additionally, their models suggested new links between signalling proteins that had escaped the curated database but were supported by the literature or their own follow-on experiments. Discrete logical models provide a formal mechanism for developing context-specific signalling networks that are most consistent with available data. The coarseness of discrete on–off changes in signalling is addressed by more-complicated ‘fuzzy’ logic models, which allow graded transitions between states (Aldridge et al., 2009; Morris et al., 2011).
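The on-off wiring diagrams described above translate directly into executable logic. The small network below (kinases A and B redundant on C via OR; effector D needing both C and a cofactor E via AND) is a hypothetical example, not the hepatocyte network of the cited papers; states are updated synchronously until they stop changing.

```python
# Discrete logical model: each non-input node has a Boolean update rule.
rules = {
    "C": lambda s: s["A"] or s["B"],    # redundant kinases: A OR B activates C
    "D": lambda s: s["C"] and s["E"],   # dual-input effector: C AND E required
}

def simulate(inputs, steps=3):
    # Start with all non-input nodes off, then update synchronously so that
    # on-off states propagate one layer per step.
    state = {"C": False, "D": False, **inputs}
    for _ in range(steps):
        state.update({node: rule(state) for node, rule in rules.items()})
    return state

out = simulate({"A": True, "B": False, "E": True})
print(out["C"], out["D"])   # A alone switches C on; C together with E drives D
```

Network "pruning" in this formalism amounts to deleting or rewriting entries in `rules` and re-checking the simulated states against measured signalling patterns.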


Box 3. An in vivo cell-signalling model evolves through systems-level and hypothesis-driven experiments

A major challenge for in vivo studies of cell signalling lies in properly defining the biological boundaries of the system (grey dashed border in the flow diagram of the Figure). What are the core and auxiliary cell types involved, and how are they communicating with one another? A recent publication nicely illustrates how statistical models can interact with directed experiments to redefine in vivo system boundaries over the course of a study (Lau et al., 2012). The authors sought to examine the contribution of the adaptive immune system to the apoptotic response of intestinal epithelial cells (IECs) following systemic administration of tumour necrosis factor (TNF). Lau and co-workers excluded a role for commensal bacteria by showing that antibiotic treatment had no effect on IEC apoptosis. They then analysed Bioplex assays of cytokines and signalling proteins by using a statistical modelling approach called partial least squares discriminant analysis (PLSDA), which highlighted a role for monocyte chemotactic protein-1 (MCP1, also known as CCL2). Surprisingly, later immunohistochemistry (IHC) experiments showed that MCP1 is most strongly upregulated and expressed in goblet cells of the epithelium. Antibody neutralization (Ab neut) of MCP1 accelerated IEC apoptosis, and profiling various lymphoid and myeloid lineages by FACS showed that MCP1 suppressed the recruitment of plasmacytoid dendritic cells (pDCs). The authors supplemented their PLSDA model with antibody-neutralization experiments directed at MCP1 and pDCs to uncover a TNF-sensitizing role for interferon-γ (IFNγ) in their earlier Bioplex data. The work demonstrates how modelling can participate in the iterative contraction and expansion of system boundaries that describe cell signalling in vivo.
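At its core, PLSDA projects many correlated measurements onto a few latent components that best separate the experimental classes, and the component weights indicate which measurements drive the separation. A bare-bones, single-component sketch in pure Python (the data are hypothetical, and this is not the authors' Bioplex analysis pipeline) conveys the idea:

```python
# Minimal one-component PLS-DA sketch. Rows of X are samples (e.g. cytokine
# panels), y holds binary class labels. Data below are hypothetical.

def center(matrix):
    """Column-centre a list-of-lists data matrix."""
    n, p = len(matrix), len(matrix[0])
    means = [sum(row[j] for row in matrix) / n for j in range(p)]
    return [[row[j] - means[j] for j in range(p)] for row in matrix]

def plsda_one_component(X, y):
    """Return latent scores and the normalised weight vector."""
    Xc = center(X)
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]
    n, p = len(Xc), len(Xc[0])
    # Weights: covariance of each measurement with the class label
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores: projection of each sample onto the weight vector
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    return t, w

# Hypothetical panel: measurement 0 tracks the class, measurement 1 is noise
X = [[1.0, 5.2], [1.2, 4.9], [3.9, 5.1], [4.1, 5.0]]
y = [0, 0, 1, 1]
scores, weights = plsda_one_component(X, y)
```

The latent scores cleanly separate the two classes, and the larger weight on the first measurement flags it as the discriminating analyte – the same logic by which the authors' PLSDA model flagged MCP1 within a much larger cytokine panel.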



  • Funding

    Work in the laboratory of K.A.J. is supported by the National Institutes of Health Director's New Innovator Award Program [grant number 1-DP2-OD006464], the American Cancer Society [grant number 120668-RSG-11-047-01-DMC], the Pew Scholars Program in the Biomedical Sciences, and the David and Lucile Packard Foundation. This report was partially supported by National Institutes of Health grants to D.A.L. [grant numbers U54-CA112967 (NCI Integrative Cancer Biology Program), R24-DK090963 and R01-EB010246]. Deposited in PMC for release after 12 months.

