Review

Dynamic Data-Driven Modeling for Ex Vivo Data Analysis: Insights into Liver Transplantation and Pathobiology

1 Department of Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA
2 McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, PA 15213, USA
3 Department of Surgery, Thomas E. Starzl Transplantation Institute, Pittsburgh, PA 15213, USA
* Author to whom correspondence should be addressed.
Computation 2017, 5(4), 46; https://doi.org/10.3390/computation5040046
Submission received: 15 October 2017 / Revised: 12 November 2017 / Accepted: 16 November 2017 / Published: 23 November 2017
(This article belongs to the Section Computational Biology)

Abstract: Extracorporeal organ perfusion, in which organs are preserved in an isolated, ex vivo environment over an extended time-span, is a concept that has led to the development of numerous alternative preservation protocols designed to better maintain organ viability prior to transplantation. These protocols offer researchers a novel opportunity to obtain extensive sampling of isolated organs, free from systemic influences. Data-driven computational modeling is a primary means of integrating the extensive and multivariate data obtained in this fashion. In this review, we focus on the application of dynamic data-driven computational modeling to liver pathophysiology and transplantation based on data obtained from ex vivo organ perfusion.

1. Introduction

Irreversible hepatic fibrosis followed by progressive loss of hepatic function, a condition known as cirrhosis, is the common final pathway for end-stage liver disease, regardless of initial etiology. In spite of scattered clinical therapies involving medications, diet, and interventions such as transjugular intrahepatic portosystemic shunting (TIPS), the only definitive treatment for cirrhosis is hepatic replacement via transplantation. As is the case with other organs, the number of transplants that can be performed is limited not just by the number of acceptable donor grafts, but also by the viability of these grafts at the time of transplantation. Unfortunately, the current standard of cold and hypoxic preservation often leads to the loss of organ viability between procurement and transplant. Extracorporeal organ preservation with machine perfusion providing oxygenation, in which organs are isolated in an ex vivo environment over an extended time-span, is a concept that has led to the development of numerous alternative preservation protocols designed to better maintain viability. An often-overlooked benefit of these perfusion methods, and the focus of this review, is the unique opportunity they offer researchers for extensive biological sampling free from systemic influences. Most immediately, this information can be used in the monitoring of organs undergoing preservation, with the goal of better predicting post-transplant function. More broadly, the quality and quantity of the data obtained in this fashion allows for potentially novel insights in the study of organ physiology and pathology. To fully harness the power of this technique, however, a proper understanding of the interplay among variable types is imperative, and computational modeling is the primary means of integrating such multivariate data. In this review, we focus on the application of dynamic data-driven computational modeling to liver transplantation and pathophysiology. We discuss this material both directly and in the form of case studies. It is our hope that the integration of these computational modeling methods will lead to improved organ preservation protocols, technologies, and solutions, as well as yielding novel and clinically actionable insights into the underlying disease processes.

2. Liver Pathophysiology: Challenges and Opportunities Related to Liver Transplantation

Though multi-omic studies and the modeling techniques discussed in this review are not specific to any organ, much of our work has focused on the liver and will be presented as examples. A brief background of hepatology relevant to transplantation and the projects discussed is presented here in order to provide context for the case studies that follow, and also to illustrate the potential for deriving insights regarding the underlying liver disease using computational tools in combination with serial measurements from diseased organs during preservation.

2.1. Liver Metabolism/Biochemistry

Many biochemical processes key to metabolism are located exclusively or primarily in the liver, and more specifically in the hepatocytes that comprise 80% of this organ [1]. Within carbohydrate metabolism, for example, these processes include gluconeogenesis, glycogenesis, and glycogenolysis [2]. In fatty acid metabolism, the liver is the site of ketone body formation and phospholipid production [1]. Other processes exclusive to the liver are part of larger pathways that involve multiple organs, such as the liver’s role in converting lactate produced in muscle to glucose during the Cori cycle [2]. The liver is also the site of numerous detoxification processes. This is true both for external substances, such as alcohol [3], and for endogenously produced substances, such as ammonia in the urea cycle [2]. Similarly, due to the liver’s connection to the gastrointestinal tract, it is the exclusive site for numerous digestive processes, such as lipid metabolism and subsequent bile formation [4]. The processes mentioned here, and all others occurring in the liver, are highly regulated. The liver is able to respond to both fed and fasting states appropriately via nutrient, hormonal, and neuronal regulation [1].

2.2. Liver Transplantation

When liver physiology is perturbed, the result is often some form of disease that can lead to either eventual or abrupt failure of the organ. Cirrhosis and drug-induced liver injury, respectively, are the most common causes of chronic and acute liver failure [5]. Regardless of disease duration, transplantation is the definitive treatment for virtually all types of end-stage liver disease. The main causes of cirrhosis are chronic hepatitis C virus (HCV) infection, alcoholic liver disease, and nonalcoholic fatty liver disease (NAFLD)/nonalcoholic steatohepatitis (NASH), with relative rates differing by ethnicity [6]. While HCV infections currently account for ~30–45% of liver transplantations in the USA and Europe, NAFLD/NASH is already the leading cause of cirrhosis in certain minority groups [6], and it is predicted that NASH will become the most common indication for liver transplantation by 2020 [7]. Today, indications for liver transplantation have also grown to include (but are not limited to) primary biliary cirrhosis, sclerosing cholangitis, autoimmune hepatitis, alcoholic cirrhosis, cryptogenic cirrhosis, hepatitis B virus (HBV), biliary atresia, metabolic liver diseases, and hepatocellular carcinoma (HCC) [8].
Out of the 29,532 organs transplanted within the US in 2014, 6729 were livers [9]. Livers consistently make up the second largest category of organs transplanted per year, and, as such, are the focus of this review. Since the first attempted human liver transplant by Starzl in 1963, the procedure has developed from being initially experimental and high-risk to being widely-accepted, with sustainable survival rates (90% in the first year). There are several techniques for liver transplantation involving both deceased and living donors, including split liver allografts, reduced-size livers, transplantation across incompatible ABO blood type matches, and auxiliary liver transplants as heterotopic implants for a short period of time [10,11]. Auxiliary liver transplantation (ALT), a technique which uses a partial donor lobe to support the recipient organ [10,11], can be subdivided into three separate surgical techniques: heterotopic ALT, auxiliary partial orthotopic liver transplantation, and whole graft ALT [12,13].
As organ transplantation becomes more ubiquitous and successful, new techniques must be applied to increase the number and quality of transplantable organs. In the past, organs were kept in simple cold storage containers—an effective method, but only as long as preservation periods were relatively short. Now, the imbalance between organ supply and demand has necessitated the transport of organs over longer distances, which increases ischemic time, and the use of previously discarded allografts that qualify as extended criteria donors (ECD), which have experienced a variety of ischemic insults prior to procurement despite having close to normal anatomical features. Machine perfusion, an ex vivo system resembling extra-corporeal membrane oxygenation (ECMO) that continuously perfuses and oxygenates organs at various temperatures [14], has emerged as a reliable technique to extend organ shelf life and minimize ischemia-reperfusion complications inherited from ECD organs.

2.3. Liver/Transplant Immunology and Inflammation

The immune system plays a major role in all chronological stages of liver transplantation. In the context of “omic” data, the immune system interconnects features from genomics, transcriptomics, proteomics (e.g., interleukins, interferons, immunoglobulins), metabolomics (e.g., prostaglandins, leukotrienes, free radical reaction products), and microbiomics. The use of reductionist approaches to study transplant immunology has led to vast improvements in the field since its conception nearly 50 years ago. However, the complexity of the immune response and the interconnectivity of the systems involved have left the field with many unanswered challenges. In hopes of generating avenues for discovery in the field of liver transplantation—and in hopes of avoiding a formidable stalemate on improvements in its efficacy and outcomes—data-driven and mechanistic modeling may prove particularly advantageous in better understanding the dynamic interactions of the immune system’s multi-omic parts. Here we will discuss the role of the immune system across the perioperative timeline of liver transplantation: preoperative, intraoperative, and postoperative.

2.3.1. Preoperative

Many liver pathologies (e.g., steatosis, scarring, and fibrosis) are associated with acute and chronic inflammatory processes and are mediated by a host of immune cells and aberrant signaling pathways. Liver cirrhosis, for example, is accompanied by decreased macrophage and neutrophil chemotaxis, with alcoholic cirrhosis showing marked T-cell and NK-cell impairment. Other conditions, such as HCV infection, are associated with altered B-cell expansion and immunoglobulin dysfunction. Moreover, many cases of end-stage liver disease are concomitant with decreased production of complement and acute phase reactant proteins from the liver, as well as a “cytokine storm” of pro- and anti-inflammatory mediators [15]. Thus, it becomes apparent that a challenge for future experimental design is to utilize methods diverse enough to capture such a hypervariable system.

2.3.2. Intraoperative

Issues of organ preservation and histocompatibility emerge as important immunologic considerations as the donor organ and recipient are prepared for transplantation. In recent years, there has been an increase in liver donations after cardiac death (DCD) accompanied by poor graft survival and inferior clinical outcomes [16]. As compared to donation after brain death (DBD), DCD requires the termination of life-sustaining therapies and confirmed cardiac arrest leading to a legal death pronouncement before organ procurement may be conducted, leading to prolonged and variable periods of warm ischemia [16]. Such insults lead to irreversible cellular damage to hepatocytes and sinusoidal lining cells, alongside increased ischemic necrosis and neutrophilic infiltrate. Ischemic cholangiopathy following DCD is the most challenging insult to resolve, since cholangiocytes (epithelial cells of the bile duct) are terminal cells extremely sensitive to ischemia. Differential rates of ischemic cholangiopathy following DCD have been associated with the choice of immunosuppression, suggesting that manipulation of inflammation during the preservation period may be imperative to improving graft survival [17].

2.3.3. Postoperative

Managing morbidity and mortality following liver transplantation, and more specifically the clinical and surgical issues related to allorecognition following transplantation, provides yet another large set of challenges for both clinicians and the field of transplant biology. While immunosuppressive regimens are needed in the postoperative period to prevent acute and chronic liver rejection, it is likely that such regimens can interfere with the progression of the primary diseases leading to end-stage liver disease. This issue has been extensively studied in patients with autoimmune diseases who had recurrence of their primary disease within the first decade after liver transplantation [18]. The challenge of rapid disease progression in a compromised immune state was initially highlighted by the large number of HCV transplant recipients acquiring infections postoperatively that progressed to cirrhosis within five years [7]. Immunosuppressive drugs are currently available to postoperatively target antigen presentation, complement activation, T-cell and B-cell stimulation, TNF-α, and many additional pathways. While these therapies have become more selective and less toxic, the mechanisms of action for many of these drugs remain poorly understood. For example, monoclonal antibodies targeting the cell surface molecules CD25 and CD52 have been successful in improving graft survival, yet there is currently no way to monitor their function or their appropriate therapeutic levels [7]. Additionally, some patients who do in fact survive long after transplantation may develop complex cardiovascular and metabolic disorders that have a major impact on their long-term outcomes. There is still much work remaining to develop continuous immune monitoring assays and individualize immunosuppressive regimens to each recipient based on their multi-omic make-up [7].

3. Organ Perfusion: Generating Ex Vivo Data

3.1. Data Types in Perfusion Experiments

The development of perfusion solutions has been centered on providing metabolic support for organs during the preservation phase prior to transplant [19]. However, there is an ongoing opportunity for improving these solutions by defining how they impact organ metabolism, as well as how they might affect related processes such as the inflammatory response that, secondary to ischemia/reperfusion injury, drives further deterioration of the preserved organ [20]. Our group has recognized the need to define, in granular detail, the impact of preservation on multiple biomolecules using a systems approach [21,22].
Four primary levels of “omes” are the source of a majority of data in perfusion experiments: the genome, transcriptome, proteome, and metabolome. These levels represent the central dogma of biology, which states that information necessary for the functioning of life is stored, transcribed, and translated before having its ultimate effect at the level of signaling proteins, structural proteins, or enzyme-derived metabolites. Genomic data are obtained in the form of DNA, usually by polymerase chain reaction (PCR) amplification followed by sequencing of genes of interest. Whole-genome sequencing, historically used predominantly in microorganisms [23], will likely play an increased role in human research in the future as technology improves and costs continue to decline [24]. The primary sample type for the transcriptome is mRNA, usually sampled on array chips that can bind and measure tens of thousands of known transcripts simultaneously [25]. Known proteins and enzymes of interest typically represent the proteome, and while there exist many ways to derive data at this level, methods such as mass spectrometry (MS) that can provide quantitative measurements are favored for multi-omic experiments [26].
The last of these primary “omes” is the metabolome, which is particularly relevant to translational research since it is closely connected to phenotype and function. Metabolomic sample types include carbohydrates, nucleotides, amino acids, lipids, and more, representing both key points in common biochemical pathways and less frequently studied intermediates. The two primary methods at this level are nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry, the latter of which is usually combined with liquid chromatography (LC) or another form of separation [27]. Issues of expense and reliability have historically caused researchers to avoid this type of data, but with modern technology mitigating these concerns, a lack of precedent is now the primary factor hindering the inclusion of metabolomic data in more types of biomedical research [28].
We have lumped multiple biological end-products under the umbrella of “metabolomics”, though it should be noted that others have sub-divided this category into distinct “omes”, such as the lipidome or phosphoproteome. Other “omes” related to the multitude of microorganisms living commensally in the human body (e.g., the microbiome [29,30], the virome [31]) have been implicated in regulating numerous physiological and pathological processes. Additionally, other categories of biomolecules, such as inflammatory mediators and micro-RNAs, may be considered “omes” despite not having the specific nomenclature. Essentially, the classification and study of “omes” is a systems biology approach to characterize groups of relevant biological variables. The tools presented in this review can be applied to any “ome” or other large set or sets of data, not just those included in the example scenarios.

3.2. Data Analysis in Perfusion Experiments

Data analysis represents the next major challenge in conducting organ perfusion research once multi-omic data have been obtained. The inherent differences in the structure and dimensionality of data obtained using a variety of methods and sample types, along with the large size of datasets resulting from many of the methods discussed above, make any single-level analysis or attempt at multi-level integration a complex scenario [32]. The ultimate goal of multi-omic experiments is to be able to examine many data types simultaneously, gaining a better understanding of mechanisms that underlie both pathology and treatment, and then to correlate these mechanisms to the observed clinical features and outcome. Given these broad goals and complex data, the use of quantitative methods and systems biology approaches such as mathematical modeling is an appropriate first strategy [33]. Then, the results and implications of these analyses can help researchers focus on more specific targets for future studies using laboratory or more traditional statistical methods. These approaches are being applied with increasing frequency to fields throughout science and medicine [28], and are particularly helpful in studies of complex networks and multiscale systems. Below, we focus on the key approaches to analyzing these types of ex vivo-generated data, with examples from liver preservation as well as insights into liver pathophysiology.

4. Computational Modeling: A Systems Biology Tool for Gaining Insights into Liver Disease and Transplantation

4.1. Data Types in the Context of Liver Disease and Transplantation

“Omic” data represent an excellent example of variety, and have the potential to yield important insights into the biology of liver disease and the impact of organ perfusion for liver transplantation. As both computing power and the sampling methods discussed earlier in this review advance, we expect to see exponential increases in the number of measurements in these datasets, and resultant issues of veracity as well. If these types of measurements are integrated into future clinical decision-making, issues of speed of acquisition could also easily arise. Finally, the methods discussed in this review and used in the case studies containing experimental data are all able to handle high-volume datasets.

4.2. What Is a Model?

A model is a simplified representation of some real-world entity [34]. In this general sense, the use of models is already widespread in the fields of science and medicine. Animal and cell-line models provide a standardized vehicle with which to isolate and study complex conditions and systems. More abstractly, the diagrams found throughout scientific literature, uncluttered visual depictions of often-intangible phenomena, also meet this definition of a model. Common statistical procedures, such as correlations and regressions, which reduce experimentally obtained data into explanatory numbers and shapes, are models as well.
The computational models used in systems biology attempt to accomplish no more or less than the aforementioned, more common, examples. Like all other models, they capture the essence of a process under experimentation, but instead of using living organisms, pictures, or familiar statistical terms to communicate their findings, computational models use the language of mathematics or computer programming. Though proficiency in these languages is necessary to fully understand and implement the modeling techniques presented herein, the purpose of this review is to introduce the reader to the existence of, rationale for, and potential benefits of these models. Thus, discussions will incorporate only the minimum degree of mathematics necessary to present concepts effectively.

4.3. Goals of Computational Modeling

By now, the rationale for using computational approaches, such as modeling, for ex vivo data analysis in organ perfusion experiments should be apparent. Simply put, these investigations yield results that conventional methods have trouble describing, and computational biology offers a promising new set of tools with which key, and often non-intuitive, insights may be derived. Before proceeding to present these methods in detail, it is prudent that their general benefits and limitations be appreciated first.
A crucial concept is that computational tools (and the systems biologists who employ them) do not minimize the significance or necessity of traditional reductionist methods. Computational modeling/systems biology approaches are complementary to, not replacements for, traditional approaches to carrying out biomedical research. Ideally, each approach should inspire further advances in the other in an iterative fashion. For example, computational results may identify specific molecular targets or pathways for further investigation in the laboratory. Conversely, computational analyses may help in the testing of potential therapies derived from simplified in vitro or in vivo laboratory experiments.
In either order, the scientific benefits of computational modeling methods are supplemented by the increases in efficiency they provide. Computational methods are among the most cost-effective in any field in terms of time, resources, and manpower (i.e., “electrons are cheap”).

4.4. Modeling Approaches: Data-Driven vs. Mechanistic Modeling

In any investigation, the course of discovery follows the scientific method. The analysis step of this process, however, where data are translated into new knowledge, is far less straightforward in multi-omic experimentation than in other fields, largely due to the previously described problems posed by the scope and scale of the data [35]. Two general approaches that can be used to help here are data-driven and mechanistic modeling. Data-driven modeling uses primarily association-based mathematical methods able to analyze within and among “omes” [36]. These models are statistical and phenomenological in nature, meaning they describe what is occurring but do not attempt to explain how or why. Data-driven modeling techniques are often better able to accommodate the size and dimensionality of multi-omic datasets compared to traditional statistical techniques constrained by parametric requirements, and the results obtained can often be combined with the scientific knowledge of the investigator to hint at underlying mechanisms [33].
Though not discussed here, mechanistic modeling can be undertaken when the investigator feels there is a sufficient understanding of how and why a system works (i.e., a sufficient understanding to create an abstraction of the system being studied, while not knowing everything about how this system works so that modeling can be used to gain novel insights). Mechanistic models are based on mathematical abstractions of the underlying mechanisms of a system in question, often using equations or decision rules, and may be deterministic (i.e., a simulation yields the same results when run under the same conditions) or stochastic (i.e., random, in which each simulation is different in some way from others run under the same conditions) [36]. These models can be tested and refined by comparing the predictions they generate—especially predictions regarding so-called “emergent behaviors” of the system as a whole (i.e., “the whole is greater than the sum of the parts”)—to real-world data. Once validated, mechanistic models have a wide range of uses, from gaining further mechanistic knowledge at the basic science level to making clinical predictions.
Our focus in this article is on data-driven modeling, but specifically on approaches aimed at integrating data obtained at multiple time points rather than at a single time point from many individuals. Moreover, multi-omic studies across all fields, and especially in the setting of organ transplantation, are still in their infancy; therefore, the case studies presented herein will primarily employ data-driven models.

4.5. Dynamic Data-Driven Modeling Methods

Modeling methods and similar computational tools have been reviewed in detail elsewhere [35,36,37,38,39,40,41]. Here, some selections are explained in more detail. Though not comprehensive, they have been chosen to represent the wide range of available techniques. For some methods, we have provided examples based on data (Figure 1) that were generated artificially in order to demonstrate as clearly as possible how different trajectories of biological analytes could be visualized and interrelated via several key computational modeling algorithms. More specifically, these data are designed to represent variables of varying levels (high or low) and direction (increasing, decreasing, or non-linear).
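For readers who wish to experiment with the algorithms below, the following Python sketch constructs an analogous artificial dataset. The specific values, directions, and functional forms are our own illustrative assumptions and do not reproduce the data plotted in Figure 1; only the qualitative categories (high or low level; increasing, decreasing, unchanging, or non-linear) follow the description above.

```python
# Illustrative construction of artificial analyte trajectories (assumed values):
# A/E unchanging (low/high), B/C low with directional change, F/G high with
# directional change, D/H non-linear, matching the categories described above.
import numpy as np

t = np.linspace(0, 1, 6)        # six sampling times over the experimental course
lo, hi = 0.1, 1.0               # assumed "low" and "high" baseline levels

trajectories = {
    "A": lo * np.ones_like(t),                  # low, unchanging
    "B": lo * (1.0 + t),                        # low, increasing (direction assumed)
    "C": lo * (2.0 - t),                        # low, decreasing (direction assumed)
    "D": lo * (1.0 + np.sin(np.pi * t)),        # low, non-linear (form assumed)
    "E": hi * np.ones_like(t),                  # high, unchanging
    "F": hi * (1.0 + 0.5 * t),                  # high, increasing (direction assumed)
    "G": hi * (1.5 - 0.5 * t),                  # high, decreasing (direction assumed)
    "H": hi * (1.0 + 0.5 * np.sin(np.pi * t)),  # high, non-linear (form assumed)
}

# (time points x variables) matrix referenced by the sketches in the sections that follow
artificial = np.column_stack(list(trajectories.values()))
```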

4.5.1. Principal Component Analysis (PCA)

Principal component analysis (PCA) is an example of an entry-level quantitative tool that researchers from a variety of fields have already added to their repertoire. This method does require most biologists and physicians to learn some additional mathematics, but the level required is quite attainable and the wide range of potential uses makes the investment worthwhile. PCA analyzes the covariance matrix of all measured variables to quantify how each contributes to the overall variability of the dataset. One major use for this information is to identify relationships between and among variables in order to separate them effectively into groups. Another is to efficiently and objectively reduce the dimensionality of a dataset by eliminating from further investigation variables shown to be least important to the overall information content of a time-varying, multivariate dataset [39,42]. In this way, PCA can be used as a filter for other analyses that have constraints on the number of variables that can be input [43]. Using the same logic of dimension-reduction, PCA results can also be used to identify variables most important to the response being measured and serve as a substitute to traditional statistical tests of significance when the dimensionality of data renders these tests powerless. It should be noted that there is one important caveat to this statement: PCA relies entirely on linearity, since it is in essence simply a rotation of the data matrix; thus, PCA is most useful when the interaction among variables is not overly nonlinear. In organ perfusion experiments, the grouping of high variable counts with low sample sizes, due to limited supply of experimental organs, often leads to this very situation.
There is a fairly extensive literature on the use of PCA in the setting of liver disease. An early study by Folkerts et al. utilized PCA as well as hierarchical clustering to define key clinical, clinical chemical, and histological parameters obtained from a cohort of nearly 200 liver disease patients in order to segregate patient sub-groups [44]. In the liver transplantation setting, Gelson et al. examined histopathology and immunocytochemistry data on lymphocyte and cell cycle markers, as well as flow cytometry data on circulating and intrahepatic lymphocytes, obtained from patients with established liver grafts. Using PCA, they identified a set of variables (lobular inflammation, portal inflammation, interface hepatitis, and fibrosis) as key characteristics of these patients [45]. In a metabolomic study of NAFLD in both mice and humans, PCA was utilized to suggest a series of biomarkers associated with disease progression (e.g., creatinine and eicosanoid metabolites) [46]. PCA was one of a suite of data-driven modeling tools used in another clinical metabolomic study of acute alcoholic hepatitis to distinguish biomarkers in patients vs. controls [47]. In a more recent study, Zhang et al. examined circulating microRNAs in small cohorts of chronic hepatitis B and NASH patients vs. healthy controls, and found that PCA (but not the raw data) could distinguish patient sub-groups from healthy controls [46]. Similarly, Zhou et al. used PCA and other data-driven modeling tools to segregate chronic hepatitis B patients based on parameters of inflammation grade, gene expression profiles, and clinical chemistry [48]. Finally, China et al. utilized PCA to help define the potential impact of albumin therapy on prostaglandins and other inflammation-related lipids in patients with acute decompensation and acute-on-chronic liver failure [48].
In our group’s approach to PCA, data are first normalized for each variable (by dividing each value by the maximum value for that variable) in order to convert all variable levels onto the same scale (from 0 to 1) prior to performing PCA. This eliminates any artifactual effects on variance caused by variables having different ranges of values. Then, the covariance matrix of these normalized data is constructed and its eigenvectors and eigenvalues are calculated. Only enough eigenvectors to capture a predetermined percentage of the total variance in the data are considered for further study; these are the principal components of the dataset. From these leading components, the coefficient (weight) associated with each variable’s contribution to a component is multiplied by that component’s associated eigenvalue. This product represents the contribution of a given variable to the variance accounted for by that component. An overall score for each variable is calculated by taking the sum of its scores across components, and represents a measure of that variable’s contribution to the overall variance of the system. The variables with the largest scores are the ones contributing most to the variance of the process being studied, and are therefore hypothesized to be most important to the underlying process being investigated. More specifically, the overall PCA score for each variable is calculated as $P_j = \sum_i |e_i \cdot W_{ij}|$, where $i$ indexes the components, $j$ indexes the variables, $e_i$ is the eigenvalue of the $i$-th component, and $W_{ij}$ is the amount that the $j$-th variable contributes to the $i$-th component.
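A minimal sketch of this scoring scheme is shown below, assuming the input is a (time points × variables) NumPy array; the function name, variance threshold, and data layout are our own illustrative choices rather than details taken from the original publications.

```python
# Sketch of the PCA-based variable-scoring scheme described above (illustrative only).
import numpy as np

def pca_variable_scores(data, variance_threshold=0.95):
    """Score each column (variable) of `data` by its contribution to overall variance."""
    normalized = data / data.max(axis=0)                # scale each variable to [0, 1]

    cov = np.cov(normalized, rowvar=False)              # covariance matrix of the variables
    eigenvalues, eigenvectors = np.linalg.eigh(cov)     # eigendecomposition

    order = np.argsort(eigenvalues)[::-1]               # sort components by eigenvalue
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    # Retain only enough leading components to reach the chosen variance threshold.
    explained = np.cumsum(eigenvalues) / eigenvalues.sum()
    n_components = int(np.searchsorted(explained, variance_threshold)) + 1

    # Overall score per variable: P_j = sum_i | e_i * W_ij |
    W = eigenvectors[:, :n_components]                  # weight of each variable per component
    e = eigenvalues[:n_components]
    return np.abs(W * e).sum(axis=1)

# e.g., pca_variable_scores(artificial) ranks the eight analytes sketched in Section 4.5.
```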
When performed on our artificial data set (Figure 2), PCA shows that the two low variables exhibiting directional changes (C, B) are the primary drivers, followed closely by the two high variables that exhibit directional change (G, F). The pair of non-linear variables (H, D) is next, followed by the non-changing pair (A, E), which is ranked last. Also noteworthy is the component distribution of the directionally changing variables compared to that of the non-linear variables: whereas the PCA scores of the former are primarily derived from Component 1 contributions, the latter are much more heavily derived from Component 2, demonstrating PCA’s ability to separate variables by category.

4.5.2. Partial Least Squares Regression (PLS)

Unlike PCA, which may be used in single or multi-omic situations, PLS (also known as “Projection to Latent Structures”) in this setting is applicable to the comparison of two (and only two) “omes”. More specifically, PLS maps one predictor category of variables onto an observation category in an effort to best quantify how the former explains the latter. The directionality of this method can be utilized in biomarker discovery to compare “omes” downstream in the central dogma, where physiological or pathological differences are manifest, with upstream “omes” that may contain those differences’ underlying causes. PLS is also preferable to similar methods in situations where the predictor category has highly collinear variables or contains significantly more variables than the observation category, both of which are common in multi-omic studies.
The method starts similarly to PCA, with the extraction of components from the predictor dataset (M). Since PLS deals with two datasets, components are extracted from the response dataset (N) as well. The key difference between PLS and PCA is that PCA identifies components of M that best account for the covariance of M, whereas PLS identifies components of M that best account for the covariance between M and N. The detailed mathematics of how this occurs is beyond the scope of this review and will not be described here, but is documented extensively elsewhere [49]. An important consequence of this difference, and one that is necessary for researchers to recognize, is that unlike PCA, PLS should not be used with the intent of eliminating irrelevant variables from further investigation. Rather, it should be used to select promising variables for future study. This distinction can be thought of as akin to strategically treating PCA as a “rule out” test and PLS as a “rule in” test.
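As a concrete illustration, the sketch below fits a PLS regression of a hypothetical downstream “ome” on an upstream one using scikit-learn and flags promising predictor variables. The synthetic data, variable counts, and ranking heuristic are illustrative assumptions of ours, not the procedure used in any of the cited studies.

```python
# Illustrative PLS "rule in" screen: map an upstream predictor block (M) onto a
# downstream response block (N) and flag promising predictor variables.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
M = rng.random((20, 50))                                         # e.g., 50 upstream transcripts, 20 samples
N = M[:, :3] @ rng.random((3, 4)) + 0.1 * rng.random((20, 4))    # 4 downstream metabolites (synthetic)

pls = PLSRegression(n_components=2)
pls.fit(M, N)

# One simple heuristic: sum the absolute PLS weights of each predictor across
# components and follow up on the largest. (Other criteria, such as VIP scores,
# are also commonly used.)
influence = np.abs(pls.x_weights_).sum(axis=1)
print(np.argsort(influence)[::-1][:5])                           # five most promising predictors
```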

4.5.3. Dynamic Network Analysis (DyNA)

Network modeling involves visualizing dynamic interactions among variables in the form of networks, in which variables are network nodes and the interconnections among them are edges (lines or directed arrows). Though network modeling is now quite pervasive, it is usually performed on a single time point based on data from multiple individuals or from repeated experiments in animals or cells. However, we and others have demonstrated the utility of dynamic network inference, utilizing data obtained over time. Others have typically focused on transcriptomic analysis and subsequent network analysis in the context of liver disease. For example, Oh et al. examined the transcriptome of mice subjected to a high-fat diet as a model of NASH [50]. Based on eight time points obtained over 24 weeks in these mice, they identified central nodes related to inflammatory signaling (Toll-like receptor 2 and CD14) along with cell cycle signaling (Cyclin D1), and implicated them in the processes of steatosis and inflammation. They also inferred a common signaling pathway (ErbB/insulin) as a potential link among the various networks induced in this animal model of liver disease [50].
While multiple network analysis methods have been developed (see our later discussion of Dynamic Bayesian Networks), we developed Dynamic Network Analysis (DyNA) as a bridge between traditional statistics and mathematical modeling [22,51,52,53,54,55,56,57]. It combines two basic statistical measures, t-testing and correlation, with the added dimension of time and uses a traditional node-and-link output to create networks that can be analyzed both visually and quantitatively. This blend of approaches is generally more accessible to researchers without any specialized quantitative training, and reveals insights that often go unnoticed when t-testing and other forms of statistical correlation are used in their traditional fashions.
In organ perfusion experiments, where time plays a crucial role and large numbers of variables are often better analyzed visually, DyNA can be particularly useful. This technique, unlike PCA and PLS, considers time points discretely, rather than considering the entire experimental time course as a whole. While this strategy offers the advantage of higher temporal resolution, network analyses carried out on single time points (such as comparing microarray data in control vs. experiment, or placebo vs. treatment) suffer from being difficult to interpret due to the fact that the trajectory of the system being studied is unaccounted for and often unknown. By creating networks for sequential time points, or groups of time points, DyNA attempts to minimize this risk while still maintaining the benefits of a discrete investigation.
The mathematical formulation of this method is straightforward. First, time intervals are chosen for the output networks. The total number of intervals, the number of time points to include in each, and whether adjacent intervals overlap are at the discretion of the investigator. In order to be included in a network, a given mediator must be statistically significantly different from its baseline value by Student’s t-test. Positive edges are created between two nodes if the value of their correlation coefficient is greater than or equal to a preselected threshold x, and negative edges are created if the coefficient is less than or equal to −x. Network density may be calculated utilizing the following formula (a minor revision of that reported by Assenov et al. [58]): (total # of edges × total # of nodes) ÷ (maximum possible # of edges among the total nodes).
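The sketch below implements this construction for a single time interval, assuming the data are organized as a dictionary mapping each time point to a (replicates × mediators) array; the significance level, correlation threshold, and data layout are illustrative assumptions.

```python
# Illustrative DyNA-style network for one time interval (assumed data layout).
import numpy as np
from itertools import combinations
from scipy import stats

def dyna_network(samples, interval, baseline, threshold=0.95, alpha=0.05):
    """samples: dict of time point -> (replicates x mediators) array."""
    n_mediators = samples[baseline].shape[1]

    # Nodes: mediators that differ significantly from baseline within the interval (t-test).
    nodes = [j for j in range(n_mediators)
             if any(stats.ttest_ind(samples[t][:, j], samples[baseline][:, j]).pvalue < alpha
                    for t in interval)]

    # Edges: correlations over the interval exceeding +threshold (positive, "black")
    # or falling below -threshold (negative, "red").
    stacked = np.vstack([samples[t] for t in interval])
    edges = []
    for i, j in combinations(nodes, 2):
        r = np.corrcoef(stacked[:, i], stacked[:, j])[0, 1]
        if r >= threshold:
            edges.append((i, j, "+"))
        elif r <= -threshold:
            edges.append((i, j, "-"))

    # Network density, per the (revised) Assenov et al. formula quoted above.
    max_edges = len(nodes) * (len(nodes) - 1) / 2
    density = (len(edges) * len(nodes)) / max_edges if max_edges else 0.0
    return nodes, edges, density
```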
When applied to our sample data (Figure 3), the DyNA algorithm links variables that exhibit both positive (black) and negative (red) correlations in a given time interval. It is also notable that the variables that do not exhibit change (A and E) are correctly left unlinked to the other, changing variables by the algorithm.

4.5.4. Dynamic Bayesian Network Inference (DyBN)

Like DyNA, DyBN is highly applicable to perfusion experiments because it is a time-based analysis. Its node-and-edge output is similar to that of DyNA, but with two differences in the edges: the inability to distinguish between positive and negative associations, and variations in thickness to account for differing strengths of association. The key difference between DyNA and DyBN is that, instead of producing individual networks for multiple intervals, DyBN yields a single resultant network representative of the entire experimental time course. This approach treats the system in question more holistically, revealing its dominant patterns and participants. For more mathematically inclined readers, the complete algorithmic strategy for DyBN uses an inhomogeneous dynamic change-point model with a Bayesian Gaussian equivalent (BGe) scoring criterion, and has been well described elsewhere [59,60].
Two important points that distinguish DyBN from DyNA and PCA must be understood. First, this algorithm is stochastic in nature, meaning that resultant networks may vary slightly across multiple runs on the same input data. This concern is primarily theoretical: in practice, differences are uncommon due to the high number of iterations employed, and when present, they occur in nodes and links that are on the cusp of inclusion requirements and are therefore unlikely to be crucial to the overall results and interpretation. Second, the dimension constraints of DyBN often limit the number of variables that can be analyzed to far below the number typically present in modern experiments. While there is no simple formula to calculate this maximum, as it is related to the number of subjects and time points, in practice it is often in the tens. When dealing with data that include hundreds or thousands of variables, this problem can be solved by using a filtering method (such as PCA, as described previously) to select the variables most appropriate for inclusion.
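The full change-point/BGe algorithm is beyond the scope of a short example, but the sketch below conveys the general idea of inferring a single, time-course-wide directed network with weighted edges from serial data. It uses a deliberately simplified first-order lagged-regression scheme of our own devising, not the published DyBN algorithm [59,60].

```python
# Simplified stand-in for time-course-wide network inference (NOT the
# change-point/BGe DyBN algorithm): regress each variable at time t on all
# variables at time t-1 and keep the strongest lagged predictors as edges.
import numpy as np

def simple_dynamic_network(series, top_k=2):
    """series: (time points x variables) array; returns (parent, child, weight) edges."""
    X, Y = series[:-1], series[1:]                          # lagged predictors and targets
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)      # standardize for comparability
    Y = (Y - Y.mean(axis=0)) / (Y.std(axis=0) + 1e-12)

    edges = []
    for j in range(series.shape[1]):                        # one regression per target variable
        coef, *_ = np.linalg.lstsq(X, Y[:, j], rcond=None)
        for i in np.argsort(np.abs(coef))[::-1][:top_k]:    # strongest lagged predictors
            edges.append((i, j, float(abs(coef[i]))))       # edge weight ~ strength of association
    return edges

# In a real dataset with hundreds of variables, a PCA-based filter (Section 4.5.1)
# could first reduce `series` to a few tens of columns, as discussed above.
```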
Dynamic Bayesian network inference results for our artificial data (Figure 4) reveal central roles for variables with higher levels (E, F, G, and H). One interesting point concerns variable E, which is present at a relatively high level, but with little change over time. A typical statistical analysis (such as one-way analysis of variance) would rule out variable E as being important to the process at hand. However, when one considers that the steady-state levels of a given molecule reflect an underlying process of synthesis and degradation (as well as many other processes related to modification and transport of the biomolecule), it should become apparent why variable E is a central node in the DyBN: in essence, the algorithm assumes that the other variables are affecting, and are affected by, variable E, and hence it is central. This stands in contrast to variable A, which, like variable E, also changes little over time; however, unlike variable E, variable A is present at very low levels, implying that this variable is truly not playing much of a role in the process. The DyBN indicates this possibility by placing variable A as a bottom-most output node.

5. Case Studies of Computational Methods in the Setting of Liver Pathology and Preservation

Now that our review of relevant hepatology is complete, and “omic” data and modeling basics have been introduced, we will use this section to discuss real-world examples of modeling. We have included case studies exploring both ex vivo and in vivo liver investigations. Our inclusion of in vivo work in a perfusion text is deliberate, highlighting the fact that the ultimate purpose of ex vivo experimentation is to improve outcomes in vivo. In the case of the liver, despite the wide-ranging future promise of perfusion (discussed later in this review), it is currently employed mainly in research on graft preservation for transplantation. Thus, we have selected a sample of studies that cover not just perfusion and preservation, but conditions leading to transplant and the response to transplant as well.

5.1. #1: Using Networks as Biomarkers [60]

5.1.1. Background

Pediatric acute liver failure (PALF) is a devastating condition that in some cases can be treated with liver transplantation. Transplantation itself, however, has negative impacts of its own as an irreversible procedure that subjects the child to a lifetime of immunosuppression and the risk of graft rejection. Transplantation, especially in the setting of PALF, can be costly to the community as well if organ allocation is suboptimal. The ability to distinguish among patients who will survive without transplant, die despite transplant, and survive only with transplant would lead to more efficient treatment decisions yielding improved short- and long-term outcomes.
Though most acute liver failure (ALF) research focuses on underlying causes, recent investigations have also explored the role of the immune system in this syndrome, suggesting a role for inflammatory dysregulation. At the basic science level, elevation of soluble interleukin-2 receptor alpha has been observed in PALF, indicating immune system activation [61]. Clinically, PALF patients are at increased risk for bacterial and fungal infections [62], aplastic anemia [63,64], and impaired cell-mediated and humoral immunity [62].

5.1.2. The Problem

In this study, we measured 216 serum samples, taken from 49 PALF patients, for levels of 26 inflammatory mediators individually known to serve as biomarkers for various phases of the inflammatory response. Prior to any modeling, an unsupervised hierarchical clustering analysis found that these values could not be associated with clinical outcomes. These negative initial results are in line with the difficulty others have had identifying biomarkers in liver disease. Potential biomarkers in this field have suffered from low specificity and reproducibility, as well as restricted applicability to only the most severe disease states [65,66,67].

5.1.3. The Solution

Because the search for biomarkers involves identifying a connection between an individual variable and an overall state, we chose to employ PCA. In this study, we created patient-specific PCAs instead of merging data for patients with similar outcomes, which allowed us to keep the model unbiased and retain its predictive value. The result of the patient-specific approach is a set of scores for each variable for each patient, which we term that patient’s “inflammation barcode”. These barcodes were then analyzed using the same unsupervised hierarchical clustering algorithm used on the raw data. Unlike the analysis on raw data, which was unable to effectively separate patients into groups, the analysis on inflammation barcodes separated patients into seven distinct groups. When cross-referenced with outcomes, these groups were found to associate to some extent with different outcomes (Figure 5).
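In outline, this patient-specific “barcode” analysis can be sketched as follows, reusing the pca_variable_scores() helper sketched in Section 4.5.1; the synthetic patient data, sample counts, and clustering settings are purely illustrative and do not reproduce the published analysis.

```python
# Illustrative per-patient "inflammation barcode" workflow (synthetic data).
# Assumes pca_variable_scores() from the Section 4.5.1 sketch is in scope.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
patients = {f"P{k}": rng.random((6, 26)) for k in range(49)}   # serial samples x 26 mediators

# One PCA per patient: the vector of variable scores is that patient's "barcode".
barcodes = np.vstack([pca_variable_scores(samples) for samples in patients.values()])

# Unsupervised hierarchical clustering of the barcodes (seven groups, as in the study).
clusters = fcluster(linkage(barcodes, method="ward"), t=7, criterion="maxclust")
# `clusters` assigns each patient to a group that can then be cross-referenced with outcomes.
```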
Because PCA hinted at inflammatory signatures for different clinical outcomes, and because the sampled variables were known to interact in inflammatory networks, we hypothesized that the underlying mechanism responsible for the observed signatures could be altered networks of inflammation leading to different outcomes. Therefore, we created DyBNs for each outcome group to discern their underlying inflammatory responses. The three resultant networks (Figure 6) suggest major differences between the responses of survivors and non-survivors. Interestingly, the network representing patients who went on to receive liver transplants is quite similar to that of spontaneously surviving patients. An in-depth interpretation of the networks, included in our original publication [60], shows two distinct pathways, with one leading to negative feedback-induced, resolvable inflammation and the other to unresolvable inflammation caused by an unchecked positive feedback loop. The former pathway is dominated by interferon-gamma-inducible protein 10 (IP-10/CXCL10), and the latter by monokine induced by gamma-interferon (MIG). These results show the potential of DyBNs to be used both holistically as a biomarker and specifically as an identifier of central network drivers for pharmaceutical targeting.

5.2. #2: Making Sense of Metabolomics [21]

5.2.1. Background

The current standard of liver preservation, cold static preservation (CSP), in which organs are stored under hypothermic and anoxic conditions, results in a progressive decay of organ quality. This decay negatively impacts graft function after transplant, affecting morbidity and mortality as well as hospital length of stay and cost [68]. One clinical application of organ perfusion research has been to attempt to use this technology to improve graft preservation.
In this study, a liver machine perfusion (MP) preservation protocol was being tested against cold static preservation as a control. Experimental porcine livers were preserved for 9 h at subnormothermic temperature (21 °C) and perfused with a hemoglobin-based oxygen carrier solution (HBOC) through both the hepatic artery and portal vein. Control livers were preserved at 4 °C. After preservation, both sets of organs were transplanted into outbred animals, which were then followed until death or for five days. The aim was to examine the effects of MP from a multidisciplinary perspective. Along with clinical outcomes and markers, histological, transcriptomic, metabolomic, inflammatory, and mitochondrial analyses were performed.

5.2.2. The Problem

The metabolomic analysis of perfusate samples included over 600 metabolites, of which 223 were calculated to differ significantly between groups using traditional statistical methods. Differences in certain metabolites, such as bile salts and products of branched-chain amino acid oxidation, were interpreted based on known physiological pathways and processes. Many other differences, however, were difficult to interpret due to inconsistencies, obscurity, or isolated findings. While this analysis provided a wealth of data, it was clear that alternative means of interpretation would be needed to fully take advantage of these results.

5.2.3. The Solution

Due to the comprehensive nature of this study, where a multitude of analyses were undertaken in parallel to reveal general differences between treatment groups, PCA was selected to compare the primary metabolomic drivers of the response to each preservation protocol in these organs.
The analysis was performed using data from three time points, and the top five drivers were different for each preservation protocol (Table 1). The MP response appeared to be driven by carbohydrate metabolism (ribulose, ribose, glycolate) and antioxidant defenses (oxidized homo-glutathione). The CSP response appeared to be driven by lipid and protein breakdown (ethanolamine, isoleucine, glycerol-3-phosphate, and cysteine) and oxygen starvation (lactate). These findings, though non-specific, fit in well with the results from other analyses in this study, which combine to show that MP livers benefit metabolically from proper oxygenation.

5.3. #3: A Second Look at Inflammation [22]

5.3.1. Background

In the same study as above, the inflammatory analysis was completed by comparing tissue values of inflammatory mediators between treatment groups using traditional statistics. Some mediators were found to be reduced with MP compared to CSP, some were unchanged, and none were elevated (Table 2).

5.3.2. The Problem

Inflammation is more than single mediators acting at single time points; rather, it is a complex and dynamic network of interactions. The traditional statistical analysis undertaken here still has merit in that it can (indistinctly) suggest some sort of reduction in overall inflammation with MP, but it is unable to gain any insight into the underlying dynamic networks. These networks may contain valuable scientific information and potentially non-intuitive therapeutic targets, so obtaining information about their structures is desirable to a wide range of researchers.

5.3.3. The Solution

First, DyBN was used on tissue and perfusate data from all time points to identify the dynamic inflammatory network characterizing the entire experimental time course. Then, DyNA was utilized in order to define the time-dependent progression of these inflammation networks in a more granular fashion over the period of preservation, allowing for a higher resolution depiction of the inflammatory response over time. In addition to evaluating the networks from different temporal perspectives, the combined use of these two methods also took advantage of other key differences: DyBN, via its ability to map self-feedback, can identify potential central nodes, and DyNA is able to differentiate whether interactions are positive or negative.
Network analyses from both sample types suggested an NLRP3 inflammasome-regulated response in both treatment groups. These results were consistent with prior clinical, biochemical, and histological findings, adding further support to the strength of modeling techniques. Inflammasomes are a subset of nucleotide-binding oligomerization domain receptors (NLRs) that, when activated, initiate an IL-1β response via caspase-1 activation [69,70]. Recent research has linked inflammasomes to multiple pathologies, including in the liver [71], and therefore these compounds have emerged as a potential therapeutic target for a variety of diseases [72].
In addition to capturing known interactions, these analyses were also able to identify a potential, novel point in the inflammatory response where MP exerts its effects. As compared to CSP, both DyBN and DyNA suggested a reduced role of IL-18 (whose active form is produced via the NLRP3 inflammasome) and increased role of IL-1RA (an antagonist of IL-1β, which in turn is also produced secondary to the activation of the NLRP3 inflammasome) with MP, along with increased liver damage with CSP. DyNA also suggested divergent progression of responses over the 9 h preservation time, with CSP leading to a stable pattern of IL-18-induced liver damage and MP leading to a resolution of the pro-inflammatory response.
Perhaps the most noteworthy achievement of this analysis was its ability to suggest IL-18 and IL-1RA as key mediators in tissue networks, despite their lack of a statistically significant difference between treatment groups by traditional statistics (Table 3). Perfusate networks also identified IL-18 as a key mediator despite a similar lack of a statistically significant difference. These findings highlight the advantages that alternative methods can often have over traditional statistics.

6. Implications and Future Directions

6.1. Implications for Basic Science

Computational biology tools have the potential to greatly enhance our understanding of basic science principles and relationships. The use of the data-driven approaches described above provides a framework for the identification of novel mechanistic constructs. This, in turn, allows for more traditionally focused experimental design, furthering the field through enhanced foundational knowledge and the development of more robust biological models. An excellent emerging example of this sort of data utilization and modeling is the field of epigenetic transcriptional control of gene expression.
Epigenetic regulation encompasses a broad variety of post-translational modifications to chromatin-associated proteins as well as distinct chemical modifications to DNA that provide heritable, non-genetic modes of gene regulation. This occurs via a variety of different mechanisms, with multiple contributions from distinct metabolic, cell signaling, and microenvironmental processes. The end result of epigenetic control is an integration of these diverse inputs into an activating or silencing event at the transcriptional level, leading to phenotypic changes [73]. This is a complex, multifactorial process ideally suited to both top-down and bottom-up approaches in computational biology.
It is becoming increasingly apparent that perturbations in normal metabolic processes lead to alterations in epigenetic regulation. This may happen via alterations in metabolic intermediate pools from central metabolism as well as via the production of reactive oxygen and nitrogen species involved in inflammatory or cell damage pathways [74]. The expanded use of large-scale metabolomic profiling is starting to allow researchers to interrogate metabolic responses to cellular damage mediated by numerous disease states, including traumatic injury, malignancy, and in post-transplant scenarios. This, coupled with advances in genome-wide mapping of epigenetic marks via techniques such as combined chromatin immunoprecipitation and sequencing (ChIP-seq [75]), enables the development of large datasets encompassing metabolic, epigenetic, and transcriptional events in experimental systems. The machine perfusion studies described in this review present an ideal model for exploring these relationships further, as they enable the straightforward harvest of biological tissue in a controlled experimental environment.
A conceptual model demonstrating the potential basic science applications of this would be as follows. In the routine preservation vs. machine-perfused explanted liver studies, the data collected have already included transcriptional profiling, metabolic profiling, and functional profiling (at the organ biology level). Adding epigenetic profiling through large-data approaches, such as mass spectrometry of isolated histones or ChIP-seq for genome-wide epigenetic landscape information, would enable the use of several of the systems biology approaches previously described. A PLS/PCA-based approach could be utilized to identify metabolic covariance contributing to epigenetic alterations. This could then allow the narrowing of research efforts to focus on the individual metabolic pathways contributing most heavily to epigenetic regulatory processes. In turn, the net effect of these epigenetic pathways on gene expression at both the transcriptional and proteomic levels could be assayed through additional data-driven approaches, for example the analyses of dynamic networks described above, ultimately leading to a novel understanding of the dynamic relationships governing metabolism and its effects on gene expression and graft performance.
At the same time, work is underway to identify the parameters governing epigenetic transcriptional responses, which could in turn inform mechanistic modeling approaches. Recent work by Bintu et al. [74] demonstrated that individual epigenetic marks operate with distinct kinetics, and provided a framework for understanding epigenetic signaling as a series of graded responses that allows for both memory and plasticity. Critically, this establishes a platform for synthesizing experimental models of epigenetic responses that can be expanded as the factors governing epigenetic mechanisms become clearer.
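As a purely illustrative complement, the short simulation below encodes the idea of mark-specific kinetics as first-order silencing and reactivation rates. The rate constants and mark names are invented and are not taken from Bintu et al.; fitting parameters of this general form to time-course data is simply one way such kinetic measurements could seed a mechanistic model.

# Toy first-order silencing/reactivation kinetics; all rate constants are illustrative only.
import numpy as np

def silenced_fraction(t, k_silence, k_react, f0=0.0):
    # Analytic solution of df/dt = k_silence * (1 - f) - k_react * f
    k_tot = k_silence + k_react
    f_ss = k_silence / k_tot                     # steady-state silenced fraction
    return f_ss + (f0 - f_ss) * np.exp(-k_tot * t)

t = np.linspace(0, 10, 50)                       # time in days (hypothetical)
marks = {"mark_A": (1.0, 0.05), "mark_B": (0.2, 0.02)}   # (k_silence, k_react), invented values
for name, (ks, kr) in marks.items():
    print(name, "silenced fraction at day 10:", round(float(silenced_fraction(t, ks, kr)[-1]), 2))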

6.2. Implications for Translational Science

Using current and future basic science results, derived from both quantitative and traditional methods, computational biology tools could help to bridge the gap between laboratory and clinic. Pharmaceutical development, currently a sluggish process riddled with failures, could be particularly impacted [76]. As pathways are better defined with the help of data-driven approaches and eventually modeled mechanistically, improved genome-scale metabolic models (GEMs) would provide the perfect setting for rational drug design. Moreover, this research could include in silico clinical trials to better screen potential drugs for further investment.
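As one deliberately simplified illustration of how a genome-scale metabolic model might support in silico screening, the sketch below uses the open-source COBRApy toolbox to delete each reaction in a model and record the predicted effect on the objective flux. The model file name is a placeholder, and a real drug-target screen would require far more careful curation of constraints and objectives.

# Hedged sketch of an in silico knockout screen with COBRApy; the model path is hypothetical.
import cobra
from cobra.flux_analysis import single_reaction_deletion

model = cobra.io.read_sbml_model("liver_GEM.xml")        # placeholder hepatocyte GEM
baseline = model.optimize().objective_value              # e.g., biomass or ATP-maintenance flux

# Predict the objective flux after deleting each reaction, one at a time.
deletions = single_reaction_deletion(model)
deletions["fraction_of_baseline"] = deletions["growth"] / baseline

# Reactions whose deletion strongly reduces the objective are candidate intervention points.
print(deletions.sort_values("fraction_of_baseline").head(10))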
More immediately, datasets obtained from perfusion experiments on organs afflicted by diseases of interest could be analyzed quantitatively to search for novel biomarkers of those diseases. The growing power of quantitative tools, combined with the improved data clarity afforded by organs in isolation, would likely yield results of greater quantity and quality than traditional investigations alone. These diseased organs could also be used to begin mapping the “omic” signatures of different pathologies, which would aid the creation of more accurate disease-, organ-, and patient-specific models for a variety of future uses.
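A hypothetical example of such a biomarker search is sketched below; the feature matrix, outcome labels, and the choice of a random-forest ranking are assumptions made for illustration rather than a description of any published analysis.

# Hedged sketch: ranking candidate biomarkers from hypothetical perfusate measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_organs, n_analytes = 40, 60                        # hypothetical study dimensions
X = rng.normal(size=(n_organs, n_analytes))          # perfusate mediator/metabolite levels
y = rng.integers(0, 2, size=n_organs)                # disease present vs. absent (hypothetical labels)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("Cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 2))

# Rank analytes by importance; top-ranked analytes become candidate biomarkers for validation.
importances = clf.fit(X, y).feature_importances_
print("Top candidate analyte indices:", np.argsort(importances)[::-1][:10])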

6.3. Implications for Clinical Science

At the clinical level, the most immediate application of computational biology tools for ex vivo data analysis is, not surprisingly, in the field of transplantation. As seen in the case studies, investigations of this type are already underway in an effort to develop improved organ preservation protocols, technologies, and solutions. While current research focuses on the responses of organs to preservation itself, an attainable short-term goal would be to begin to predict transplant outcomes based on these responses. Given the plethora of potential biomarkers available, the continued use of quantitative tools will likely play a key role in this effort. Eventually, standardized tests could be developed to determine whether to transplant or discard donor organs of borderline quality.
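One possible form such outcome prediction could take is sketched below, with all data, model choices, and thresholds invented for illustration; in practice, any decision rule for accepting or discarding borderline grafts would require prospective validation.

# Hedged sketch: cross-validated prediction of post-transplant outcome from perfusion features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 25))                    # hypothetical per-graft perfusion biomarkers
y = rng.integers(0, 2, size=50)                  # hypothetical early graft dysfunction labels

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
prob = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("Cross-validated AUC:", round(roc_auc_score(y, prob), 2))

# An illustrative (not recommended) decision threshold for borderline grafts:
accept = prob < 0.3
print("Grafts accepted under the illustrative threshold:", int(accept.sum()))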
Outside the field of transplantation, leveraging advances in preservation to create stable ex vivo environments for the study of organs in isolation could have widespread use. Investigators studying a variety of diseases would gain the ability to sample, image, and test interventions on organs in the laboratory far more extensively than is possible in patients. Though organ availability would certainly be limited, the very diseases of interest are often those that currently preclude transplantation, meaning that discarded organs could supply this work. In the long term, as preservation continues to improve and as diseases that currently require organs to be discarded are cured or mitigated, donor grafts initially unfit for transplantation could be treated ex vivo until sufficiently healthy for implantation.
All of these potential approaches require significant infrastructure investments in order to be implemented on any meaningful scale. One strategy to achieve this has been to design “Organ Intensive Care Units”. These laboratories, already in the planning and implementation stages at certain institutions, house numerous preservation systems, along with the necessary technology and staffing to track organ function and perform all desired sampling and testing. We suggest that incorporating extensive “omic” analyses with computational modeling in this Organ Intensive Care setting could be transformative for the transplantation field, and look forward to participating in that endeavor.

Acknowledgments

The liver perfusion experiments were sponsored by a charitable grant from the Virginia Garcia de Souza (VGS) Foundation, Sao Paulo, Brazil.

Author Contributions

Original figures were created by R.Z. and D.S. All authors contributed to the writing of this article. All authors have read and approved the final manuscript.

Conflicts of Interest

Yoram Vodovotz is a co-founder and stakeholder in Immunetrics, Inc., Pittsburgh, PA, USA; this relationship is managed by the University of Pittsburgh under all applicable institutional and federal guidelines. Paulo Fontes is co-founder and Chairman of the Scientific Advisory Board, VirTech Bio Inc., Natick, MA, USA.

References

  1. Rui, L. Energy metabolism in the liver. Compr. Physiol. 2014, 4, 177–197. [Google Scholar] [PubMed]
  2. Berg, J.M.; Tymoczko, J.L.; Stryer, L. Biochemistry, 5th ed.; W.H. Freeman: New York, NY, USA, 2002. [Google Scholar]
  3. Kuntz, E.; Kuntz, H. Biochemistry and Functions of the Liver; Springer: Berlin, Germany, 2008; pp. 35–76. [Google Scholar]
  4. Boyer, J.L. Bile formation and secretion. Compr. Physiol. 2013, 3, 1035–1078. [Google Scholar] [PubMed]
  5. Liver Transplantation. Available online: http://www.niddk.nih.gov/health-information/health-topics/liver-disease/liver-transplant/Pages/facts.aspx (accessed on 18 July 2015).
  6. Setiawan, V.W.; Stram, D.O.; Porcel, J.; Lu, S.C.; Le Marchand, L.; Noureddin, M. Prevalence of chronic liver disease and cirrhosis by underlying cause in understudied ethnic groups: The multiethnic cohort. Hepatology 2016, 64, 1969–1977. [Google Scholar] [CrossRef] [PubMed]
  7. Zarrinpar, A.; Busuttil, R.W. Liver transplantation: Past, present and future. Nat. Rev. Gastroenterol. Hepatol. 2013, 10, 434–440. [Google Scholar] [CrossRef] [PubMed]
  8. Singal, A.K.; Guturu, P.; Hmoud, B.; Kuo, Y.F.; Salameh, H.; Wiesner, R.H. Evolving frequency and outcomes of liver transplantation based on etiology of liver disease. Transplantation 2013, 95, 755–760. [Google Scholar] [CrossRef] [PubMed]
  9. Transplants in the U.S. By State. Available online: http://optn.transplant.hrsa.gov/converge/latestData/rptData.asp (accessed on 18 July 2015).
  10. Liou, I.W.; Larson, A.M. Role of Liver Transplantation in Acute Liver Failure. Available online: http://www.medscape.com/viewarticle/584467_4 (accessed on 20 July 2015).
  11. Starzl, T.E. History of clinical transplantation. World J. Surg. 2000, 24, 759–782. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Rocha-Santos, V.; Nacif, L.S.; Pinheiro, R.S.; Ducatti, L.; Andraus, W.; D’Alburquerque, L.C. Simplified technique for auxiliary orthotopic liver transplantation using a whole graft. Arquivos Brasileiros de Cirurgia Digestiva ABCD (Braz. Arch. Dig. Surg.) 2015, 28, 136–138. [Google Scholar] [CrossRef] [PubMed]
  13. Starzl, T.E. The saga of liver replacement, with particular reference to the reciprocal influence of liver and kidney transplantation (1955–1967). J. Am. Coll. Surg. 2002, 195, 587–610. [Google Scholar] [CrossRef] [Green Version]
  14. Van Raemdonck, D.; Neyrinck, A.; Rega, F.; Devos, T.; Pirenne, J. Machine perfusion in organ transplantation: A tool for ex-vivo graft conditioning with mesenchymal stem cells? Curr. Opin. Organ Transplant. 2013, 18, 24–33. [Google Scholar] [CrossRef] [PubMed]
  15. Bonnel, A.R.; Bunchorntavakul, C.; Reddy, K.R. Immune dysfunction and infections in patients with cirrhosis. Clin. Gastroenterol. Hepatol. 2011, 9, 727–738. [Google Scholar] [CrossRef] [PubMed]
  16. Xia, W.; Ke, Q.; Wang, Y.; Feng, X.; Guo, H.; Wang, W.; Zhang, M.; Shen, Y.; Wu, J.; Xu, X.; et al. Donation after cardiac death liver transplantation: Graft quality evaluation based on pretransplant liver biopsy. Liver Transplant. 2015, 21, 838–846. [Google Scholar] [CrossRef] [PubMed]
  17. Halldorson, J.B.; Bakthavatsalam, R.; Montenovo, M.; Dick, A.; Rayhill, S.; Perkins, J.; Reyes, J. Differential rates of ischemic cholangiopathy and graft survival associated with induction therapy in DCD liver transplantation. Am. J. Transplant. 2015, 15, 251–258. [Google Scholar] [CrossRef] [PubMed]
  18. Molmenti, E.P.; Netto, G.J.; Murray, N.G.; Smith, D.M.; Molmenti, H.; Crippin, J.S.; Hoover, T.C.; Jung, G.; Marubashi, S.; Sanchez, E.Q.; et al. Incidence and recurrence of autoimmune/alloimmune hepatitis in liver transplant recipients. Liver Transplant. 2002, 8, 519–526. [Google Scholar] [CrossRef] [PubMed]
  19. Latchana, N.; Peck, J.R.; Whitson, B.A.; Henry, M.L.; Elkhammas, E.A.; Black, S.M. Preservation solutions used during abdominal transplantation: Current status and outcomes. World J. Transplant. 2015, 5, 154–164. [Google Scholar] [CrossRef] [PubMed]
  20. Zhai, Y.; Petrowsky, H.; Hong, J.C.; Busuttil, R.W.; Kupiec-Weglinski, J.W. Ischaemia-reperfusion injury in liver transplantation—From bench to bedside. Nat. Rev. Gastroenterol. Hepatol. 2013, 10, 79–89. [Google Scholar] [CrossRef] [PubMed]
  21. Fontes, P.; Lopez, R.; van der Plaats, A.; Vodovotz, Y.; Minervini, M.; Scott, V.; Soltys, K.; Shiva, S.; Paranjpe, S.; Sadowsky, D.; et al. Liver preservation with machine perfusion and a newly developed cell-free oxygen carrier solution under subnormothermic conditions. Am. J. Transplant. 2015, 15, 381–394. [Google Scholar] [CrossRef] [PubMed]
  22. Sadowsky, D.; Zamora, R.; Barclay, D.; Yin, J.; Fontes, P.; Vodovotz, Y. Machine perfusion of porcine livers with oxygen-carrying solution results in reprogramming of dynamic inflammation networks. Front. Pharmacol. 2016, 7, 413. [Google Scholar] [CrossRef] [PubMed]
  23. Land, M.; Hauser, L.; Jun, S.R.; Nookaew, I.; Leuze, M.R.; Ahn, T.H.; Karpinets, T.; Lund, O.; Kora, G.; Wassenaar, T.; et al. Insights from 20 years of bacterial genome sequencing. Funct. Integr. Genom. 2015, 15, 141–161. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Van El, C.G.; Cornel, M.C.; Borry, P.; Hastings, R.J.; Fellmann, F.; Hodgson, S.V.; Howard, H.C.; Cambon-Thomsen, A.; Knoppers, B.M.; Meijers-Heijboer, H.; et al. Whole-genome sequencing in health care: Recommendations of the European Society of Human Genetics. Eur. J. Hum. Genet. 2013, 21, 580–584. [Google Scholar] [CrossRef] [PubMed]
  25. Katagiri, F.; Glazebrook, J. Overview of mRNA expression profiling using DNA microarrays. In Current Protocols in Molecular Biology; Ausubel, F.M., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  26. Bakalarski, C.E.; Kirkpatrick, D.S. A biologist’s field guide to multiplexed quantitative proteomics. Mol. Cell. Proteom. 2016, 15, 1489–1497. [Google Scholar] [CrossRef] [PubMed]
  27. Horgan, R.P.; Kenny, L.C. ‘Omic’ technologies: Genomics, transcriptomics, proteomics and metabolomics. Obs. Gynaecol. 2011, 13, 189–195. [Google Scholar] [CrossRef]
  28. Gowda, G.A.; Zhang, S.; Gu, H.; Asiago, V.; Shanaiah, N.; Raftery, D. Metabolomics-based methods for early disease diagnostics. Expert Rev. Mol. Diagn. 2008, 8, 617–633. [Google Scholar] [CrossRef] [PubMed]
  29. NIH HMP Working Group; Peterson, J.; Garges, S.; Giovanni, M.; McInnes, P.; Wang, L.; Schloss, J.A.; Bonazzi, V.; McEwen, J.E.; Wetterstrand, K.A.; et al. The NIH Human Microbiome Project. Genome Res. 2009, 19, 2317–2323. [Google Scholar] [CrossRef] [PubMed]
  30. Martin, R.; Miquel, S.; Langella, P.; Bermudez-Humaran, L.G. The role of metagenomics in understanding the human microbiome in health and disease. Virulence 2014, 5, 413–423. [Google Scholar] [CrossRef] [PubMed]
  31. Wylie, K.M.; Weinstock, G.M.; Storch, G.A. Emerging view of the human virome. Transl. Res. J. Lab. Clin. Med. 2012, 160, 283–290. [Google Scholar] [CrossRef] [PubMed]
  32. Nardini, C.; Dent, J.; Tieri, P. Editorial: Multi-omic data integration. Front. Cell Dev. Biol. 2015, 3, 46. [Google Scholar] [CrossRef] [PubMed]
  33. Aksenov, S.V.; Church, B.; Dhiman, A.; Georgieva, A.; Sarangapani, R.; Helmlinger, G.; Khalil, I.G. An integrated approach for inference and mechanistic modeling for advancing drug development. FEBS Lett. 2005, 579, 1878–1883. [Google Scholar] [CrossRef] [PubMed]
  34. Ellner, S.P.; Guckenheimer, J. Dynamic Models in Biology; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
  35. An, G.; Nieman, G.; Vodovotz, Y. Computational and systems biology in trauma and sepsis: Current state and future perspectives. Int. J. Burns Trauma 2012, 2, 1–10. [Google Scholar] [PubMed]
  36. Vodovotz, Y.; Billiar, T.R. In silico modeling: Methods and applications to trauma and sepsis. Crit. Care Med. 2013, 41, 2008–2014. [Google Scholar] [CrossRef] [PubMed]
  37. Vodovotz, Y.; An, G. Systems biology and inflammation. In Systems Biology in Drug Discovery and Development: Methods and Protocols; Yan, Q., Ed.; Springer Science & Business Media: New York, NY, USA, 2009; pp. 181–201. [Google Scholar]
  38. Aerts, J.M.; Haddad, W.M.; An, G.; Vodovotz, Y. From data patterns to mechanistic models in acute critical illness. J. Crit. Care 2014, 29, 604–610. [Google Scholar] [CrossRef] [PubMed]
  39. Namas, R.A.; Mi, Q.; Namas, R.; Almahmoud, K.; Zaaqoq, A.M.; Abdul-Malak, O.; Azhar, N.; Day, J.; Abboud, A.; Zamora, R.; et al. Insights into the role of chemokines, damage-associated molecular patterns, and lymphocyte-derived mediators from computational models of trauma-induced inflammation. Antioxid. Redox Signal. 2015, 23, 1370–1387. [Google Scholar] [CrossRef] [PubMed]
  40. Vodovotz, Y. Translational System Biology; Elsevier: Boston, MA, USA, 2014. [Google Scholar]
  41. Vodovotz, Y.; An, G. Complex Systems and Computational Biology Approaches to Acute Inflammation; Springer: New York, NY, USA, 2013. [Google Scholar]
  42. Janes, K.A.; Yaffe, M.B. Data-driven modelling of signal-transduction networks. Nat. Rev. Mol. Cell Biol. 2006, 7, 820–828. [Google Scholar] [CrossRef] [PubMed]
  43. Sadowsky, D.; Nieman, G.; Barclay, D.; Mi, Q.; Zamora, R.; Constantine, G.; Golub, L.; Lee, H.M.; Roy, S.; Gatto, L.A.; et al. Impact of chemically-modified tetracycline 3 on intertwined physiological, biochemical, and inflammatory networks in porcine sepsis/ARDS. Int. J. Burns Trauma 2015, 5, 22–35. [Google Scholar] [PubMed]
  44. Folkerts, U.; Nagel, D.; Vogt, W. The use of cluster analysis in clinical chemical diagnosis of liver diseases. J. Clin. Chem. Clin. Biochem. Z. Klinische Chem. Klinische Biochem. 1990, 28, 399–406. [Google Scholar] [CrossRef]
  45. Gelson, W.; Hoare, M.; Unitt, E.; Palmer, C.; Gibbs, P.; Coleman, N.; Davies, S.; Alexander, G.J. Heterogeneous inflammatory changes in liver graft recipients with normal biochemistry. Transplantation 2010, 89, 739–748. [Google Scholar] [CrossRef] [PubMed]
  46. Zhang, H.; Li, Q.Y.; Guo, Z.Z.; Guan, Y.; Du, J.; Lu, Y.Y.; Hu, Y.Y.; Liu, P.; Huang, S.; Su, S.B. Serum levels of microRNAs can specifically predict liver injury of chronic hepatitis B. World J. Gastroenterol. 2012, 18, 5188–5196. [Google Scholar] [PubMed]
  47. Rachakonda, V.; Gabbert, C.; Raina, A.; Bell, L.N.; Cooper, S.; Malik, S.; Behari, J. Serum metabolomic profiling in acute alcoholic hepatitis identifies multiple dysregulated pathways. PLoS ONE 2014, 9, e113860. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Zhou, W.; Ma, Y.; Zhang, J.; Hu, J.; Zhang, M.; Wang, Y.; Li, Y.; Wu, L.; Pan, Y.; Zhang, Y.; et al. Predictive model for inflammation grades of chronic hepatitis B: Large-scale analysis of clinical parameters and gene expressions. Liver Int. 2017, 37, 1632–1641. [Google Scholar] [CrossRef] [PubMed]
  49. Abdi, H. Partial least squares regression (PLS-regression). In Encyclopedia for Research Methods for the Social Sciences; Lewis-Beck, M., Bryman, A., Futing, T., Eds.; Sage: Thousand Oaks, CA, USA, 2003; pp. 792–795. [Google Scholar]
  50. Oh, H.Y.; Shin, S.K.; Heo, H.S.; Ahn, J.S.; Kwon, E.Y.; Park, J.H.; Cho, Y.Y.; Park, H.J.; Lee, M.K.; Kim, E.J.; et al. Time-dependent network analysis reveals molecular targets underlying the development of diet-induced obesity and non-alcoholic steatohepatitis. Genes Nutr. 2013, 8, 301–316. [Google Scholar] [CrossRef] [PubMed]
  51. Mi, Q.; Constantine, G.; Ziraldo, C.; Solovyev, A.; Torres, A.; Namas, R.; Bentley, T.; Billiar, T.R.; Zamora, R.; Puyana, J.C.; et al. A dynamic view of trauma/hemorrhage-induced inflammation in mice: Principal drivers and networks. PLoS ONE 2011, 6, e19424. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Ziraldo, C.; Vodovotz, Y.; Namas, R.A.; Almahmoud, K.; Tapias, V.; Mi, Q.; Barclay, D.; Jefferson, B.S.; Chen, G.; Billiar, T.R.; et al. Central role for MCP-1/CCL2 in injury-induced inflammation revealed by in vitro, in silico, and clinical studies. PLoS ONE 2013, 8, e79804. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Namas, R.A.; Vodovotz, Y.; Almahmoud, K.; Abdul-Malak, O.; Zaaqoq, A.; Namas, R.; Mi, Q.; Barclay, D.; Zuckerbraun, B.; Peitzman, A.B.; et al. Temporal patterns of circulating inflammation biomarker networks differentiate susceptibility to nosocomial infection following blunt trauma in humans. Ann. Surg. 2016, 263, 191–198. [Google Scholar] [CrossRef] [PubMed]
  54. Abboud, A.; Namas, R.A.; Ramadan, M.; Mi, Q.; Almahmoud, K.; Abdul-Malak, O.; Azhar, N.; Zaaqoq, A.; Namas, R.; Barclay, D.A.; et al. Computational analysis supports an early, type 17 cell-associated divergence of blunt trauma survival and mortality. Crit. Care Med. 2016, 44, e1074–e1081. [Google Scholar] [CrossRef] [PubMed]
  55. Zamora, R.; Ravuri, S.K.; Plock, J.A.; Vodovotz, Y.; Gorantla, V.S. Differential inflammatory networks distinguish responses to bone marrow-derived versus adipose-derived mesenchymal stem cell therapies in vascularized composite allotransplantation. J. Trauma Acute Care Surg. 2017, 83, S50–S58. [Google Scholar] [CrossRef] [PubMed]
  56. Abdul-Malak, O.; Vodovotz, Y.; Zaaqoq, A.; Almahmoud, K.; Peitzman, A.; Sperry, J.; Billiar, T.R.; Namas, R.A. Elevated admission base deficit is associated with a distinct and more complex network of systemic inflammation in blunt trauma patients. Mediat. Inflamm. 2016, in press. [Google Scholar] [CrossRef] [PubMed]
  57. Zamora, R.; Vodovotz, Y.; Mi, Q.; Barclay, D.; Yin, J.; Horslen, S.; Rudnick, D.; Loomes, K.; Squires, R.H. Data-driven modeling for precision medicine in pediatric acute liver failure. Mol. Med. 2016, in press. [Google Scholar] [CrossRef] [PubMed]
  58. Assenov, Y.; Ramirez, F.; Schelhorn, S.E.; Lengauer, T.; Albrecht, M. Computing topological parameters of biological networks. Bioinformatics 2008, 24, 282–284. [Google Scholar] [CrossRef] [PubMed]
  59. Grzegorczyk, M.; Husmeier, D. Improvements in the reconstruction of time-varying gene regulatory networks: Dynamic programming and regularization by information sharing among genes. Bioinformatics 2011, 27, 693–699. [Google Scholar] [CrossRef] [PubMed]
  60. Azhar, N.; Ziraldo, C.; Barclay, D.; Rudnick, D.A.; Squires, R.H.; Vodovotz, Y.; Pediatric Acute Liver Failure Study Group. Analysis of serum inflammatory mediators identifies unique dynamic networks associated with death and spontaneous survival in pediatric acute liver failure. PLoS ONE 2013, 8, e78202. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Bucuvalas, J.; Filipovich, L.; Yazigi, N.; Narkewicz, M.R.; Ng, V.; Belle, S.H.; Zhang, S.; Squires, R.H. Immunophenotype predicts outcome in pediatric acute liver failure. J. Pediatr. Gastroenterol. Nutr. 2013, 56, 311–315. [Google Scholar] [CrossRef] [PubMed]
  62. Lee, W.M. Acute liver failure. N. Engl. J. Med. 1993, 329, 1862–1872. [Google Scholar] [CrossRef] [PubMed]
  63. Brown, K.E.; Tisdale, J.; Barrett, A.J.; Dunbar, C.E.; Young, N.S. Hepatitis-associated aplastic anemia. N. Engl. J. Med. 1997, 336, 1059–1064. [Google Scholar] [CrossRef] [PubMed]
  64. Rolando, N.; Harvey, F.; Brahm, J.; Philpott-Howard, J.; Alexander, G.; Gimson, A.; Casewell, M.; Fagan, E.; Williams, R. Prospective study of bacterial infection in acute liver failure: An analysis of fifty patients. Hepatology 1990, 11, 49–53. [Google Scholar] [CrossRef] [PubMed]
  65. Poynard, T.; Morra, R.; Ingiliz, P.; Imbert-Bismut, F.; Thabut, D.; Messous, D.; Munteanu, M.; Massard, J.; Benhamou, Y.; Ratziu, V. Biomarkers of liver fibrosis. Adv. Clin. Chem. 2008, 46, 131–160. [Google Scholar] [PubMed]
  66. Hammerich, L.; Heymann, F.; Tacke, F. Role of IL-17 and Th17 cells in liver diseases. Clin. Dev. Immunol. 2011, 2011, 345803. [Google Scholar] [CrossRef] [PubMed]
  67. Sattar, N. Biomarkers for diabetes prediction, pathogenesis or pharmacotherapy guidance? Past, present and future possibilities. Diabet. Med. A J. Br. Diabet. Assoc. 2012, 29, 5–13. [Google Scholar] [CrossRef] [PubMed]
  68. Jay, C.; Ladner, D.; Wang, E.; Lyuksemburg, V.; Kang, R.; Chang, Y.; Feinglass, J.; Holl, J.L.; Abecassis, M.; Skaro, A.I. A comprehensive risk assessment of mortality following donation after cardiac death liver transplant—An analysis of the national registry. J. Hepatol. 2011, 55, 808–813. [Google Scholar] [CrossRef] [PubMed]
  69. Davis, B.K.; Wen, H.; Ting, J.P. The inflammasome NLRs in immunity, inflammation, and associated diseases. Annu. Rev. Immunol. 2011, 29, 707–735. [Google Scholar] [CrossRef] [PubMed]
  70. Conforti-Andreoni, C.; Ricciardi-Castagnoli, P.; Mortellaro, A. The inflammasomes in health and disease: From genetics to molecular mechanisms of autoinflammation and beyond. Cell. Mol. Immunol. 2011, 8, 135–145. [Google Scholar] [CrossRef] [PubMed]
  71. Tilg, H.; Moschen, A.R.; Szabo, G. Interleukin-1 and inflammasomes in ALD/AAH and NAFLD/NASH. Hepatology 2016, 64, 955–965. [Google Scholar] [CrossRef] [PubMed]
  72. Menu, P.; Vince, J.E. The NLRP3 inflammasome in health and disease: The good, the bad and the ugly. Clin. Exp. Immunol. 2011, 166, 1–15. [Google Scholar] [CrossRef] [PubMed]
  73. Cyr, A.R.; Domann, F.E. The redox basis of epigenetic modifications: From mechanisms to functional consequences. Antioxid. Redox Signal. 2011, 15, 551–589. [Google Scholar] [CrossRef] [PubMed]
  74. Bintu, L.; Yong, J.; Antebi, Y.E.; McCue, K.; Kazuki, Y.; Uno, N.; Oshimura, M.; Elowitz, M.B. Dynamics of epigenetic regulation at the single-cell level. Science 2016, 351, 720–724. [Google Scholar] [CrossRef] [PubMed]
  75. Park, P.J. ChIP-seq: Advantages and challenges of a maturing technology. Nat. Rev. Genet. 2009, 10, 669–680. [Google Scholar] [CrossRef] [PubMed]
  76. An, G.; Bartels, J.; Vodovotz, Y. In silico augmentation of the drug development pipeline: Examples from the study of acute inflammation. Drug Dev. Res. 2011, 72, 1–14. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Modeling Methods Example Data: Artificially generated data representing variables of varying levels (high or low) and direction (increasing, decreasing, or non-linear).
Figure 2. PCA Using Example Data: The PCA algorithm was performed on the example dataset, including data from all times, and was set to include enough components to capture at least 70% variance. Results show that the two low variables exhibiting directional changes (C, B) are the primary drivers, followed closely by the two high variables that exhibit directional change (G, F). The pair of non-linear variables (H, D) is next, followed by the non-changing pair (A, E) ranked last. Note the difference in component distribution between directionally changing and non-linear variables, demonstrating PCA’s ability to separate variables by category.
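For readers who wish to apply this type of analysis to their own data, a minimal sketch of the variance-threshold PCA is shown below. The synthetic eight-variable time course only mimics the structure of the example data, and the scaling and ranking choices are assumptions rather than the exact procedure used to generate Figure 2.

# Minimal PCA sketch: retain enough components to capture >= 70% of the variance,
# then score each variable by its summed absolute loadings. All data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
t = np.linspace(0, 24, 9)                                   # hypothetical time points (h)
data = np.column_stack([                                    # eight synthetic variables A-H
    np.full_like(t, 1.0), 10 - 0.3 * t, 2 + 0.1 * t, np.sin(t / 4.0),
    np.full_like(t, 50.0), 100 - 3 * t, 20 + 2 * t, 40 * np.exp(-((t - 12) ** 2) / 20.0),
]) + rng.normal(scale=0.1, size=(t.size, 8))

X = StandardScaler().fit_transform(data)                    # unit-variance scaling (one possible choice)
pca = PCA(n_components=0.70).fit(X)                         # smallest k capturing >= 70% variance

scores = np.abs(pca.components_).sum(axis=0)                # per-variable contribution across components
for name, s in sorted(zip("ABCDEFGH", scores), key=lambda pair: -pair[1]):
    print(name, round(float(s), 2))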
Figure 3. DyNA using example data: The DyNA algorithm was run for two different time intervals (0–4 h, 4–24 h) and two different correlation thresholds (0.7, 0.95). In all networks, the algorithm links variables that exhibit both positive (black) and negative (red) correlations in a given time interval. Variables that do not exhibit change (A and E) are correctly left unlinked to the other, changing variables by the algorithm.
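A bare-bones version of this interval-wise correlation networking can be sketched as follows; the data are synthetic, and the published DyNA implementation may differ in its normalization and statistical filtering.

# Sketch of DyNA-style networks: within each time interval, connect variable pairs whose
# Pearson correlation across pooled samples exceeds a threshold. Data and intervals are synthetic.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
times = np.array([0, 1, 2, 4, 8, 24])                       # hypothetical sampling times (h)
n_samples, n_vars = 6, 8
data = rng.normal(size=(n_samples, times.size, n_vars)).cumsum(axis=1)   # synthetic mediator levels

def dyna_edges(data, times, t_lo, t_hi, threshold):
    """Return (i, j, r) edges whose |Pearson r| >= threshold within [t_lo, t_hi]."""
    mask = (times >= t_lo) & (times <= t_hi)
    window = data[:, mask, :].reshape(-1, data.shape[2])     # pool samples x time points
    corr = np.corrcoef(window, rowvar=False)
    return [(i, j, corr[i, j]) for i, j in combinations(range(data.shape[2]), 2)
            if abs(corr[i, j]) >= threshold]

for lo, hi in [(0, 4), (4, 24)]:
    edges = dyna_edges(data, times, lo, hi, threshold=0.7)
    print(f"{lo}-{hi} h: {len(edges)} edges (positive r -> black, negative r -> red)")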
Figure 4. DyBN using example data: The DyBN algorithm was performed on the example dataset, including data from all times, resulting in a single network representing the entire time course. This network features variables with higher levels (E, F, G, H) in central roles.
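Full dynamic Bayesian network inference is beyond the scope of a short example, but the simplified sketch below conveys the underlying idea of scoring time-lagged parent-child relationships: it selects a single best time-t predictor for each variable at time t + 1 by BIC, and is only a conceptual stand-in for the DyBN algorithm used to generate Figure 4.

# Greatly simplified DyBN-style sketch: for each variable at time t+1, pick the single time-t
# parent whose linear fit has the lowest BIC. All data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_times, n_vars = 6, 9, 8
data = rng.normal(size=(n_samples, n_times, n_vars)).cumsum(axis=1)   # synthetic time courses

X_t = data[:, :-1, :].reshape(-1, n_vars)         # values at time t
X_t1 = data[:, 1:, :].reshape(-1, n_vars)         # values at time t+1
n_obs = X_t.shape[0]

def bic_linear(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    rss = np.sum((y - (slope * x + intercept)) ** 2)
    return n_obs * np.log(rss / n_obs) + 2 * np.log(n_obs)   # two fitted parameters

for child in range(n_vars):
    bics = [bic_linear(X_t[:, parent], X_t1[:, child]) for parent in range(n_vars)]
    print(f"variable {child}: best single time-t parent = variable {int(np.argmin(bics))}")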
Figure 5. Patient-specific PCA compared to outcomes: Patient-specific PCA was performed, instead of combining data from patients by group. The resulting set of individual results was analyzed using an unsupervised hierarchical clustering algorithm, which was able to separate patients into groups that somewhat associated with different outcomes.
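A minimal sketch of this patient-specific PCA plus unsupervised clustering workflow is given below; the data are synthetic, and summarizing each patient by the loadings of the first principal component is an assumption made for illustration.

# Sketch of the Figure 5 workflow: run PCA separately on each patient's time course, summarize
# each patient by the leading-component loadings, then cluster patients. All data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_patients, n_times, n_mediators = 10, 7, 12
series = rng.normal(size=(n_patients, n_times, n_mediators)).cumsum(axis=1)

signatures = []
for patient in series:
    X = StandardScaler().fit_transform(patient)              # per-patient scaling
    pc1 = PCA(n_components=1).fit(X).components_[0]          # leading-component loadings
    signatures.append(np.abs(pc1))                           # patient-specific mediator signature

Z = linkage(np.array(signatures), method="ward")             # unsupervised hierarchical clustering
groups = fcluster(Z, t=2, criterion="maxclust")              # e.g., split patients into two clusters
print("Cluster assignment per patient:", groups)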
Figure 6. Pediatric acute liver failure (PALF) outcome group DyBN: When performed on each patient group (spontaneous survivors, non-survivors, transplant recipients), DyBN reveals two distinct pathways: one leading to negative feedback-induced resolvable inflammation, and the other to unresolvable inflammation caused by an unchecked positive feedback loop. The network representing patients who went on to receive liver transplants is quite similar to that of spontaneously surviving patients.
Table 1. Metabolomic PCA Comparing Machine Perfusion (MP) and Cold Static Preservation (CSP) *.
Rank | CSP | MP
1 | Ethanolamine | Ribulose
2 | Isoleucine | Ribose
3 | Glycerol-3-Phosphate | GSSG
4 | Cysteine | Glycolate (OH-acetate)
5 | Lactate | Xylonate
* Of hundreds of metabolites analyzed by PCA, top drivers in the MP response were representative of carbohydrate metabolism (ribulose, ribose, glycolate) and antioxidant defenses (oxidized homo-glutathione). The CSP response appeared to be driven by lipid and protein breakdown (ethanolamine, isoleucine, glycerol-3-phosphate, and cysteine) and oxygen starvation (lactate).
Table 2. Traditional Statistical Analyses of MP vs. CSP Inflammatory Data *.
Mediator | Significant? (p Value)
IFN-α | 0.001
TNF-α | 0.032
IFN-γ | 0.022
IL-4 | 0.021
IL-1β | <0.001
IL-12/IL-23 (p40) | <0.001
IL-10 | No
IL-6 | No
IL-8 | No
GM-CSF | No
IL-1α | No
IL-1RA | No
IL-2 | No
IL-18 | No
* Two-way ANOVA on data derived from tissue samples. All variables measured were either not significantly different, or lower in MP.
Table 3. Traditional Statistical Analyses of IL-18 and IL-1RA in MP vs. CSP *.
Sample Type | Cytokine | Protocol | Mean ± SEM, pg/mL | p Value
Perfusate | IL-18 | CSP | 738 ± 111 | 0.299
Perfusate | IL-18 | MP | 932 ± 155 |
Perfusate | IL-1RA | CSP | 230 ± 34 | 0.005
Perfusate | IL-1RA | MP | 7317 ± 1953 |
Tissue | IL-18 | CSP | 1600 ± 153 | 0.839
Tissue | IL-18 | MP | 1544 ± 243 |
Tissue | IL-1RA | CSP | 2478 ± 270 | 0.539
Tissue | IL-1RA | MP | 2733 ± 324 |
* Two-way analysis of variance (ANOVA) on data derived from tissue and perfusate samples. Despite the lack of statistically significant differences in three of four traditional analyses, DyNA and DyBN were able to identify these two variables as key determinants in the overall inflammatory response to organ preservation.
