Article

A Language for Modeling and Optimizing Experimental Biological Protocols

Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
*
Author to whom correspondence should be addressed.
Current address: Delft Center for Systems and Control (DCSC), TU Delft, 2628 CN Delft, The Netherlands.
Computation 2021, 9(10), 107; https://doi.org/10.3390/computation9100107
Received: 26 August 2021 / Revised: 12 October 2021 / Accepted: 12 October 2021 / Published: 16 October 2021
(This article belongs to the Special Issue Formal Method for Biological Systems Modelling)

Abstract

Automation is becoming ubiquitous in all laboratory activities, moving towards precisely defined and codified laboratory protocols. However, the integration between laboratory protocols and mathematical models is still lacking. Models describe physical processes, while protocols define the steps carried out during an experiment: neither cover the domain of the other, although they both attempt to characterize the same phenomena. We should ideally start from an integrated description of both the model and the steps carried out to test it, to concurrently analyze uncertainties in model parameters, equipment tolerances, and data collection. To this end, we present a language to model and optimize experimental biochemical protocols that facilitates such an integrated description, and that can be combined with experimental data. We provide probabilistic semantics for our language in terms of Gaussian processes (GPs) based on the linear noise approximation (LNA) that formally characterizes the uncertainties in the data collection, the underlying model, and the protocol operations. In a set of case studies, we illustrate how the resulting framework allows for automated analysis and optimization of experimental protocols, including Gibson assembly protocols.

1. Introduction

Automation is becoming ubiquitous in all laboratory activities: protocols are run under reproducible and auditable software control, data are collected by high-throughput machinery, experiments are automatically analyzed, and further experiments are selected to maximize knowledge acquisition. However, while progress is being made towards the routine integration of sophisticated end-to-end laboratory workflows and towards the remote access to laboratory facilities and procedures [1,2,3,4,5], the integration between laboratory protocols and mathematical models is still lacking. Models describe physical processes, either mechanistically or by inference from data, while protocols define the steps carried out during an experiment in order to obtain experimental data. Neither models nor protocols cover the domain of the other, although they both attempt to characterize the same phenomena. As a consequence, it is often hard to attribute causes of experimental failures: whether an experiment failed because of a misconception in the model, or because of a misstep in the protocol. To confront this problem, we need an approach that integrates and accounts for all the components, theoretical and practical, of a laboratory experiment. We should ideally start from an integrated description from which we can extract both the model of a phenomenon, for possibly automated mathematical analysis, and the steps carried out to test it, for automated execution by lab equipment. This is essential to enable automated model synthesis and falsification by concurrently taking into account uncertainties in model parameters, equipment tolerances, and data collection.
We present a language to model and optimize experimental biochemical protocols that provides such an integrated description of the protocol and of the underlying molecular process, and that can be combined with experimental data. From this integrated representation, both the model of a phenomenon and the steps carried out to test it can be separately extracted. Our approach is illustrated in Figure 1.
We provide probabilistic semantics for our language in terms of a Gaussian process (GP) [6], which can be used to characterize uncertainties. Such a semantics arises from a linear noise approximation (LNA) [7,8] of the dynamics of the underlying biochemical system and of the protocol operations, and it corresponds to a Gaussian noise assumption. We show that in a Bayesian framework, the resulting semantics can be combined in a principled way with experimental data to build a posterior process that integrates our prior knowledge with new data. We demonstrate that the Gaussian nature of the resulting process allows one to efficiently and automatically optimize the parameters of an experimental protocol in order to maximize the performance of the experiment. In a series of case studies, including a Gibson assembly protocol [9] and a split-and-mix protocol, we highlight the usefulness of our approach and how it can have an impact on scientific discovery.
Related work and novelty.
Several factors contribute to the growing need for a formalization of experimental protocols in biology. First, better record keeping of experimental operations is recognized as a step towards tackling the ‘reproducibility crisis’ in biology [10]. Second, the emergence of ‘cloud labs’ creates a need for precise, machine-readable descriptions of the experimental steps to be executed. To address these needs, frameworks allowing protocols to be recorded, shared, and reproduced locally or in a remote lab have been proposed. These frameworks introduce different programming languages for experimental protocols, including BioScript [11,12], BioCoder [2], Autoprotocol [13], and Antha [14]. These languages provide expressive, high-level protocol descriptions but consider each experimental sample as a labeled ‘black box’. This makes it challenging to study a protocol together with the biochemical systems it manipulates in a common framework. In contrast, we consider a simpler set of protocol operations, but we capture the details of experimental samples, enabling us to track properties of chemical solutions (concentrations and temperatures) as the chemicals react during the execution of a protocol. This allows us to formalize and verify requirements for the correct execution of a protocol and to optimize various protocol or system parameters to satisfy these specifications.
Our language derives from our previous conference paper [5] by introducing stochasticity in chemical evolution, and by providing a Gaussian semantics, particularly for protocol operations, for the effective analysis of the combined evolution of stochastic chemical kinetics, protocols, and experimental data. The language semantics has already been implemented (but not previously formally presented) in a chemical/protocol simulator [15]. We expect that this style of semantics could be adapted to other protocol languages, along the principles we illustrate, to provide a foundation for the analysis of complex stochastic protocols in other settings.
Paper outline.
In what follows, we first introduce the syntax and semantics of our language. We then show how we can perform optimization of the resulting Gaussian process integrated with data. Finally, we illustrate the usefulness of our approach on several case studies, highlighting the potential impact of our work on scientific discovery.

2. Materials and Methods

In this section, we introduce the syntax of the language we propose for modeling experimental protocols. A formal semantics of the language, based on denotational semantics [16], is then discussed. The physical process underlying a biological experimental protocol is modeled as a chemical reaction system (CRS). As a consequence, before introducing the language for experimental protocols, we first formally introduce chemical reaction networks (CRNs) and chemical reaction systems (CRSs).

2.1. Chemical Reaction Network (CRN)

Chemical reaction networks (CRNs) are a standard language for modeling biomolecular interactions [17].
Definition 1
(CRN). A chemical reaction network $C = (A, R) \in CRN$ is a pair of a finite set $A$ of chemical species, of size $|A|$, and a finite set $R$ of chemical reactions. A reaction $\tau \in R$ is a triple $\tau = (r_\tau, p_\tau, k_\tau)$, where $r_\tau \in \mathbb{N}^{|A|}$ is the source complex, $p_\tau \in \mathbb{N}^{|A|}$ is the product complex, and $k_\tau \in \mathbb{R}_{>0}$ is the coefficient associated with the rate of the reaction. The quantities $r_\tau$ and $p_\tau$ give the stoichiometry of reactants and products. The stoichiometric vector associated with $\tau$ is defined by $\upsilon_\tau = p_\tau - r_\tau$. Given a reaction $\tau_i = ([1,0,1],[0,2,0],k_i)$, we refer to it visually as $\tau_i : \lambda_1 + \lambda_3 \xrightarrow{k_i} 2\lambda_2$.
Definition 2
(CRN State). Let $C = (A, R) \in CRN$. A state of $C$, of the form $(\mu, \Sigma, V, T) \in S = \mathbb{R}_{\geq 0}^{|A|} \times \mathbb{R}^{|A| \times |A|} \times \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0}$, consists of a concentration vector $\mu$ (moles per liter), a covariance matrix $\Sigma$, a volume $V$ (liters), and a temperature $T$ (degrees Celsius). A CRN together with a (possibly initial) state is called a chemical reaction system (CRS), written $\langle (A, R), (\mu, \Sigma, V, T) \rangle$.
A Gaussian process is a collection of random variables such that every finite linear combination of them is normally distributed. Given a state of a CRN, its time evolution from that state can be described by a Gaussian process indexed by time, whose mean and variance are given by a linear noise approximation (LNA) of the Chemical Master Equation (CME) [7,18] and which is formally defined by the following definitions of CRN flux and CRN time evolution.
Definition 3
(CRN Flux). Let $(A, R)$ be a CRN. Let $F(V,T) : \mathbb{R}_{\geq 0}^{|A|} \to \mathbb{R}^{|A|}$ be the flux of the CRN at volume $V \in \mathbb{R}_{\geq 0}$ and temperature $T \in \mathbb{R}_{\geq 0}$. For a concentration vector $\mu \in \mathbb{R}_{\geq 0}^{|A|}$ we assume $F(V,T)(\mu) = \sum_{\tau \in R} \upsilon_\tau \, \alpha_\tau(V,T,\mu)$, with stoichiometric vector $\upsilon_\tau$ and rate function $\alpha_\tau$. We call $J_F$ the Jacobian of $F(V,T)$, and $J_F^T$ its transpose. Further, define $W(V,T)(\mu) = \sum_{\tau \in R} \upsilon_\tau \upsilon_\tau^T \, \alpha_\tau(V,T,\mu)$ to be the diffusion term.
We should remark that, if one assumes mass action kinetics, then for a reaction $\tau$ it holds that $\alpha_\tau(V,T,\mu) = k'_\tau(k_\tau, V, T)\, \mu^{r_\tau}$, where $\mu^s = \prod_{i=1}^{|A|} \mu_i^{s_i}$ is the product of the reagent concentrations, and $k'_\tau$ encodes any dependency of the reaction rate $k_\tau$ on volume and temperature. Then, the Jacobian evaluated at $\mu$ is given by $(J_F)_{ik}(\mu) = \left.\frac{\partial F(V,T)(x)_i}{\partial x_k}\right|_{x=\mu} = \sum_{\tau \in R} \upsilon_\tau^i \, r_\tau^k \, \frac{\alpha_\tau(V,T,\mu)}{\mu_k}$ for $i, k \leq |A|$ [8].
In the following, we use $\mu, \Sigma$ for concentration mean vectors and covariance matrices, respectively, and $\boldsymbol{\mu}, \boldsymbol{\Sigma}$ (boldface) for the corresponding functions of time providing the evolution of means and covariances.
Definition 4
(CRN Time Evolution). Given a CRS $\langle (A,R), (\mu, \Sigma, V, T) \rangle$, its evolution at time $t < H$ (where $H \in \mathbb{R}_{\geq 0} \cup \{\infty\}$ is a time horizon) is the state $(\boldsymbol{\mu}_\mu(t), \boldsymbol{\Sigma}_{\mu,\Sigma}(t), V, T)$ obtained by integrating its flux up to time $t$, where:

$$\boldsymbol{\mu}_\mu(t) = \mu + \int_0^t F(V,T)(\boldsymbol{\mu}_\mu(s)) \, ds \qquad (1)$$

$$\boldsymbol{\Sigma}_{\mu,\Sigma}(t) = \Sigma + \int_0^t \left[ J_F(\boldsymbol{\mu}_\mu(s)) \, \boldsymbol{\Sigma}_{\mu,\Sigma}(s) + \boldsymbol{\Sigma}_{\mu,\Sigma}(s) \, J_F^T(\boldsymbol{\mu}_\mu(s)) + W(V,T)(\boldsymbol{\mu}_\mu(s)) \right] ds, \qquad (2)$$

with $\boldsymbol{\mu}_\mu(0) = \mu$ and $\boldsymbol{\Sigma}_{\mu,\Sigma}(0) = \Sigma$. If, for such an $H$, $\boldsymbol{\mu}$ or $\boldsymbol{\Sigma}$ are not unique, then we say that the evolution is ill-posed. Otherwise, $\boldsymbol{\mu}_\mu(t)$ and $\boldsymbol{\Sigma}_{\mu,\Sigma}(t)$ define a Gaussian process with that mean and covariance matrix for $t < H$.
An ill-posed problem may result from technically expressible but anomalous CRS kinetics that does not reflect physical phenomena, such as deterministic trajectories that are not uniquely determined by initial conditions, or trajectories that reach infinite concentrations in finite time.
Example 1.
Consider the CRN $C = (\{a,b,c\}, R)$, where $R = \{([1,1,0],[0,2,0],0.2),\ ([0,1,1],[0,0,2],0.2)\}$. Equivalently, $R$ can be expressed as

$$a + b \xrightarrow{0.2} b + b \qquad\qquad b + c \xrightarrow{0.2} c + c.$$

Then, we have

$$F(a,b,c) = 0.2 \begin{pmatrix} -ab \\ ab - bc \\ bc \end{pmatrix} \qquad W(a,b,c) = 0.2 \begin{pmatrix} ab & -ab & 0 \\ -ab & ab + bc & -bc \\ 0 & -bc & bc \end{pmatrix} \qquad J_F(a,b,c) = 0.2 \begin{pmatrix} -b & -a & 0 \\ b & a - c & -b \\ 0 & c & b \end{pmatrix}$$
Consider an initial condition $\mu_a = 0.1$, $\mu_b = \mu_c = 0.001$; the time evolution of the mean and variance of the GP defined in Definition 4 is reported in Figure 2.
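As an illustration of Definition 4, the mean and covariance equations for this example can be integrated numerically. The Python sketch below is our own (the time horizon and solver tolerances are arbitrary choices); it builds $F$, $J_F$, and $W$ exactly as in the matrices above and integrates Equations (1) and (2) as a coupled ODE system:

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 0.2  # rate coefficient shared by both reactions

def F(m):
    a, b, c = m
    return K * np.array([-a*b, a*b - b*c, b*c])

def JF(m):
    a, b, c = m
    return K * np.array([[-b,   -a,    0.0],
                         [ b,  a - c,  -b],
                         [0.0,   c,     b]])

def W(m):
    a, b, c = m
    return K * np.array([[ a*b,    -a*b,     0.0],
                         [-a*b, a*b + b*c,  -b*c],
                         [ 0.0,   -b*c,      b*c]])

def lna_rhs(_t, y):
    # State vector = mean (3 entries) followed by flattened covariance (9 entries).
    m, S = y[:3], y[3:].reshape(3, 3)
    J = JF(m)
    dS = J @ S + S @ J.T + W(m)   # Equation (2), differentiated form
    return np.concatenate([F(m), dS.ravel()])

mu0 = np.array([0.1, 0.001, 0.001])      # initial condition of Example 1
y0 = np.concatenate([mu0, np.zeros(9)])  # zero initial covariance
sol = solve_ivp(lna_rhs, (0.0, 1000.0), y0, rtol=1e-8, atol=1e-10)
mu_t, Sigma_t = sol.y[:3, -1], sol.y[3:, -1].reshape(3, 3)
```

Since both reactions conserve the total number of molecules, the sum of the mean concentrations stays at its initial value, and the entries of the covariance matrix sum to zero throughout the evolution.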

2.2. A Language for Experimental Biological Protocols

In what follows, we introduce the syntax of our language for experimental biological protocols.
Definition 5
(Syntax of a Protocol). Given a set of sample variables $x \in Var$ and a set of parameter variables $z \in Par$, the syntax of a protocol $P \in Prot$ for a given fixed CRN $C = (A, R)$ is

P ::= x (a sample variable)
  | (p_1 … p_{|A|}, r_V, r_T) (a sample with initial concentrations, volume, temperature)
  | let x = P_1 in P_2 (introduce a local sample variable x)
  | Mix(P_1, P_2) (mix samples)
  | let x_1, x_2 = Split(P_1, p) in P_2 (split a sample P_1 by a proportion p in (0..1))
  | Equilibrate(P, p) (equilibrate a sample for p seconds)
  | Dispose(P) (discard sample)

p ::= z (a parameter variable)
  | r (a literal non-negative real number)

Moreover, let-bound variables $x$, $x_1$, $x_2$ must occur (as free variables) exactly once in $P_2$.
A protocol P manipulates samples (which are CRN states as in Definition 2) through a set of operations, and finally yields a sample as the result. This syntax allows one to create and manipulate new samples using Mix (put together different samples), Split (separate samples) and Dispose (discard samples) operations. Note that the CRN is common to all samples, but different samples may have different initial conditions and hence different active reactions. The single-occurrence (linearity) restriction of sample variables implies that a sample cannot be duplicated or forgotten.
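For concreteness, the grammar above can be transcribed directly into an abstract syntax tree. The Python encoding below is a sketch of our own (the class and field names are our choices, not part of the paper's formal definition), with one constructor per production:

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Param:          # z, a parameter variable
    name: str

Number = Union[float, Param]  # p ::= z | r

@dataclass(frozen=True)
class Var:            # x, a sample variable
    name: str

@dataclass(frozen=True)
class Sample:         # (p_1 ... p_{|A|}, r_V, r_T)
    concentrations: Tuple[Number, ...]
    volume: Number
    temperature: Number

@dataclass(frozen=True)
class Let:            # let x = P1 in P2
    var: str
    bound: "Protocol"
    body: "Protocol"

@dataclass(frozen=True)
class Mix:            # Mix(P1, P2)
    left: "Protocol"
    right: "Protocol"

@dataclass(frozen=True)
class Split:          # let x1, x2 = Split(P1, p) in P2
    var1: str
    var2: str
    source: "Protocol"
    proportion: Number
    body: "Protocol"

@dataclass(frozen=True)
class Equilibrate:    # Equilibrate(P, p)
    sample: "Protocol"
    duration: Number

@dataclass(frozen=True)
class Dispose:        # Dispose(P)
    sample: "Protocol"

Protocol = Union[Var, Sample, Let, Mix, Split, Equilibrate, Dispose]
```

For instance, `Split("x1", "x2", Sample((0.1, 0.001, 0.001), 1.0, 20.0), 0.5, Mix(Var("x1"), Var("x2")))` encodes `let x1, x2 = Split(P1, 0.5) in Mix(x1, x2)`; the linearity restriction (each let-bound variable used exactly once) would be enforced by a separate check over the tree.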

2.3. Gaussian Semantics for Protocols

There are two possible approaches to a Gaussian semantics of protocols, that is, to a semantics characterized by keeping track of mean and variance of sample concentrations. They differ in the interpretation of the source of noise that produces the variances. We discuss the two options, and then choose one of the two based on both semantic simplicity and relevance to the well-mixed-solution assumption of chemical kinetics.
The first approach we call extrinsic noise. The protocol operations are deterministic, and the evolution of each sample is also deterministic according to the rate equation (that is, Equation (1) of Definition 4). Here, we imagine running the protocol multiple times over a distribution of initial conditions (i.e., the noise is given extrinsically to the evolution). The outcome is a final distribution that is determined uniquely by the initial distribution and by the deterministic evolution of each run. For example, the sample-split operation is interpreted here as follows. In each run, we have a sample whose concentration in general differs from its mean concentration over all runs. The two parts of a split are assigned identical concentrations, which again may differ from the mean concentration. Hence, over all runs, the two parts of the split have identical variance, and their distributions are perfectly correlated, having the same concentration in each run. The subsequent time evolution of the two parts of the split is deterministic and hence identical (for, say, a 50% split). In summary, in the extrinsic noise approach, the variance of the trajectories is due to the distribution of deterministic trajectories for the different initial conditions.
The second approach we call intrinsic noise. The protocol operations are again deterministic, but the evolution of each separate sample is driven by the Chemical Master Equation (i.e., the noise is intrinsic to the evolution). Here, we imagine running the protocol many times on the same initial conditions, and the variance of each sample is due ultimately to random thermal noise in each run. This model assumes a Markovian evolution of the underlying stochastic system, implying that no two events may happen at the same time (even in separate samples). Furthermore, as usual, we assume that chemical solutions are well mixed: the probability of a reaction (a molecular collision) is independent of the initial positions of the molecules in the solution. Moreover, the mixing of separate samples is driven by uncorrelated thermal noise. Given two initially identical but separate samples, the first chemical reaction in each sample (the first “fruitful” collision, following a usually large number of mixing collisions) is determined by uncorrelated random processes, and the first reactions cannot happen at exactly the same time in both samples. Therefore, in contrast to the extrinsic noise approach, a sample-split operation results in two samples that are completely uncorrelated, at least on the time scale of the first chemical reaction in each sample. In summary, in the intrinsic noise approach, the variance of the trajectories is due to the distribution of stochastic trajectories for identical initial conditions.
In both approaches, the variance computations for mix and for split are the same: mix uses the squared-coefficient law to determine the variance of two mixed samples, while split does not change the variance of the sample being split. The only practical difference is that in the intrinsic noise interpretation we do not need to keep track of the correlation between different samples: we take it to be always zero in view of the well-mixed-solution assumption. As a consequence, each sample needs to keep track of its internal correlations represented as a local Σ matrix of size | A | × | A | , but not of its correlations with other samples, which would require a larger global matrix of potentially unbounded size. Therefore, the intrinsic noise interpretation results in a vast simplification of the semantics (Definition 6) that does not require keeping track of correlations of concentrations across separate samples. In the rest of the paper, we consider only the intrinsic noise semantics.
Definition 6
(Gaussian Semantics of a Protocol—Intrinsic Noise). The intrinsic-noise Gaussian semantics $[\![P]\!]\rho$, with $[\![\cdot]\!] : Prot \times Env \to S$, of a protocol $P \in Prot$ for a CRN $C = (A, R)$, under environment $\rho \in Env = (Var \cup Par) \to S$, for a fixed horizon $H$ with no ill-posed time evolutions, denotes the final CRN state $(\mu, \Sigma, V, T) \in S$ (Definition 2) of the protocol and is defined inductively as follows:

$$[\![x]\!]\rho = \rho(x)$$

$$[\![(p_1 \ldots p_{|A|}, r_V, r_T)]\!]\rho = \left( ([\![p_1]\!]\rho \ldots [\![p_{|A|}]\!]\rho),\ 0^{|A| \times |A|},\ r_V,\ r_T \right)$$

$$[\![let\ x = P_1\ in\ P_2]\!]\rho = [\![P_2]\!]\rho_1 \quad \text{where } \rho_1 = \rho\{x \mapsto [\![P_1]\!]\rho\}$$

$$[\![Mix(P_1, P_2)]\!]\rho = \left( \frac{V_1 \mu_1 + V_2 \mu_2}{V_1 + V_2},\ \frac{V_1^2 \Sigma_1 + V_2^2 \Sigma_2}{(V_1 + V_2)^2},\ V_1 + V_2,\ \frac{V_1 T_1 + V_2 T_2}{V_1 + V_2} \right)$$
$$\text{where } (\mu_1, \Sigma_1, V_1, T_1) = [\![P_1]\!]\rho \text{ and } (\mu_2, \Sigma_2, V_2, T_2) = [\![P_2]\!]\rho$$

$$[\![let\ x, y = Split(P_1, p)\ in\ P_2]\!]\rho = [\![P_2]\!]\rho_1$$
$$\text{where } r = [\![p]\!]\rho,\ 0 < r < 1,\ (\mu, \Sigma, V, T) = [\![P_1]\!]\rho, \text{ and } \rho_1 = \rho\{x \mapsto (\mu, \Sigma, rV, T),\ y \mapsto (\mu, \Sigma, (1-r)V, T)\}$$

$$[\![Equilibrate(P, p)]\!]\rho = (\boldsymbol{\mu}_\mu(t), \boldsymbol{\Sigma}_{\mu,\Sigma}(t), V, T) \quad \text{where } t = [\![p]\!]\rho \text{ and } (\mu, \Sigma, V, T) = [\![P]\!]\rho$$

$$[\![Dispose(P)]\!]\rho = (0^{|A|},\ 0^{|A| \times |A|},\ 0,\ 0)$$
together with $[\![p]\!]\rho$ defined as:

$$[\![z]\!]\rho = \rho(z) \qquad [\![r]\!]\rho = r$$
The semantics of $Equilibrate$ derives from Definition 4. The substitution notation $\rho\{x \mapsto v\}$ represents a function that is identical to $\rho$ except that at $x$ it yields $v$; this may be extended to the case where $x$ is a vector of distinct variables and $v$ is a vector of equal size. The variables introduced by $let$ are used linearly (i.e., must occur exactly once in their scope), implying that each sample is consumed whenever used.
We say that the components $(\mu, \Sigma) \in \mathbb{R}_{\geq 0}^{|A|} \times \mathbb{R}^{|A| \times |A|}$ of a CRN state $(\mu, \Sigma, V, T) \in S$ form a Gaussian state, and sometimes we say that the protocol semantics produces a Gaussian state (as part of a CRN state).
The semantics of Definition 6 combines the end states of sub-protocols by linear operators, as we show in the examples below. We stress that it does not, however, provide a time-domain GP: just the protocol end state as a Gaussian state (random variable). In particular, a protocol like $let\ x = Equilibrate(P, p)\ in\ Dispose(x)$ introduces a discontinuity at the end time $p$, where all the concentrations go to zero; other discontinuities can be introduced by $Mix$. A protocol may have a finite number of such discontinuities corresponding to liquid handling operations, and otherwise proceeds by LNA-derived GPs and by linear combinations of Gaussian states at the discontinuity points.
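The sample-level rules of Definition 6 amount to a few lines of array arithmetic. The sketch below is our own (it omits $Equilibrate$, which is just the LNA integration of Definition 4) and implements the Mix, Split, and Dispose cases over Gaussian states:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class State:
    mu: np.ndarray      # mean concentrations
    Sigma: np.ndarray   # covariance of concentrations
    V: float            # volume
    T: float            # temperature

def mix(a: State, b: State) -> State:
    # Volume-weighted means and temperatures; squared-coefficient law for Sigma.
    V = a.V + b.V
    return State((a.V * a.mu + b.V * b.mu) / V,
                 (a.V**2 * a.Sigma + b.V**2 * b.Sigma) / V**2,
                 V,
                 (a.V * a.T + b.V * b.T) / V)

def split(a: State, r: float):
    assert 0 < r < 1
    # Intrinsic-noise semantics: both parts keep mu and Sigma, and the
    # cross-correlation between them is deliberately dropped.
    return (State(a.mu, a.Sigma, r * a.V, a.T),
            State(a.mu, a.Sigma, (1 - r) * a.V, a.T))

def dispose(a: State) -> State:
    n = len(a.mu)
    return State(np.zeros(n), np.zeros((n, n)), 0.0, 0.0)
```

Splitting a state with mean $[k]$ and variance $[[k]]$ in half and remixing the parts yields variance $[[k/2]]$, the decorrelation effect discussed in Example 5 below.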
It is instructive to interpret our protocol operators as linear operators on Gaussian states: this justifies how covariance matrices are handled in Definition 6, and it easily leads to a generalized class of possible protocol operators. We discuss linear protocol operators in the examples below.
Example 2.
The $Mix$ operator combines the concentrations, covariances, and temperatures of the two input samples proportionally to their volumes. The resulting covariance matrix, in particular, can be justified as follows. Consider two (input) states $A = (\mu_A, \Sigma_A, V_A, T_A)$, $B = (\mu_B, \Sigma_B, V_B, T_B)$ with $|\mu_A| = |\mu_B| = k$ and $|\Sigma_A| = |\Sigma_B| = k \times k$. Let $0$ and $1$ denote null and identity vectors and matrices of size $k$ and $k \times k$. Consider a third null state $C = (0, 0, V_C, T_C)$, with $V_C = V_A + V_B$ and $T_C = \frac{V_A T_A + V_B T_B}{V_C}$. The joint Gaussian distribution of $A, B, C$ is given by

$$\mu = \begin{pmatrix} \mu_A \\ \mu_B \\ 0 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_A & 0 & 0 \\ 0 & \Sigma_B & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

Define the symmetric hollow linear operator

$$Mix = \begin{pmatrix} 0 & 0 & \frac{V_A}{V_C} 1 \\ 0 & 0 & \frac{V_B}{V_C} 1 \\ \frac{V_A}{V_C} 1 & \frac{V_B}{V_C} 1 & 0 \end{pmatrix}.$$

The zeros on the diagonal (hollowness) imply that the input states are zeroed after the operation, and hence discarded. Applying this operator to the joint distribution, by the linear combination of normal random variables we obtain a new Gaussian distribution with:

$$\mu' = Mix \cdot \mu = \begin{pmatrix} 0 \\ 0 \\ \frac{V_A \mu_A + V_B \mu_B}{V_C} \end{pmatrix} \qquad \Sigma' = Mix \cdot \Sigma \cdot Mix^T = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \frac{V_A^2 \Sigma_A + V_B^2 \Sigma_B}{V_C^2} \end{pmatrix}$$

Hence, all that is left of $\mu', \Sigma'$ is the output state $C = (\mu_C, \Sigma_C, V_C, T_C)$, where $\mu_C = \frac{V_A \mu_A + V_B \mu_B}{V_C}$ and $\Sigma_C = \frac{V_A^2 \Sigma_A + V_B^2 \Sigma_B}{V_C^2}$, as in Definition 6.
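The block-matrix derivation above is easy to check numerically. In this sketch (our own, with arbitrary made-up values for the two input states), the hollow operator reproduces the Mix case of Definition 6:

```python
import numpy as np

k = 2                        # number of species
VA, VB = 1.0, 3.0            # input volumes (arbitrary)
VC = VA + VB
muA, muB = np.array([0.4, 0.1]), np.array([0.2, 0.6])
SigA = np.array([[0.02, 0.01], [0.01, 0.03]])
SigB = np.array([[0.05, 0.00], [0.00, 0.01]])

I, Z = np.eye(k), np.zeros((k, k))
# Symmetric hollow Mix operator acting on the joint state (A, B, C).
Mix = np.block([[Z, Z, (VA / VC) * I],
                [Z, Z, (VB / VC) * I],
                [(VA / VC) * I, (VB / VC) * I, Z]])

mu = np.concatenate([muA, muB, np.zeros(k)])
Sigma = np.block([[SigA, Z, Z], [Z, SigB, Z], [Z, Z, Z]])

mu2 = Mix @ mu
Sigma2 = Mix @ Sigma @ Mix.T
muC, SigC = mu2[2 * k:], Sigma2[2 * k:, 2 * k:]   # project on output state C
```

The projection of the transformed joint distribution on $C$ agrees with the volume-weighted mean and the squared-coefficient covariance, and the input blocks are zeroed.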
Example 3.
The $Split$ operator splits a sample in two parts, preserving the concentration and consequently the covariance of the input sample. The resulting covariance matrix, in particular, can be justified as follows. Consider one (input) state $A = (\mu_A, \Sigma_A, V_A, T_A)$ and two null states $B = (0, 0, pV_A, T_A)$, $C = (0, 0, (1-p)V_A, T_A)$ with $0 < p < 1$. As above, consider the joint distribution of these three states,

$$\mu = \begin{pmatrix} \mu_A \\ 0 \\ 0 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_A & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

and define the symmetric hollow linear operator

$$Split = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}.$$

The $1$s imply that concentrations are not affected. The whole submatrix of output states being zero implies that any initial values of the output states are ignored. Applying this operator to the joint distribution, we obtain:

$$\mu' = Split \cdot \mu = \begin{pmatrix} 0 \\ \mu_A \\ \mu_A \end{pmatrix} \qquad \Sigma' = Split \cdot \Sigma \cdot Split^T = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \Sigma_A & \Sigma_A \\ 0 & \Sigma_A & \Sigma_A \end{pmatrix}$$

By projecting $\mu', \Sigma'$ on $B$ and $C$, we are left with the two output states $B = (\mu_A, \Sigma_A, pV_A, T_A)$, $C = (\mu_A, \Sigma_A, (1-p)V_A, T_A)$, as in Definition 6. The correlation between $B$ and $C$ that is present in $\Sigma'$ is not reflected in these outcomes: it is discarded on the basis of the well-mixed-solution assumption.
Example 4.
In general, any symmetric hollow linear operator, where the submatrix of the intended output states is also zero, describes a possible protocol operation over samples. As a further example, consider a different splitting operator, $let\ x, y = Osmo(P_1, p)\ in\ P_2$, that intuitively works as follows. A membrane permeable only to water is placed in the middle of an input sample $A = (\mu_A, \Sigma_A, V_A, T_A)$, initially producing two samples of volume $V_A/2$, but with unchanged concentrations. Then, an osmotic pressure is introduced (by some mechanism) that causes a proportion $0 < p < 1$ of the water (and volume) to move from one side to the other, but without transferring any other molecules. As the volumes change, the concentrations increase in one part and decrease in the other. Consider, as in $Split$, two initial null states $B$ and $C$, and the joint distribution $\mu, \Sigma$ of those three states. Define a symmetric linear operator

$$Osmo = \begin{pmatrix} 0 & (1-p) & p \\ (1-p) & 0 & 0 \\ p & 0 & 0 \end{pmatrix}.$$

Applying this operator to the joint distribution we obtain:

$$\mu' = Osmo \cdot \mu = \begin{pmatrix} 0 \\ (1-p)\mu_A \\ p\mu_A \end{pmatrix} \qquad \Sigma' = Osmo \cdot \Sigma \cdot Osmo^T = \begin{pmatrix} 0 & 0 & 0 \\ 0 & (1-p)^2 \Sigma_A & p(1-p)\Sigma_A \\ 0 & p(1-p)\Sigma_A & p^2 \Sigma_A \end{pmatrix}$$

producing the two output states $B = ((1-p)\mu_A, (1-p)^2\Sigma_A, pV_A, T_A)$ and $C = (p\mu_A, p^2\Sigma_A, (1-p)V_A, T_A)$ describing the situation after the osmotic pressure is applied. Again, we discard the correlation between $B$ and $C$ that is present in $\Sigma'$ on the basis of the well-mixed-solution assumption.
Example 5.
We check the validity of some simple equivalences between protocols by applying the Gaussian semantics to both sides of the equivalence. Consider the CRN state $Poisson(k, V, T) = ([k], [[k]], V, T)$ over a single species, having mean $[k]$ (a singleton vector) and variance $[[k]]$ (a $1 \times 1$ matrix). This CRN state can be produced by a specific CRN [19], but here we add it as a new primitive protocol and extend our semantics to include $[\![Poisson(k, V, T)]\!]\rho = ([k], [[k]], V, T)$. Note first that mixing two uncorrelated Poisson states does not yield a Poisson state (the variance changes to $[[k/2]]$). Then, the following equation holds:

$$[\![Mix(Poisson(k, V, T), Poisson(k, V, T))]\!]\rho = [\![let\ x, y = Split(Poisson(k, V, T), 0.5)\ in\ Mix(x, y)]\!]\rho = ([k], [[k/2]], V, T)$$

That is, in the intrinsic-noise semantics, mixing two correlated (by $Split$) Poisson states is the same as mixing two uncorrelated Poisson states, because of the implicit decorrelation that happens at each $Split$. An alternative semantics that took into account the correlation due to $Split$ could instead satisfy the appealing equation $[\![let\ x, y = Split(P, 0.5)\ in\ Mix(x, y)]\!]\rho = [\![P]\!]\rho$ for any $P$ (i.e., splitting and remixing makes no change), but this does not hold in our semantics for $P = Poisson(k, V, T)$, where the left-hand side yields $([k], [[k/2]], V, T)$ (i.e., splitting, decorrelating, and remixing makes a change).
The $Mix$ operator can be understood as diluting the two input samples into the resulting larger output volume. We can similarly consider a $Dilute$ operator that operates on a single sample, and we can even generalize it to concentrate a sample into a smaller volume. In either case, let $W$ be a new forced volume, and $U$ a new forced temperature for the sample. Let us define $[\![Dilute(P, W, U)]\!]\rho = (\frac{V}{W}\mu, \frac{V^2}{W^2}\Sigma, W, U)$ where $(\mu, \Sigma, V, T) = [\![P]\!]\rho$. Then, the following equation holds, where $(\mu_i, \Sigma_i, V_i, T_i) = [\![P_i]\!]\rho$:

$$[\![Mix(Dilute(P_1, W_1, U_1), Dilute(P_2, W_2, U_2))]\!]\rho = [\![Dilute(Mix(P_1, P_2),\ W_1 + W_2,\ \tfrac{W_1 U_1 + W_2 U_2}{W_1 + W_2})]\!]\rho$$
$$= \left( \frac{V_1 \mu_1 + V_2 \mu_2}{W_1 + W_2},\ \frac{V_1^2 \Sigma_1 + V_2^2 \Sigma_2}{(W_1 + W_2)^2},\ W_1 + W_2,\ \frac{W_1 U_1 + W_2 U_2}{W_1 + W_2} \right)$$

Note that, by diluting to a fixed new volume $W$, we obtain a protocol equivalence that does not mention, in the equivalence itself, the volumes $V_i$ resulting from the sub-protocols $P_i$ (which of course occur in the semantics). If instead we diluted by a factor (e.g., factor $= 2$ to multiply the volume by $2$), then it would be necessary to refer to the $V_i$ in the equivalence.
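The Mix/Dilute equivalence can likewise be spot-checked with concrete numbers. The values below are our own illustrative choices; both sides of the equation are evaluated and compared componentwise:

```python
import numpy as np

def mix(m1, S1, V1, T1, m2, S2, V2, T2):
    # Mix case of Definition 6.
    V = V1 + V2
    return ((V1 * m1 + V2 * m2) / V,
            (V1**2 * S1 + V2**2 * S2) / V**2,
            V,
            (V1 * T1 + V2 * T2) / V)

def dilute(m, S, V, T, W, U):
    # Dilute(P, W, U): force volume W and temperature U.
    return ((V / W) * m, (V**2 / W**2) * S, W, U)

# Arbitrary single-species states and forced volumes/temperatures.
m1, S1, V1, T1 = np.array([0.3]), np.array([[0.02]]), 1.0, 25.0
m2, S2, V2, T2 = np.array([0.7]), np.array([[0.05]]), 2.0, 37.0
W1, U1, W2, U2 = 4.0, 20.0, 6.0, 30.0

# Mix of the two dilutions ...
lhs = mix(*dilute(m1, S1, V1, T1, W1, U1), *dilute(m2, S2, V2, T2, W2, U2))
# ... equals dilution of the mix to volume W1 + W2.
rhs = dilute(*mix(m1, S1, V1, T1, m2, S2, V2, T2),
             W1 + W2, (W1 * U1 + W2 * U2) / (W1 + W2))
```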
To end this section, we provide an extension of the syntax and semantics for the optimization of protocols. An optimize directive identifies a vector of variables within a protocol to be varied in order to minimize an arbitrary cost function of those variables and of a collection of initial values. The semantics can be given concisely but implicitly in terms of an $argmin$ function; a specific realization of this construct is the subject of the next section.
Definition 7
(Gaussian Semantics of a Protocol—Optimization). Let $z \in Par^N$ be a vector of (optimization) variables, and $k \in Par^M$ be a vector of (initialization) variables with (initial values) $r \in \mathbb{R}^M$, where all the variables are distinct and $Par$ is disjoint from $Var$. Let $P \in Prot$ be a protocol for a CRN $C = (A, R)$, with free variables $z$ and $k$ and with any other free variables covered by an environment $\rho \in Env = (Var \cup Par) \to S$. Given a cost function $C : \mathbb{R}^{|A|} \times \mathbb{R}^N \to \mathbb{R}$ and a dataset $D \subseteq_{fin} \mathbb{R}^M \times \mathbb{R}^N \times \mathbb{R}^{|A|}$, the instruction “optimize k=r, z in P with C given D” provides a vector in $\mathbb{R}^N$ of optimized values for the $z$ variables. (For simplicity, we omit extending the syntax with new representations for $C$ and $D$, and we let them represent themselves.) The optimization is based on a stochastic process $X : \mathbb{R}^{M+N} \to \mathbb{R}_{\geq 0}^{|A|} \times \mathbb{R}^{|A| \times |A|}$ derived from $P$ via Definition 6.

$$[\![optimize\ k = r,\ z\ in\ P\ with\ C\ given\ D]\!]\rho = \operatorname*{argmin}_{u \in \mathbb{R}^N} \mathbb{E}_{y \sim X(r, u \mid D)}[C(y, u)]$$
$$\text{where } X(r, u) = [\![P]\!]\rho\{k \mapsto r,\ z \mapsto u\} \text{ for all } r \in \mathbb{R}^M \text{ and } u \in \mathbb{R}^N,$$

where $\mathbb{E}_{y \sim X(r, u \mid D)}[\cdot]$ stands for the expectation with respect to the conditional distribution of $X(r, u)$ given $D$.
From Definition 6, it follows that, for any $(r, u) \in \mathbb{R}^{M+N}$, $X(r, u)$ is a Gaussian random variable. In what follows, for the purpose of optimization, we further assume that $X$ is a Gaussian process defined on the input space $\mathbb{R}^{M+N}$, i.e., that for any possible $(r_1, u_1)$ and $(r_2, u_2)$, $X(r_1, u_1)$ and $X(r_2, u_2)$ are jointly Gaussian. Such an assumption is standard [6] and natural in our setting (e.g., it is guaranteed to hold for different equilibration times), and it guarantees that we can optimize protocols by employing results from Gaussian process optimization, as described in detail in Section 2.4.

2.4. Optimization of Protocols through Gaussian Process Regression

In Definition 7 we introduced the syntax and semantics for the optimization of a protocol from data. In this section, we show how the optimization variables can be selected in practice. In particular, we leverage existing results for Gaussian processes (GPs) [6]. We start by considering the dataset $D = \{(r_i, u_i, y_i),\ i \in \{1, \ldots, n_D\}\} \subseteq_{fin} \mathbb{R}^M \times \mathbb{R}^N \times \mathbb{R}^{|A|}$ comprising $n_D$ executions of the protocol for possibly different initial conditions, i.e., each entry gives the concentrations of the species after the execution of the protocol ($y_i$), where the protocol has been run with $k = r_i$, $z = u_i$. Then, we can predict the output of the protocol starting from $\bar{x} = (r, u)$ by computing the conditional distribution of $X$ (the Gaussian process introduced in Definition 7) given the data in $D$. In particular, under the assumption that $X$ is Gaussian, it is well known that the resulting posterior model, $X(\bar{x} \mid D)$, is still Gaussian with mean and covariance functions given by
$$\mu_p(\bar{x}) = \mu(\bar{x}) + \Sigma_{\bar{x}, D} (\Sigma_{D,D} + \sigma^2 I)^{-1} (y_D - \mu_D) \qquad (3)$$

$$\Sigma_p(\bar{x}, \bar{x}) = \Sigma_{\bar{x}, \bar{x}} - \Sigma_{\bar{x}, D} (\Sigma_{D,D} + \sigma^2 I)^{-1} \Sigma_{\bar{x}, D}^T, \qquad (4)$$
where $\sigma^2 I$ is a diagonal covariance modeling i.i.d. Gaussian observation noise with variance $\sigma^2$; $\mu(\bar{x})$ and $\Sigma_{\bar{x}, \bar{x}}$ are the prior mean and covariance functions; $\Sigma_{\bar{x}, D}$ is the covariance between $\bar{x}$ and all the points in $D$; and $y_D, \mu_D$ are vectors of dimension $|A| \cdot n_D$ containing, for all $(r_i, u_i, y_i) \in D$, respectively, $y_i$ and $\mu((r_i, u_i))$ [6]. Note that, in order to encode our prior information about the protocol, we take $\mu(\bar{x})$ to be the mean of the GP as defined in Definition 6, while for the variance we can take $\Sigma_{\bar{x}, \bar{x}}$ to be any standard kernel [6]; in the experiments, as is standard in the literature, we consider the widely used squared exponential kernel, which is expressive enough to approximate any continuous function arbitrarily well [20]. However, we remark that, for any parametric kernel, such as the squared exponential, we can still select the hyper-parameters that best fit the variance given by Definition 6 as well as the data [6]. We should also stress that the resulting $X(\bar{x} \mid D)$ is a Gaussian process that merges in a principled way (i.e., via Bayes’ rule) our prior knowledge of the model (given by Definition 6) with the new information given in $D$. Furthermore, it is worth stressing again that $X(\bar{x} \mid D)$ is a GP with input space given by $\mathbb{R}^{M+N}$, which is substantially different from the GP defined by the LNA (Definition 4), where the GP was defined over time.
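A minimal sketch of the posterior computation in Equations (3) and (4) follows. For brevity it uses a zero prior mean and fixed kernel hyper-parameters, whereas the paper takes the LNA mean of Definition 6 as the prior mean; all names here are our own:

```python
import numpy as np

def sq_exp_kernel(X1, X2, ell=1.0, s2=1.0):
    # Squared exponential kernel k(x, x') = s2 * exp(-||x - x'||^2 / (2 ell^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return s2 * np.exp(-d2 / (2 * ell**2))

def gp_posterior(Xq, XD, yD, sigma2=1e-4, prior_mean=None):
    # Equations (3)-(4): condition the prior GP on the dataset D = (XD, yD).
    mq = np.zeros(len(Xq)) if prior_mean is None else prior_mean(Xq)
    mD = np.zeros(len(XD)) if prior_mean is None else prior_mean(XD)
    K_DD = sq_exp_kernel(XD, XD) + sigma2 * np.eye(len(XD))
    K_qD = sq_exp_kernel(Xq, XD)
    K_qq = sq_exp_kernel(Xq, Xq)
    mu_p = mq + K_qD @ np.linalg.solve(K_DD, yD - mD)          # Equation (3)
    Sigma_p = K_qq - K_qD @ np.linalg.solve(K_DD, K_qD.T)     # Equation (4)
    return mu_p, Sigma_p
```

Conditioning on noisy observations of a one-dimensional function pulls the posterior mean toward the data and shrinks the posterior variance at the observed inputs.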
For a given set of initialization parameters $r$, our goal is to synthesize the optimization variables that optimize the protocol with respect to a given cost specification $C : \mathbb{R}^{|\Lambda|} \times \mathbb{R}^N \to \mathbb{R}$, i.e., we want to find $u^*$ such that
$$u^* = \operatorname*{arg\,min}_{u \in \mathbb{R}^N} \; \mathbb{E}_{y \sim \mathcal{N}(\mu_p(\bar{x}),\, \Sigma_p(\bar{x},\bar{x}))}\left[\, C(y, u) \,\right], \qquad (5)$$
where $\mathbb{E}_{y \sim \mathcal{N}(\mu_p(\bar{x}),\, \Sigma_p(\bar{x},\bar{x}))}[\cdot]$ is the expectation with respect to the GP given by Equations (3) and (4). Note that $r$ is a known vector of reals (see Definition 7); hence, we only need to optimize over the free parameters $u$. In general, computing $u^*$ in Equation (5) requires solving a non-convex optimization problem, which cannot be solved exactly [21]. In this paper, we approximate Equation (5) via gradient-based methods. These methods, such as gradient descent, require the gradient of the expectation of $C$ with respect to $u$:
$$\frac{\partial}{\partial u}\, \mathbb{E}_{y \sim \mathcal{N}(\mu_p(\bar{x}),\, \Sigma_p(\bar{x},\bar{x}))}\left[\, C(y, u) \,\right]. \qquad (6)$$
Unfortunately, direct computation of the gradient in (6) is infeasible in general, as the probability distribution over which the expectation is taken itself depends on $u$ (note that $\bar{x} = (r, u)$). However, for the GP case, as shown in Lemma 1, the gradient of interest can be computed directly by reparametrizing the Gaussian distribution induced by Equations (3) and (4).
Lemma 1.
For $\bar{x} = (r, u)$, let $D(\bar{x})$ be a matrix such that $D(\bar{x}) D^T(\bar{x}) = \Sigma_p(\bar{x},\bar{x})$. Then, it holds that
$$\frac{\partial}{\partial u}\, \mathbb{E}_{y \sim \mathcal{N}(\mu_p(\bar{x}),\, \Sigma_p(\bar{x},\bar{x}))}\left[\, C(y, u) \,\right] = \mathbb{E}_{z \sim \mathcal{N}(0, I)}\!\left[ \frac{\partial}{\partial u}\, C(\mu_p(\bar{x}) + D(\bar{x})z,\, u) \right]. \qquad (7)$$
In particular, $D(\bar{x})$ as defined above is guaranteed to exist under the assumption that $\Sigma_p$ is positive definite, and can be computed via Cholesky decomposition. Furthermore, note that if the output is one-dimensional, then $D(\bar{x})$ is simply the standard deviation of the posterior GP at $\bar{x}$.
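The reparametrization of Lemma 1 yields a simple Monte Carlo gradient estimator. The sketch below illustrates the one-dimensional case, where $D(\bar{x})$ is the posterior standard deviation; the function names, and the decomposition of the cost gradient into its $y$- and $u$-components, are illustrative conventions rather than part of the formal semantics:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam_grad(mu_p, d, dmu_du, dd_du, cost_grad_y, cost_grad_u, u, n=100_000):
    """Monte Carlo estimate of (d/du) E_{y ~ N(mu_p(u), d(u)^2)}[C(y, u)]
    via the reparametrization y = mu_p(u) + d(u) * z with z ~ N(0, 1)
    (the one-dimensional case of Lemma 1, where d(u) is the posterior
    standard deviation). All arguments are user-supplied callables."""
    z = rng.standard_normal(n)
    y = mu_p(u) + d(u) * z
    # Chain rule through the reparametrized sample:
    # dC/du = C_y(y, u) * (mu_p'(u) + z * d'(u)) + C_u(y, u)
    return np.mean(cost_grad_y(y, u) * (dmu_du(u) + z * dd_du(u))
                   + cost_grad_u(y, u))
```

For instance, with $\mu_p(u) = u$, $d(u) = 0.5$, and $C(y, u) = (y - 1)^2$, the estimator recovers the analytic gradient $2(u - 1)$.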
Example 6.
Given the CRN C introduced in Example 1, consider the protocol
$$P = Equilibrate((0.1\,\mathrm{mM},\, 0.001\,\mathrm{mM},\, 0.001\,\mathrm{mM},\, 1\,\mu\mathrm{L},\, 20\,^{\circ}\mathrm{C}),\, T),$$
which evolves the species from initial concentrations of 0.1 mM for species $a$ and 0.001 mM for both species $b$ and $c$, for $T$ seconds. Hence, for this example, we have $r = [0.1, 0.001, 0.001, 1, 20]$, comprising initial conditions, volume, and temperature, and $u = [T]$; that is, the only optimization variable is the equilibration time. We consider a dataset $D$ composed of the following six data points (for simplicity we report the output values of only species $b$ and omit units of measurement, which are mM for concentration, μL for volume, degrees Celsius for temperature, and seconds for equilibration time):
$$d_1 = ((0.1001, 0.0015, 0.001, 1, 20),\ 0,\ 0.001)$$
$$d_2 = ((0.099, 0.001, 0.001, 1, 20),\ 40,\ 0.001)$$
$$d_3 = ((0.1, 0.001, 0.001, 1, 20),\ 150,\ 0.09)$$
$$d_4 = ((0.1, 0.001, 0.002, 1, 20),\ 250,\ 0.08)$$
$$d_5 = ((0.09, 0.001, 0.0015, 1, 20),\ 400,\ 0.003)$$
$$d_6 = ((0.1, 0.001, 0.002, 1, 20),\ 500,\ 0.001),$$
where, for example, in $d_1$ the initial concentrations of $a, b, c$ are, respectively, 0.1001, 0.0015, 0.001; the volume and temperature at which the experiment is performed are 1 and 20; and the observed value of $b$ at the end of the protocol for $T = 0$ is 0.001. We assume additive Gaussian observation noise (noise in the collection of the data) with standard deviation $\sigma = 0.01$. Note that the initial conditions of the protocol in the various data points in general differ from those of $P$, which motivates having Equations (3) and (4) depend on both $r$ and $u$.
We consider the prior mean given by Equation (1) and independent squared exponential kernels for each species, with hyper-parameters optimized through maximum likelihood estimation (MLE) [6]. The resulting posterior GP is reported in Figure 3, where it is compared with the prior mean and the true underlying dynamics for species $b$ (assumed, for this example, to be deterministic). We note that, with just six data points in $D$, the posterior GP correctly recovers the true behavior of $b$ with relatively low uncertainty.
We would like to maximize the concentration of b after the execution of the protocol. This can be easily encoded with the following cost function:
$$C(a, b, c, u) = -b^2.$$
For $r = [0.1, 0.001, 0.001, 1, 20]$, the value obtained is $T \approx 230$, which is simply the value that maximizes the posterior mean. In Section 3.1, we show how, for more complex specifications, the optimization becomes more challenging because of the need to also account for the variance, in order to balance exploration and exploitation.
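To illustrate how such a maximizer arises, the following sketch fits a one-dimensional GP over the equilibration time only, ignoring the small perturbations of the initial conditions across data points, and uses a zero prior mean and hand-picked kernel hyper-parameters instead of the LNA prior and MLE fit used in the text; it then grid-searches the posterior-mean maximizer:

```python
import numpy as np

# The (T, b) pairs from dataset D, treating the equilibration time T as the
# only input (a simplification of the full (r, u) input space).
T_data = np.array([0.0, 40.0, 150.0, 250.0, 400.0, 500.0])
b_data = np.array([0.001, 0.001, 0.09, 0.08, 0.003, 0.001])
sigma = 0.01            # observation-noise standard deviation
ell, s = 80.0, 0.1      # hand-picked kernel hyper-parameters (not MLE-fitted)

def k(a, b):
    """Squared exponential kernel over equilibration times."""
    return s**2 * np.exp(-0.5 * (a - b)**2 / ell**2)

K = k(T_data[:, None], T_data[None, :]) + sigma**2 * np.eye(len(T_data))
alpha = np.linalg.solve(K, b_data)      # zero prior mean, for simplicity

def posterior_mean(T):
    return k(T, T_data) @ alpha

grid = np.linspace(0.0, 500.0, 501)
T_star = grid[np.argmax([posterior_mean(t) for t in grid])]
# T_star lands near the two largest observations (150 and 250), in
# qualitative agreement with the value T ≈ 230 reported above.
```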

3. Results

We consider two case studies that illustrate the usefulness of our framework. The first example shows how our framework can be employed to optimize a Gibson assembly protocol [9] from experimental data. The second example illustrates the flexibility of our language and semantics on a combination of protocol operations; we also use it to show how our framework supports analysis of the protocol parameters while accounting for the uncertainty in the model dynamics, the protocol parameters, and the data collection.

3.1. Gibson Assembly Protocol

We start by considering a simplified version of Gibson assembly, a protocol widely used for joining multiple DNA fragments [9]. We consider the following model of Gibson assembly, described by Michaelis–Menten kinetics, in which two DNA double strands, AB and BA (with shared endings A and B, but different middle parts), are enzymatically joined in two symmetric ways according to their common endings, and the resulting strands ABA and BAB are circularized into the same final structure O. The resulting dynamical model is given by the following ODEs (assuming AB is overabundant with respect to BA):
$$\frac{dAB(t)}{dt} = 0 \qquad (8)$$
$$\frac{dBA(t)}{dt} = -\frac{k_{cat1}\, AB(t)\, BA(t)}{BA(t) + K_{m1}} - \frac{k_{cat2}\, AB(t)\, BA(t)}{BA(t) + K_{m2}} \qquad (9)$$
$$\frac{dABA(t)}{dt} = \frac{k_{cat1}\, AB(t)\, BA(t)}{BA(t) + K_{m1}} - \frac{k_{cat1}\, ABA(t)}{ABA(t) + K_{m1}} \qquad (10)$$
$$\frac{dBAB(t)}{dt} = \frac{k_{cat2}\, AB(t)\, BA(t)}{BA(t) + K_{m2}} - \frac{k_{cat2}\, BAB(t)}{BAB(t) + K_{m2}} \qquad (11)$$
$$\frac{dO(t)}{dt} = \frac{k_{cat1}\, ABA(t)}{ABA(t) + K_{m1}} + \frac{k_{cat2}\, BAB(t)}{BAB(t) + K_{m2}}, \qquad (12)$$
where $k_{cat1}, k_{cat2}, K_{m1}, K_{m2}$ are given parameters. Gibson assembly is a single-step isothermal protocol: once all the ingredients are combined, it can be written as:
$$P = Equilibrate(([1\,\mathrm{mM},\, x_{BA}\,\mathrm{mM},\, 0\,\mathrm{mM},\, 0\,\mathrm{mM},\, 0\,\mathrm{mM}],\, 1\,\mu\mathrm{L},\, 50\,^{\circ}\mathrm{C}),\, T),$$
where $[1, x_{BA}, 0, 0, 0]$ is the vector of initial concentrations of the various species, with AB initialized at 1 mM, BA at $x_{BA}$ mM, and all the other species at 0 mM, for a sample of volume 1 μL at a temperature of 50 °C, equilibrated for $T$ seconds. For this example, the set of optimization variables is $u = [x_{BA}, T]$, and the goal is to find the initial condition of BA ($x_{BA}$) and the equilibration time ($T$) such that, at the end of the protocol, species O has a concentration as close as possible to a desired value $O_{des}$, while also keeping the equilibration time and the initial concentration of BA close to some reference values. This can be formalized with the following cost function:
$$C(x_{BA}, T) = \beta\,(O(T) - O_{des})^2 + \lambda\left(\frac{T - T_{ref}}{T_{norm}}\right)^2 + (1 - \lambda)\,(x_{BA} - I_{ref})^2,$$
where $O(T)$ is the concentration of strand O after the execution of the protocol. $T_{ref}$, $T_{norm}$, $I_{ref}$, and $\beta$ are parameters describing, respectively, the reference equilibration time, a normalization term for the equilibration time, the reference initial concentration of BA, and a weight such that, if $\beta > 1$, more importance is given to $O(T)$ being close to $O_{des}$ than to the equilibration time and initial conditions being close to their reference values. Similarly, $\lambda \in [0, 1]$ is a weight that balances the importance of the equilibration time and of the initial concentration of BA being close to their reference values. In our experiments, we fix $O_{des} = 0.6$, $T_{ref} = 700$, $T_{norm} = 1000$, $I_{ref} = 1$, and $\beta = 20$.
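With the fixed parameter values above, the cost function can be written directly; `gibson_cost` is an illustrative name, and the realization `O_T` of the protocol output would be drawn from the posterior GP:

```python
def gibson_cost(O_T, x_BA, T,
                O_des=0.6, T_ref=700.0, T_norm=1000.0, I_ref=1.0,
                beta=20.0, lam=0.5):
    """Cost of the specification above for one realization O_T of the
    protocol output O(T); lam is a free weight in [0, 1]."""
    return (beta * (O_T - O_des)**2
            + lam * ((T - T_ref) / T_norm)**2
            + (1.0 - lam) * (x_BA - I_ref)**2)
```

The cost vanishes exactly when the output hits $O_{des}$ and both optimization variables sit at their reference values.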
The dynamical model considered in Equations (8)–(12) is obtained by combining multiple enzymatic reactions into single Michaelis–Menten steps, and the values of the reaction rates may not be accurate. Hence, to obtain a more accurate model, we combine Equations (8)–(12) with the experimental data from [9], under the assumption that observations are corrupted by Gaussian noise with a relatively high standard deviation of 0.1.
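To see how the mechanistic model produces the prior on which the data is then conditioned, Equations (8)–(12) can be integrated numerically. The sketch below uses a crude forward-Euler integrator and placeholder rate values (the fitted parameter values are not reproduced here), and exploits the structural property that $AB$ stays constant while $BA + ABA + BAB + O$ is conserved:

```python
import numpy as np

def gibson_rhs(x, kcat1, kcat2, Km1, Km2):
    """Right-hand side of Equations (8)-(12); x = [AB, BA, ABA, BAB, O]."""
    AB, BA, ABA, BAB, O = x
    j1 = kcat1 * AB * BA / (BA + Km1)   # joining flux producing ABA
    j2 = kcat2 * AB * BA / (BA + Km2)   # joining flux producing BAB
    c1 = kcat1 * ABA / (ABA + Km1)      # circularization of ABA into O
    c2 = kcat2 * BAB / (BAB + Km2)      # circularization of BAB into O
    return np.array([0.0, -j1 - j2, j1 - c1, j2 - c2, c1 + c2])

def simulate(x0, T, params, dt=0.1):
    """Crude forward-Euler integration over [0, T], for illustration only."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * gibson_rhs(x, *params)
    return x

# Placeholder rate values; the paper instead fits the model to the data of [9].
x_end = simulate([1.0, 1.0, 0.0, 0.0, 0.0], T=1000.0,
                 params=(0.01, 0.01, 0.5, 0.5))
```

A stiff or adaptive solver would be preferable in practice; Euler with a small step suffices to exercise the model's conservation structure.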
In Figure 4, we plot the synthesized (approximately) optimal values of $T$ and $x_{BA}$ for various values of $\lambda$, obtained by gradient descent with the gradient computed as shown in Lemma 1. Note that even for $\lambda = 0$ or $\lambda = 1$ the cost cannot be made identically zero. This is due to the uncertainty in the model, which leads to a non-zero variance everywhere. Note also that for $\lambda = 1$ the synthesized time $T$ is smaller than 700 (the reference equilibration time). This can be explained by looking at Figure 4 (Right), where we plot the variance of $O(T)$ as a function of $T$ and $x_{BA}$. In fact, as data are available (reported in Appendix B) only for $x_{BA} \approx 1$, the variance increases as $x_{BA}$ moves away from 1. As a consequence, the algorithm automatically balances this tradeoff by picking an equilibration time close to the target value while keeping $x_{BA}$ close to 1, in order to keep the variance under control.

3.2. Split and Mix Protocol

Consider the CRN C = ( { a , b , c } , R ) , where R is given by the reactions:
$$a + b \xrightarrow{1} b + b \qquad a + c \xrightarrow{1} a + a \qquad b + c \xrightarrow{1} c + c.$$
Consider also the associated protocol $P_{split\&mix}$ shown below in the syntax of Definition 5 (with an underscore standing for an unused variable, with initial concentration vectors ordered as $(a_0, b_0, c_0)$, and with initial states matching the pattern $((a_0, b_0, c_0), volume, temperature)$):
let A = ((10 mM, 0 mM, 1 mM), 1 μL, 20 °C) in
let A1 = Equilibrate(A, 100 s) in
let C, D = Split(A1, 0.5) in
let _ = Dispose(C) in
let B = ((0 mM, 10 mM, 1 mM), 1 μL, 20 °C) in
let B1 = Equilibrate(B, 100 s) in
let E = Mix(D, B1) in
Equilibrate(E, 1000 s)
This protocol interleaves equilibration steps with mixing and splitting. During the first E q u i l i b r a t e , only the second reaction is active in sample A, due to its initial conditions, yielding monotonic changes in concentrations. During the second E q u i l i b r a t e , similarly, only the third reaction is active in sample B. During the third E q u i l i b r a t e , after samples A and B have been manipulated and mixed, all three reactions are active in sample E, yielding an oscillation.
The semantics of Definition 6 can be used to unravel the behavior of P s p l i t & m i x . In particular, integration of the CRN C in samples A and B yields states S A , S B that, after some split and mix manipulations, determine the initial conditions for both mean and covariance for the integration of C in sample E.
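The liquid-handling operators act on Gaussian states as linear maps. As a sketch of this idea (illustrative only, not the exact operators of Definition 6, which are not reproduced in this section): under the well-mixed assumption, Split leaves concentrations untouched and divides the volume, while Mix takes a volume-weighted average of concentrations, so means and covariances transform linearly; independence of the two mixed samples is assumed here:

```python
import numpy as np

def split(state, p):
    """Split a sample in two: under the well-mixed assumption, the
    concentration mean and covariance are unchanged, and only the
    volume divides between the two parts."""
    mu, cov, vol, temp = state
    return (mu, cov, p * vol, temp), (mu, cov, (1.0 - p) * vol, temp)

def mix(s1, s2):
    """Mix two independent samples: concentrations combine volume-weighted,
    y = w1*x1 + w2*x2, so the Gaussian state transforms linearly:
    mean -> w1*mu1 + w2*mu2 and cov -> w1^2*cov1 + w2^2*cov2."""
    mu1, cov1, v1, t1 = s1
    mu2, cov2, v2, t2 = s2
    w1, w2 = v1 / (v1 + v2), v2 / (v1 + v2)
    return (w1 * mu1 + w2 * mu2,
            w1**2 * cov1 + w2**2 * cov2,
            v1 + v2, t1)   # equal temperatures assumed here
```

For instance, splitting sample A1 with p = 0.5 and mixing the kept half D (0.5 μL) with B1 (1 μL) weights their concentrations 1/3 and 2/3 in sample E.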
We can numerically evaluate the protocol following the semantics: this is shown in Figure 5 (Left and Center), where the protocol evaluation results in three simulations that implement the three integration steps, where each simulation is fed the results of the previous simulations and protocol operations.
To show the usefulness of the integrated semantics, in Figure 5 (Right) we perform a global sensitivity analysis of the whole protocol with respect to the protocol parameters $t_1 = 100$, $t_2 = 100$, $t_3 = 1000$ (the durations of the three $Equilibrate$ steps) and $s_1 = 0.5$ (the proportion of the $Split$). We produce density plots of the means and variances of $a, b, c$ at the end of the protocol when $t_1, t_2, t_3, s_1$ are jointly and randomly varied by up to ±5%. This is thus a sensitivity analysis of the joint effects of three simulations connected by liquid-handling steps. It is obtained by providing the whole parameterized protocol, not its separate parts, to the harness that varies the parameters and plots the results. We could similarly vary the initial concentrations and reaction rates, or those together with the protocol parameters.

4. Discussion

Automation already helps scale up the production of chemical and biochemical substances, but it is also hoped that it will address general reproducibility problems. In the extreme, even one-off experiments with no expectation of reproduction or scaling up should be automated, so that the provenance of the data they produce can be properly recorded. Most pieces of lab equipment are already computer controlled, but automating their interconnection and integration is still a challenge, and one that results in poor record keeping. A large amount of bottom-up development will be necessary to combine all these pieces of equipment into “fabrication lines”. However, it is also important to start thinking top-down about what the resulting connecting “glue” should look like, as we have attempted to do in this paper. This is because, if automation is the ultimate goal, then some pieces of equipment that are hard to automate should be discarded entirely.
Consider, for example, the Split operation from Definition 6, which separates a sample into two samples. That represents a vast array of laboratory procedures, from simple pipetting to microfluidic separation, each having its own tolerances and error distributions (which are well characterized and could be included into the language and the analysis). Despite all those variations, it is conceivable that a protocol compiler could take an abstract Split instruction, and a known piece of equipment, and automatically tailor the instruction for that equipment. Digital microfluidics is particularly appealing in this respect because of the relative ease and uniformity of such tailoring [22]. Therefore, one could decide to work entirely within digital microfluidics, perhaps helping to make that area more robust, and avoid other kinds of equipment.
Are there drawbacks to this approach? First, as we mentioned, equipment should be selected based on ease of automation: whole new lab procedures may need to be developed in replacement, along with standards for equipment automation. Second, protocol languages are more opaque than either sets of mathematical equations or sequences of laboratory steps. This needs to be compensated for with appropriate user interfaces for common use in laboratories.
About the second point, as a first step, we have separately produced an easily deployed app (Kaemika [15]) that supports the integrated language described here, although pragmatic details of the syntax diverge slightly. We have used it to run simulations and to produce Figure 2 and Figure 5 (see Appendix B). It uses a standard ODE solver for simulating the “Equilibrate” steps of the protocols, i.e., the chemical reaction kinetics. It uses the linear noise approximation (LNA) for stochastic simulations; that is, means and variances of concentrations are computed by an extended set of ODEs and solved with the same ODE solver along with the deterministic rate equations. The liquid-handling steps of the protocols are handled following the semantics of Definition 6, noting that in the stochastic case means and variances are propagated into and out of successive equilibrate and liquid-handling steps. The result is an interleaved simulation of equilibrate and liquid-handling steps, which is presented as an interleaved graphical rendering of concentration trajectories and of droplet movements on a simulated microfluidic device. The sequence of protocol steps can be extracted as a graph, omitting the detailed kinetics. The kinetic equations operating at each step of the protocol can be extracted as well.
In this context, we should also stress that our proposed Gaussian semantics makes two assumptions: (1) that molecular processes described by CRNs and liquid-handling operations can be modeled by a Gaussian process, and (2) that the collected data are corrupted by additive Gaussian noise. The former assumption is justified by the precision of the lab equipment performing liquid-handling operations, whose uncertainty is generally very small, and by the Central Limit Theorem (CLT). The CLT guarantees that the solution of the Chemical Master Equation (CME) converges to a Gaussian process in the limit of a high number of molecules [23], as is common in wet-lab experiments. In particular, as illustrated in Experiment 5.1 in [8], a number of molecules on the order of hundreds for each species already generally guarantees that a Gaussian approximation is accurate. Clearly, in experiments with single molecules such an approximation may be inaccurate, and a full treatment of the CME would be more appropriate [24]. The assumption that observations are corrupted by additive Gaussian noise is standard [25]. However, we acknowledge that certain scenarios may require non-Gaussian or multiplicative observation noise. In this case, we stress that Gaussian process regression can still be performed successfully, at the price of introducing approximations in the computation of the posterior mean and variance [6].
As automation standards are developed for laboratory equipment, we will be able to target our language to such standards, and extend or adapt it as needed. In this context, our future work includes extensions to both the syntax and semantics of our language to cover common laboratory procedures that we have not investigated here, for example, changing the temperature of a sample (as in a thermal cycler) or its volume (as in evaporation or dilution). These are easy to add to our abstract framework, but each corresponds to a whole family of lab equipment that may need to be modeled and integrated in detail. Furthermore, we plan to extend the semantics to allow for more general stochastic processes, which may be needed when modeling single molecules.

5. Conclusions

We have introduced a probabilistic framework that rigorously describes the joint handling of liquid-manipulation steps and chemical kinetics throughout the execution of an experimental protocol, with particular attention to the propagation of uncertainty and the optimization of the protocol parameters. A central contribution of this paper is the distinction between the intrinsic and extrinsic approach to noise, which leads to a much simplified semantics under a chemically justified assumption of well-mixed solutions. The semantics reduces to operations on Gaussian states, where any modeled or hypothetical protocol operation is a symmetric linear operator on Gaussian states.
The Gaussian process semantics approach is novel to this work and differs from the Piecewise Deterministic Markov Process semantics in [5], which treats chemical evolution as deterministic. The semantics in this work concerns a collection of deterministic protocol operations, but note that (1) stochasticity in chemical kinetics is propagated across protocol operations, making the whole protocol stochastic, and (2) we have shown examples of how to easily incorporate new protocol operators as linear operators on Gaussian states, including ones that introduce their own additional stochasticity. The Gaussian approach in this paper enables principled integration of a protocol model with experimental data, which in turn enables automated optimization and analysis of experimental biological protocols. The syntax of protocols is chosen for mathematical simplicity, reflecting the one in [5]; a closely related but more pragmatic syntax is now implemented in [15].

Author Contributions

Conceptualization, L.C., M.K. and L.L.; methodology, L.C., M.K. and L.L.; software, L.C. and L.L.; validation, L.C., M.K. and L.L.; formal analysis, L.C., M.K. and L.L.; investigation, L.C., M.K. and L.L.; writing—original draft preparation, L.C., M.K. and L.L.; writing—review and editing, L.C., M.K. and L.L.; visualization, L.C. and L.L.; supervision, L.C. and M.K.; project administration, L.C.; funding acquisition, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the ERC under the European Union’s Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115). Luca Cardelli was funded by a Royal Society Research Professorship RP/R/180001 & RP/EA/180013.

Data Availability Statement

All data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Simulation Script

This is the script for the case study of Section 3.2 for the protocol P s p l i t & m i x , in Kaemika [15]. The protocol begins at ‘species {c}’ and ends at ‘equilibrate E’. For the sensitivity analysis of Figure 5, a function f abstracts the equilibrate time parameters e1,e2,e3 and the split proportion parameter s1 and yields the concentrations of a,b,c at the end of the protocol. A multivariate random variable X, over a uniform multidimensional sample space w, is constructed from f to vary the parameters. Then X is sampled and plotted.
function f(number e1 e2 e3 s1) {
define
    species {c}
    sample A {1μL, 20C}
    species a @ 10mM in A
    amount c @ 1mM in A
    a + c -> a + a {1}
    equilibrate A1 = A for e1
    sample B {1μL, 20C}
    species b @ 10mM in B
    amount c @ 1mM in B
    b + c -> c + c {1}
    equilibrate B1 = B for e2
    split C,D = A1 by s1
    dispose C
    mix E = D, B1
    a + b -> b + b {1}
    equilibrate E for e3
  yield [observe(a,E), observe(b,E), observe(c,E)]
}
random X(omega w) {
  f(100*(1+(w(0)-0.5)/10), 100*(1+(w(1)-0.5)/10), 1000*(1+(w(2)-0.5)/10),
    0.5*(1+(w(3)-0.5)/10))
}
draw 3000 from X
This script produces a density plot of the sensitivity of the concentrations (when shift-clicking the Play button to run uninterrupted). For the sensitivity of the standard deviation of the concentrations, replace the result of f with
  yield [observe(sqrt(var(a)),E), observe(sqrt(var(b)),E),
         observe(sqrt(var(c)),E)]
and run the script with LNA enabled.

Appendix B. Data for Gibson Assembly

We report the experimental data employed for the Gibson assembly protocol in Section 3.1 (from Figure 2a of [9]). For simplicity we report the output values of only species O and omit units of measurement, which are mM for concentration, μL for volume, degrees Celsius for temperature, and seconds for equilibration time:
$$d_1 = ((1, 0, 0, 0, 1, 20),\ (1, 0),\ 0)$$
$$d_2 = ((1, 0, 0, 0, 1, 20),\ (1, 120),\ 0)$$
$$d_3 = ((1, 0, 0, 0, 1, 20),\ (1, 240),\ 0.05)$$
$$d_4 = ((1, 0, 0, 0, 1, 20),\ (1, 360),\ 0.56)$$
$$d_5 = ((1, 0, 0, 0, 1, 20),\ (1, 480),\ 0.8)$$
$$d_6 = ((1, 0, 0, 0, 1, 20),\ (1, 660),\ 0.86)$$
$$d_7 = ((1, 0, 0, 0, 1, 20),\ (1, 840),\ 0.9)$$
$$d_8 = ((1, 0, 0, 0, 1, 20),\ (1, 960),\ 0.88),$$
where, for example, in $d_1$ the initial concentration of both AB and BA is 1 (note that in this case the optimization variables are $x_{BA}$ and $T$; hence, in $d_1$, the vector $(1, 0)$ represents the values assigned to those variables during that particular experiment), all other species are absent at time 0, the volume and temperature at which the experiment is performed are 1 and 20, and the observed value of O at the end of the protocol for $T = 0$ is 0. We assume additive Gaussian observation noise (noise in the collection of the data) with standard deviation $\sigma = 0.1$.

References

  1. Murphy, N.; Petersen, R.; Phillips, A.; Yordanov, B.; Dalchau, N. Synthesizing and tuning stochastic chemical reaction networks with specified behaviours. J. R. Soc. Interface 2018, 15, 20180283.
  2. Ananthanarayanan, V.; Thies, W. Biocoder: A programming language for standardizing and automating biology protocols. J. Biol. Eng. 2010, 4, 1–13.
  3. Cardelli, L.; Češka, M.; Fränzle, M.; Kwiatkowska, M.; Laurenti, L.; Paoletti, N.; Whitby, M. Syntax-guided optimal synthesis for chemical reaction networks. In International Conference on Computer Aided Verification; Springer: Berlin/Heidelberg, Germany, 2017; pp. 375–395.
  4. Ang, J.; Harris, E.; Hussey, B.J.; Kil, R.; McMillen, D.R. Tuning response curves for synthetic biology. ACS Synth. Biol. 2013, 2, 547–567.
  5. Abate, A.; Cardelli, L.; Kwiatkowska, M.; Laurenti, L.; Yordanov, B. Experimental biological protocols with formal semantics. In International Conference on Computational Methods in Systems Biology; Springer: Berlin/Heidelberg, Germany, 2018; pp. 165–182.
  6. Rasmussen, C.E.; Williams, C.K.; Bach, F. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
  7. Van Kampen, N.G. Stochastic Processes in Physics and Chemistry; Elsevier: Amsterdam, The Netherlands, 1992; Volume 1.
  8. Cardelli, L.; Kwiatkowska, M.; Laurenti, L. Stochastic analysis of chemical reaction networks using linear noise approximation. Biosystems 2016, 149, 26–33.
  9. Gibson, D.G.; Young, L.; Chuang, R.Y.; Venter, J.C.; Hutchison, C.A.; Smith, H.O. Enzymatic assembly of DNA molecules up to several hundred kilobases. Nat. Methods 2009, 6, 343–345.
  10. Begley, C.G.; Ellis, L.M. Raise standards for preclinical cancer research. Nature 2012, 483, 531–533.
  11. Ott, J.; Loveless, T.; Curtis, C.; Lesani, M.; Brisk, P. Bioscript: Programming safe chemistry on laboratories-on-a-chip. Proc. ACM Program. Lang. 2018, 2, 1–31.
  12. Baker, M. 1500 scientists lift the lid on reproducibility. Nat. News 2016, 533, 452.
  13. Bates, M.; Berliner, A.; Lachoff, J.; Jaschke, P.; Groban, E. Wet lab accelerator: A web-based application democratizing laboratory automation for synthetic biology. ACS Synth. Biol. 2017, 6, 167–171.
  14. Synthace. Antha. Available online: https://www.synthace.com/platform/first-steps-with-antha/ (accessed on 16 October 2021).
  15. Cardelli, L. Kaemika app: Integrating protocols and chemical simulation. In International Conference on Computational Methods in Systems Biology; Springer: Berlin/Heidelberg, Germany, 2020; pp. 373–379.
  16. Scott, D.; Strachey, C. Toward a Mathematical Semantics for Computer Languages; Oxford University Computing Laboratory, Programming Research Group Oxford: Oxford, UK, 1971; Volume 1.
  17. Cardelli, L. Two-domain DNA strand displacement. Math. Struct. Comput. Sci. 2013, 23, 247–271.
  18. Bortolussi, L.; Cardelli, L.; Kwiatkowska, M.; Laurenti, L. Central limit model checking. ACM Trans. Comput. Log. (TOCL) 2019, 20, 1–35.
  19. Laurenti, L.; Csikasz-Nagy, A.; Kwiatkowska, M.; Cardelli, L. Molecular Filters for Noise Reduction. Biophys. J. 2018, 114, 3000–3011.
  20. Micchelli, C.A.; Xu, Y.; Zhang, H. Universal Kernels. J. Mach. Learn. Res. 2006, 7, 2651–2667.
  21. Boyd, S.; Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  22. Newman, S.; Stephenson, A.P.; Willsey, M.; Nguyen, B.H.; Takahashi, C.N.; Strauss, K.; Ceze, L. High density DNA data storage library via dehydration with digital microfluidic retrieval. Nat. Commun. 2019, 10, 1–6.
  23. Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 282.
  24. Schwabe, A.; Rybakova, K.N.; Bruggeman, F.J. Transcription stochasticity of complex gene regulation models. Biophys. J. 2012, 103, 1152–1161.
  25. Leake, M. Analytical tools for single-molecule fluorescence imaging in cellulo. Phys. Chem. Chem. Phys. 2014, 16, 12635–12647.
Figure 1. Processing a unified description for an experimental protocol. A program integrates a biophysical model of the underlying molecular system with the steps of the protocol. In this case, the protocol comprises a single instruction, which lets a sample equilibrate for t seconds, where t is a parameter. The initial concentration of the sample is 1 for the first species and 0 for all the others. The value of t is selected as the one that maximizes a cost function, in this case the difference between $y_5$ and $y_1$ after the execution of the protocol. The optimization is performed on a Gaussian process given by the semantics of the program (biophysical model and protocol) integrated with experimental data.
Figure 2. Evolution of μ a (red), μ b (green), μ c (blue) for the CRN reported in Example 1. Inset: the respective variances.
Figure 3. Mean and variance of species b for the CRN reported in Example 1 for r = [ 0.1 , r b , 0.001 , 1 , 20 ] and u = [ T ] . (Left:) Evolution of μ (prior mean of species b given by Equation (1)), μ p (posterior mean of species b given by Equation (3)), and the true dynamics of species b, assumed to be a deterministic function for this example, for r b = 0.001 (initialization variable relative to species b). It is possible to observe how, with just a few data points, the posterior mean reflects correctly the true dynamics. (Right:) Standard deviation of b after training (square root of solution of Equation (4)) as a function of r b and T. The variance is higher for combinations of T and r b where no training data are available.
Figure 4. Optimal values for x B A and T and variance of the predictive Gaussian process. (Left:) Optimal values of x B A and T 1000 for different values of λ . (Right:) Variance of O ( T ) after training (Equation (2)) as a function of x B A and T. The variance is minimized for x B A 1 . This is due to the fact that all training data have x B A = 1 .
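The left panel of Figure 4 reports optima that vary with λ, which is characteristic of an objective trading off the predicted output against its predictive uncertainty. A minimal sketch of such a λ-penalized grid search is below; the surrogate mean and standard-deviation surfaces are hypothetical stand-ins, not the paper's trained Gaussian process.

```python
import numpy as np

# Hypothetical surrogate for the protocol output O(T): posterior mean
# and standard deviation over a grid of (x_BA, T); both surfaces are
# stand-ins for the trained GP's predictions.
x_grid = np.linspace(0.5, 1.5, 41)    # candidate x_BA values
T_grid = np.linspace(100, 2000, 96)   # candidate durations (s)

X, T = np.meshgrid(x_grid, T_grid, indexing="ij")
mu = X * (1 - np.exp(-T / 500.0))     # predicted yield (illustrative)
sd = 0.05 + 0.3 * np.abs(X - 1.0)     # uncertainty grows away from x_BA = 1

def optimize(lam):
    """Maximize the lambda-penalized objective mu - lam * sd."""
    obj = mu - lam * sd
    i, j = np.unravel_index(np.argmax(obj), obj.shape)
    return x_grid[i], T_grid[j]
```

A larger λ penalizes uncertainty more heavily and pushes the optimum toward x_BA ≈ 1, where the training data lie and the predictive variance is smallest, mirroring the right panel of the figure.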
Figure 5. (Left, Center:) Evolution of a (red), b (green), c (blue) for protocol P s p l i t & m i x , showing mean (thick lines) and standard deviation (thin lines), separately for Sample E, with some trajectory overlaps in Samples A and B. Horizontal axis is time (s), vertical axis is concentration (mM). Sample A is simulated first, then Sample B, and finally Sample E, where the standard deviations start above zero due to propagating the final states of the earlier simulations. (Right:) Density plots for global sensitivity analysis over 3000 runs, displaying the sensitivity at the end of the protocol of the mean (top) and standard deviation (bottom) of a, b, c. Sensitivity is with respect to the three E q u i l i b r a t e duration parameters and the S p l i t proportion parameter: those parameters are simultaneously drawn from uniform distributions, each varying by up to ±5%. Horizontal axis is concentration (mM), vertical axis is the kernel density estimate ( × 10 3 ) with a standard normal distribution kernel and a bandwidth of 1/100 of the data range. Thick vertical lines mark the mean; thin vertical lines mark the mean ± standard deviation.
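The sampling-and-smoothing procedure behind the density plots of Figure 5 (draw all perturbed parameters simultaneously from uniform ±5% distributions, evaluate the end-of-protocol output, then apply a Gaussian kernel density estimate with bandwidth 1/100 of the data range) can be sketched as follows. The `protocol_output` function is a hypothetical stand-in; in the paper the output would come from simulating the protocol under the LNA semantics.

```python
import numpy as np

rng = np.random.default_rng(0)

def protocol_output(durations, split):
    """Hypothetical stand-in for an end-of-protocol concentration;
    the real value would come from simulating the protocol."""
    return 0.8 * split * (1 - np.exp(-durations.sum() / 3000.0))

# Nominal parameters: three Equilibrate durations (s) and the Split proportion.
nominal_d = np.array([1000.0, 1000.0, 1000.0])
nominal_s = 0.5

# Draw all four parameters simultaneously, each uniform within +/- 5%.
n = 3000
samples = np.empty(n)
for k in range(n):
    d = nominal_d * rng.uniform(0.95, 1.05, size=3)
    s = nominal_s * rng.uniform(0.95, 1.05)
    samples[k] = protocol_output(d, s)

# Gaussian-kernel density estimate, bandwidth = 1/100 of the data range.
h = (samples.max() - samples.min()) / 100.0
xs = np.linspace(samples.min(), samples.max(), 200)
kde = np.exp(-0.5 * ((xs[:, None] - samples[None, :]) / h) ** 2)
density = kde.sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
```

Plotting `density` against `xs` yields a smoothed histogram of the output's sensitivity to the perturbed parameters, analogous to one curve in the figure's right panel.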
Share and Cite

MDPI and ACS Style

Cardelli, L.; Kwiatkowska, M.; Laurenti, L. A Language for Modeling and Optimizing Experimental Biological Protocols. Computation 2021, 9, 107. https://doi.org/10.3390/computation9100107
