Feature Papers for the Fifth Year Anniversary of the Founding of Processes

A special issue of Processes (ISSN 2227-9717).

Deadline for manuscript submissions: closed (31 May 2018) | Viewed by 144170

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor

Prof. Dr. Michael A. Henson
Department of Chemical Engineering and the Institute for Applied Life Sciences, University of Massachusetts Amherst, N527 Life Sciences Laboratories, 240 Thatcher Way, Amherst, MA 01003, USA
Interests: complex dynamic systems; systems biology; metabolic modeling; circadian systems modeling

Special Issue Information

Dear Colleagues,

This Special Issue is designed to celebrate the fifth anniversary of the founding of the open access journal Processes. The issue will highlight a diverse set of topics related to process and systems technology for chemical, materials, biochemical, pharmaceutical and biomedical applications. The scope of this Special Issue includes, but is not limited to: chemical and biochemical processes; cellular systems; material manufacturing; and systems modeling, simulation, optimization and control. We are particularly interested in receiving manuscripts that integrate experimental and theoretical/computational studies, as well as contributions from industry. Manuscripts for this important Special Issue of Processes will be accepted by invitation only.

Prof. Dr. Michael A. Henson
Founding Editor-in-Chief

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Processes is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 350 CHF (Swiss Francs). Please note that for papers submitted after 1 July 2018 an APC of 850 CHF applies. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical systems
  • chemical processes
  • computational systems biology
  • dynamic modeling
  • materials manufacturing
  • microbial systems
  • process control and optimization

Published Papers (20 papers)


Editorial


3 pages, 157 KiB  
Editorial
Special Issue on Feature Papers for Celebrating the Fifth Anniversary of the Founding of Processes
by Michael A. Henson
Processes 2019, 7(1), 15; https://doi.org/10.3390/pr7010015 - 01 Jan 2019
Cited by 1 | Viewed by 2280
Abstract
The Special Issue “Feature Papers for Celebrating the Fifth Anniversary of the Founding of Processes” represents a landmark for this open access journal covering chemical, biological, materials, pharmaceutical, and environmental systems as well as general computational methods for process and systems engineering. [...]

Research


15 pages, 4662 KiB  
Article
Glycosylation Flux Analysis of Immunoglobulin G in Chinese Hamster Ovary Perfusion Cell Culture
by Sandro Hutter, Moritz Wolf, Nan Papili Gao, Dario Lepori, Thea Schweigler, Massimo Morbidelli and Rudiyanto Gunawan
Processes 2018, 6(10), 176; https://doi.org/10.3390/pr6100176 - 01 Oct 2018
Cited by 16 | Viewed by 5285
Abstract
The terminal sugar molecules of the N-linked glycan attached to the fragment crystallizable (Fc) region are a critical quality attribute of therapeutic monoclonal antibodies (mAbs) such as immunoglobulin G (IgG). There exists naturally-occurring heterogeneity in the N-linked glycan structure of mAbs, and such heterogeneity has a significant influence on the clinical safety and efficacy of mAb drugs. We previously proposed a constraint-based modeling method called glycosylation flux analysis (GFA) to characterize the rates (fluxes) of intracellular glycosylation reactions. One contribution of this work is a significant improvement in the computational efficiency of the GFA, which is beneficial for analyzing large datasets. Another contribution of our study is the analysis of IgG glycosylation in continuous perfusion Chinese Hamster Ovary (CHO) cell cultures. The GFA of the perfusion cell culture data indicated that the dynamical changes of IgG glycan heterogeneity are mostly attributed to alterations in the galactosylation flux activity. By using a random forest regression analysis of the IgG galactosylation flux activity, we were further able to link the dynamics of galactosylation with two process parameters: cell-specific productivity of IgG and extracellular ammonia concentration. The characteristics of IgG galactosylation dynamics agree well with what we previously reported for fed-batch cultivations of the same CHO cell strain.

13 pages, 2131 KiB  
Article
Effect of the Length-to-Width Aspect Ratio of a Cuboid Packed-Bed Device on Efficiency of Chromatographic Separation
by Guoqiang Chen and Raja Ghosh
Processes 2018, 6(9), 160; https://doi.org/10.3390/pr6090160 - 06 Sep 2018
Cited by 4 | Viewed by 4963
Abstract
In recent papers, we have discussed the use of cuboid packed-bed devices as alternatives to columns for chromatographic separations. These devices address some of the major flow distribution challenges faced by preparative columns used for process-scale purification of biologicals. Our previous studies showed that significant improvements in separation metrics, such as the number of theoretical plates, peak shape, and peak resolution in multi-protein separation, could be achieved. However, the length-to-width aspect ratio of a cuboid packed-bed device could potentially affect its performance. A systematic comparison of six cuboid packed-bed devices having different length-to-width aspect ratios showed that this ratio had a significant effect on separation performance. The number of theoretical plates per meter in the best-performing cuboid packed-bed device was about 4.5 times higher than that in its equivalent commercial column. On the other hand, the corresponding number in the worst-performing cuboid packed-bed device was lower than that in the column. A head-to-head comparison of the best-performing cuboid packed-bed device and its equivalent column was carried out. Performance metrics compared included the widths and dispersion indices of flow-through and eluted protein peaks. The optimized cuboid packed-bed device significantly outperformed its equivalent column with regard to all these attributes.
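For readers unfamiliar with the plate-count metric compared in this abstract, the half-height method is a standard way to estimate theoretical plates from a chromatogram. A minimal sketch follows; the peak numbers are hypothetical, chosen only to mirror a roughly 4.5-fold difference in plates per meter like the one reported:

```python
def plate_count(t_r, w_half):
    """Number of theoretical plates from retention time and
    peak width at half height (half-height method)."""
    return 5.54 * (t_r / w_half) ** 2

def plates_per_meter(t_r, w_half, bed_length_m):
    return plate_count(t_r, w_half) / bed_length_m

# Hypothetical peaks: same retention time, but the better device
# gives a narrower peak and hence a higher plate count.
n_column = plates_per_meter(t_r=120.0, w_half=12.0, bed_length_m=0.1)
n_cuboid = plates_per_meter(t_r=120.0, w_half=5.7, bed_length_m=0.1)
print(n_cuboid / n_column)  # ~4.4-fold more plates per meter
```

Because the plate count scales with the inverse square of peak width, a modest narrowing of the peak translates into a large efficiency gain.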

28 pages, 999 KiB  
Article
Dynamic Sequence Specific Constraint-Based Modeling of Cell-Free Protein Synthesis
by David Dai, Nicholas Horvath and Jeffrey Varner
Processes 2018, 6(8), 132; https://doi.org/10.3390/pr6080132 - 17 Aug 2018
Cited by 6 | Viewed by 4847
Abstract
Cell-free protein expression has emerged as an important approach in systems and synthetic biology, and a promising technology for personalized point-of-care medicine. Cell-free systems derived from crude whole cell extracts have shown remarkable utility as a protein synthesis technology. However, if cell-free platforms for on-demand biomanufacturing are to become a reality, the performance limits of these systems must be defined and optimized. Toward this goal, we modeled E. coli cell-free protein expression using a sequence specific dynamic constraint-based approach in which metabolite measurements were directly incorporated into the flux estimation problem. A cell-free metabolic network was constructed by removing growth associated reactions from the iAF1260 reconstruction of K-12 MG1655 E. coli. Sequence specific descriptions of transcription and translation processes were then added to this metabolic network to describe protein production. A linear programming problem was then solved over short time intervals to estimate metabolic fluxes through the augmented cell-free network, subject to material balance, time rate of change, and metabolite measurement constraints. The approach captured the biphasic cell-free production of a model protein, chloramphenicol acetyltransferase. Flux variability analysis suggested that cell-free metabolism was potentially robust; for example, the rate of protein production could be met by flux through the glycolytic, pentose phosphate, or Entner-Doudoroff pathways. Variation of the metabolite constraints revealed central carbon metabolites, specifically upper glycolysis, the tricarboxylic acid (TCA) cycle, and the pentose phosphate pathway, to be the most effective at training a predictive model, while energy and amino acid measurements were less effective. Irrespective of the measurement set, the metabolic fluxes (for the most part) remained unidentifiable. These findings suggested that dynamic constraint-based modeling could aid in the design of cell-free protein expression experiments for metabolite prediction, but the flux estimation problem remains challenging. Furthermore, while we modeled the cell-free production of only a single protein in this study, the sequence specific dynamic constraint-based modeling approach presented here could be extended to multi-protein synthetic circuits, RNA circuits, or even small molecule production.
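The material-balance constraint at the heart of this kind of flux estimation is dC/dt = S·v, where S is the stoichiometric matrix and v the flux vector. As a toy sketch (not the authors' LP formulation, which handles underdetermined networks over short time intervals), here is the special case where S is square and invertible, so measured time rates of change determine the fluxes directly:

```python
def solve_2x2(S, b):
    """Solve S @ v = b for a 2x2 system by Cramer's rule."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    v0 = (b[0] * S[1][1] - S[0][1] * b[1]) / det
    v1 = (S[0][0] * b[1] - b[0] * S[1][0]) / det
    return [v0, v1]

# Toy stoichiometry: reaction 1 produces metabolite A;
# reaction 2 converts A into B. Rows = metabolites (A, B),
# columns = fluxes (v1, v2).
S = [[1.0, -1.0],
     [0.0,  1.0]]

# Hypothetical measured time rates of change dC/dt, e.g. from
# finite differences of metabolite measurements.
dCdt = [0.2, 0.5]

v = solve_2x2(S, dCdt)
print(v)  # fluxes consistent with the material balance dC/dt = S v
```

Real metabolic networks have far more reactions than measured metabolites, which is why the paper solves a constrained linear program instead and why many fluxes remain unidentifiable.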

15 pages, 1431 KiB  
Article
A Cybernetic Approach to Modeling Lipid Metabolism in Mammalian Cells
by Lina Aboulmouna, Shakti Gupta, Mano R. Maurya, Frank T. DeVilbiss, Shankar Subramaniam and Doraiswami Ramkrishna
Processes 2018, 6(8), 126; https://doi.org/10.3390/pr6080126 - 12 Aug 2018
Cited by 6 | Viewed by 5260
Abstract
The goal-oriented control policies of cybernetic models have been used to predict metabolic phenomena such as the behavior of gene knockout strains, complex substrate uptake patterns, and dynamic metabolic flux distributions. Cybernetic theory builds on the principle that metabolic regulation is driven towards attaining goals that correspond to an organism’s survival or displaying a specific phenotype in response to a stimulus. Here, we have modeled the prostaglandin (PG) metabolism in mouse bone marrow derived macrophage (BMDM) cells stimulated by Kdo2-Lipid A (KLA) and adenosine triphosphate (ATP), using cybernetic control variables. Prostaglandins are a well characterized set of inflammatory lipids derived from arachidonic acid. The transcriptomic and lipidomic data for prostaglandin biosynthesis and conversion were obtained from the LIPID MAPS database. The model parameters were estimated using a two-step hybrid optimization approach. A genetic algorithm was used to determine the population of near optimal parameter values, and a generalized constrained non-linear optimization employing a gradient search method was used to further refine the parameters. We validated our model by predicting an independent data set, the prostaglandin response of KLA primed ATP stimulated BMDM cells. We show that the cybernetic model captures the complex regulation of PG metabolism and provides a reliable description of PG formation.
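The two-step hybrid estimation the abstract describes (a genetic algorithm to find near-optimal parameter populations, then gradient-based local refinement) can be sketched on a toy objective. Everything below is illustrative: the quadratic loss stands in for the real model-vs-data error, and the GA operators are a minimal textbook variant, not the authors' implementation:

```python
import random

def loss(p):
    # Toy objective standing in for the model-vs-data error;
    # the true optimum is at (2, -1).
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

def genetic_search(n_gen=40, pop_size=30, seed=0):
    """Step 1: population-based global search (selection,
    averaging crossover, Gaussian mutation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=loss)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = parents + children
    return min(pop, key=loss)

def gradient_refine(p, lr=0.1, steps=200, h=1e-6):
    """Step 2: local refinement via finite-difference gradient descent."""
    p = list(p)
    for _ in range(steps):
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += h
            grad.append((loss(q) - loss(p)) / h)
        p = [x - lr * g for x, g in zip(p, grad)]
    return p

near_opt = genetic_search()
refined = gradient_refine(near_opt)
print(refined)  # close to the optimum (2, -1)
```

The division of labor is the point: the stochastic global stage avoids poor local minima, while the deterministic local stage supplies the precision that the GA alone lacks.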

19 pages, 3492 KiB  
Article
Modeling the Dynamics of Human Liver Failure Post Liver Resection
by Babita K. Verma, Pushpavanam Subramaniam and Rajanikanth Vadigepalli
Processes 2018, 6(8), 115; https://doi.org/10.3390/pr6080115 - 04 Aug 2018
Cited by 9 | Viewed by 4254
Abstract
Liver resection is an important clinical intervention to treat liver disease. Following liver resection, patients exhibit a wide range of outcomes including normal recovery, suppressed recovery, or liver failure, depending on the regenerative capacity of the remnant liver. The objective of this work is to study the distinct patient outcomes post-hepatectomy and determine the processes that are accountable for liver failure. Our model-based approach shows that cell death is one of the important processes, but not the sole controlling process, responsible for liver failure. Additionally, our simulations showed wide variation in the timescale of liver failure that is consistent with the clinically observed timescales of post-hepatectomy liver failure scenarios. Liver failure can take place either instantaneously or after a certain delay. We analyzed a virtual patient cohort and concluded that remnant liver fraction is a key regulator of the timescale of liver failure, with higher remnant liver fraction leading to longer time delay prior to failure. Our results suggest that, for a given remnant liver fraction, modulating a combination of cell death controlling parameters and metabolic load may help shift the clinical outcome away from post-hepatectomy liver failure towards normal recovery.
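The qualitative finding that a larger remnant fraction delays failure can be illustrated with a deliberately simple toy model, which is not the paper's model: logistic regrowth of liver mass against a fixed metabolic load, with failure when mass falls below a threshold. All dynamics and numbers here are hypothetical:

```python
def time_to_failure(remnant_fraction, load=0.26, r=1.0,
                    fail_at=0.05, dt=0.01, t_max=200.0):
    """Euler-integrate dm/dt = r*m*(1-m) - load and return the
    time at which liver mass m drops below fail_at (None = recovered)."""
    m, t = remnant_fraction, 0.0
    while t < t_max:
        m += dt * (r * m * (1.0 - m) - load)
        t += dt
        if m < fail_at:
            return t
    return None

# With load > r/4, regrowth can never keep up, so failure always
# occurs in this toy model, but a larger remnant fraction delays it.
for frac in (0.2, 0.4, 0.6):
    print(frac, time_to_failure(frac))
```

Because the equation is autonomous, a trajectory starting higher simply passes through the lower starting points later, which is why the delay grows monotonically with the remnant fraction.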

17 pages, 420 KiB  
Article
Modeling and Optimal Design of Absorbent Enhanced Ammonia Synthesis
by Matthew J. Palys, Alon McCormick, E. L. Cussler and Prodromos Daoutidis
Processes 2018, 6(7), 91; https://doi.org/10.3390/pr6070091 - 18 Jul 2018
Cited by 58 | Viewed by 18238
Abstract
Synthetic ammonia produced from fossil fuels is essential for agriculture. However, the emissions-intensive nature of the Haber–Bosch process and the depleting supply of these fossil fuels have motivated the production of ammonia using renewable sources of energy. Small-scale, distributed processes may better enable the use of renewables, but they also entail a loss of economies of scale, so the high capital cost of the Haber–Bosch process may inhibit this paradigm shift. A process that operates at lower pressure and uses absorption rather than condensation to remove ammonia from unreacted nitrogen and hydrogen has been proposed as an alternative. In this work, a dynamic model of this absorbent-enhanced process is proposed and implemented in gPROMS ModelBuilder. This dynamic model is used to determine optimal designs of this process that minimize the 20-year net present cost at small scales of 100 kg/h to 10,000 kg/h when powered by wind energy. The capital cost of this process scales with a 0.77 capacity exponent, and at production scales below 6075 kg/h, it is less expensive than the conventional Haber–Bosch process.
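A capacity exponent of 0.77 means capital cost grows as the 0.77 power of the capacity ratio, so specific cost falls as scale increases. A quick sketch of this standard power-law scaling rule (the base cost and capacities below are arbitrary illustration values, not figures from the paper):

```python
def scaled_capital_cost(base_cost, base_capacity, capacity, exponent=0.77):
    """Power-law ("six-tenths rule" style) capital cost scaling."""
    return base_cost * (capacity / base_capacity) ** exponent

# Hypothetical base case: a 100 kg/h plant costing 1.0 (arbitrary units).
c_100 = scaled_capital_cost(1.0, 100, 100)
c_1000 = scaled_capital_cost(1.0, 100, 1000)

# Tenfold capacity costs about 5.9x, so cost per unit capacity falls.
print(c_1000)
print(c_1000 / 10)
```

An exponent below 1 quantifies exactly the economies of scale that small, distributed plants give up, which is why lowering capital cost matters so much for the distributed paradigm.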

17 pages, 1504 KiB  
Article
Solving Materials’ Small Data Problem with Dynamic Experimental Databases
by Michael McBride, Nils Persson, Elsa Reichmanis and Martha A. Grover
Processes 2018, 6(7), 79; https://doi.org/10.3390/pr6070079 - 27 Jun 2018
Cited by 18 | Viewed by 5456
Abstract
Materials processing is challenging because the final structure and properties often depend on the process conditions as well as the composition. Past research reported in the archival literature provides a valuable source of information for designing a process to optimize material properties. Typically, the issue is not having too much data (i.e., big data), but rather having a limited amount of data that is sparse relative to a large number of design variables. The full utilization of this information via a structured database can be challenging because of inconsistent and incorrect reporting of information. Here, we present a classification approach specifically tailored to the task of identifying a promising design region from a literature database. This design region includes all high-performing points, as well as some points having poor performance, for the purpose of focusing future experiments. The classification method is demonstrated on two case studies in polymeric materials, namely: poly(3-hexylthiophene) for flexible electronic devices and polypropylene–talc composite materials for structural applications.
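One very simple way to realize "a design region that contains all high performers, possibly with some poor ones" is an axis-aligned bounding box around the high-performing points. This is a hypothetical stand-in for the paper's classifier, with made-up data, shown only to make the design-region idea concrete:

```python
def design_region(points, labels):
    """Axis-aligned bounding box enclosing all high-performing points."""
    highs = [p for p, ok in zip(points, labels) if ok]
    dims = range(len(highs[0]))
    lo = [min(p[i] for p in highs) for i in dims]
    hi = [max(p[i] for p in highs) for i in dims]
    return lo, hi

def inside(point, region):
    lo, hi = region
    return all(l <= x <= h for x, l, h in zip(point, lo, hi))

# Hypothetical literature data: (process variable 1, variable 2);
# True = high-performing material, False = poor.
points = [(0.2, 5.0), (0.4, 6.0), (0.3, 5.5), (0.9, 2.0), (0.35, 5.8)]
labels = [True, True, True, False, True]

region = design_region(points, labels)
print(region)                       # box spanning the high performers
print(inside((0.9, 2.0), region))   # poor point falls outside
```

The box errs on the side of inclusion: every high performer is inside by construction, and any poor points that happen to fall inside simply mark parts of the region where future experiments are still needed.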

17 pages, 2332 KiB  
Article
Optimal Multiscale Capacity Planning in Seawater Desalination Systems
by Hassan Baaqeel and Mahmoud M. El-Halwagi
Processes 2018, 6(6), 68; https://doi.org/10.3390/pr6060068 - 01 Jun 2018
Cited by 13 | Viewed by 4321
Abstract
The increasing demands for water and the dwindling resources of fresh water create a critical need for continually enhancing desalination capacities. This poses a challenge in distressed desalination networks with incessant water demand growth, as the conventional approach of undertaking large expansion projects can lead to low utilization and, hence, low capital productivity. In addition to the options of retrofitting existing desalination units or installing additional grassroots units, there is an opportunity to include emerging modular desalination technologies. This paper develops an optimization framework for capacity planning in distressed desalination networks, considering the integration of conventional plants and emerging modular technologies, such as membrane distillation (MD), as a viable option for capacity expansion. The developed framework addresses the multiscale nature of the synthesis problem, as unit-specific decision variables are subject to optimization, as well as the multiperiod capacity planning of the system. A superstructure representation and optimization formulation are introduced to simultaneously optimize the staging and sizing of desalination units, as well as design and operating variables in the desalination network, over a planning horizon. Additionally, a special case for multiperiod capacity planning in multiple effect distillation (MED) desalination systems is presented. An optimization approach is proposed to solve the mixed-integer nonlinear programming (MINLP) problem, starting with the construction of a project-window interval, pre-optimization screening, modeling of screened configurations, intra-process design variable optimization, and finally, multiperiod flowsheet synthesis. A case study is solved to illustrate the usefulness of the proposed approach.
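The appeal of modular units for multiperiod planning can be illustrated with a toy "just-in-time" staging heuristic: install only enough units each period to cover demand, discounting later expenditures. This is a deliberately simple sketch with hypothetical numbers, not the paper's MINLP formulation, which co-optimizes unit design variables as well:

```python
def stage_modular_units(demand_by_period, unit_capacity,
                        unit_cost, discount_rate=0.08):
    """Install just enough modular units each period to cover demand;
    return the installation plan and its net present cost."""
    capacity, plan, npc = 0.0, [], 0.0
    for t, demand in enumerate(demand_by_period):
        n_new = 0
        while capacity < demand:
            capacity += unit_capacity
            n_new += 1
        plan.append(n_new)
        npc += n_new * unit_cost / (1.0 + discount_rate) ** t
    return plan, npc

# Hypothetical demand growth (m3/day) met with 500 m3/day modular units.
plan, npc = stage_modular_units([800, 1200, 1500, 2300], 500, 1.0)
print(plan)  # units installed per period: [2, 1, 0, 2]
print(npc)
```

Deferring installations lowers the net present cost relative to building all capacity up front, which is the capital-productivity argument the abstract makes against one-shot large expansions.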

19 pages, 2577 KiB  
Article
Prediction of Metabolite Concentrations, Rate Constants and Post-Translational Regulation Using Maximum Entropy-Based Simulations with Application to Central Metabolism of Neurospora crassa
by William R. Cannon, Jeremy D. Zucker, Douglas J. Baxter, Neeraj Kumar, Scott E. Baker, Jennifer M. Hurley and Jay C. Dunlap
Processes 2018, 6(6), 63; https://doi.org/10.3390/pr6060063 - 28 May 2018
Cited by 9 | Viewed by 6306
Abstract
We report the application of a recently proposed approach for modeling biological systems using a maximum entropy production rate principle in lieu of having in vivo rate constants. The method is applied in four steps: (1) a new ordinary differential equation (ODE) optimization approach built on Marcelin’s 1910 mass action equation is used to obtain the maximum entropy distribution; (2) the predicted metabolite concentrations are compared to those generally expected from experiments using a loss function from which post-translational regulation of enzymes is inferred; (3) the system is re-optimized with the inferred regulation, from which rate constants are determined from the metabolite concentrations and reaction fluxes; and finally (4) a full ODE-based, mass action simulation with rate parameters and allosteric regulation is obtained. From the last step, the power characteristics and resistance of each reaction can be determined. The method is applied to the central metabolism of Neurospora crassa, and the flow of material through the three competing pathways of upper glycolysis, the non-oxidative pentose phosphate pathway, and the oxidative pentose phosphate pathway is evaluated as a function of the NADP/NADPH ratio. It is predicted that regulation of phosphofructokinase (PFK) and flow through the pentose phosphate pathway are essential for preventing an extreme level of fructose 1,6-bisphosphate accumulation. Such an extreme level of fructose 1,6-bisphosphate would otherwise result in a glassy cytoplasm with limited diffusion, dramatically decreasing the entropy and energy production rate and, consequently, biological competitiveness.
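Step (4) of the workflow is a full ODE-based mass action simulation. As a minimal illustration of what such a simulation looks like for a single reversible reaction A ⇌ B (the rate constants below are assumed for illustration; the paper's network is of course far larger):

```python
def simulate_reversible(kf, kr, a0, b0, dt=0.001, steps=20000):
    """Euler integration of mass-action kinetics for A <=> B:
    dA/dt = -kf*A + kr*B,  dB/dt = +kf*A - kr*B."""
    a, b = a0, b0
    for _ in range(steps):
        rate = kf * a - kr * b
        a -= dt * rate
        b += dt * rate
    return a, b

a, b = simulate_reversible(kf=2.0, kr=1.0, a0=1.0, b0=0.0)
print(a + b)  # total mass is conserved
print(b / a)  # approaches the equilibrium ratio kf/kr = 2
```

Because the net rate is written once and applied with opposite signs to A and B, the scheme conserves total mass by construction, and the steady state lands exactly where forward and reverse rates balance.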

21 pages, 7799 KiB  
Article
A Systematic Framework for Data Management and Integration in a Continuous Pharmaceutical Manufacturing Processing Line
by Huiyi Cao, Srinivas Mushnoori, Barry Higgins, Chandrasekhar Kollipara, Adam Fermier, Douglas Hausner, Shantenu Jha, Ravendra Singh, Marianthi Ierapetritou and Rohit Ramachandran
Processes 2018, 6(5), 53; https://doi.org/10.3390/pr6050053 - 10 May 2018
Cited by 19 | Viewed by 10944
Abstract
As the pharmaceutical industry seeks more efficient methods for the production of higher value therapeutics, the associated data analysis, data visualization, and predictive modeling require dependable data origination, management, transfer, and integration. As a result, the management and integration of data in a consistent, organized, and reliable manner is a significant challenge for the pharmaceutical industry. In this work, an ontological information infrastructure is developed to integrate data within manufacturing plants and analytical laboratories. The ANSI/ISA-88.01 batch control standard has been adapted in this study to deliver a well-defined data structure that will improve the data communication inside the system architecture for continuous processing. All the detailed information of the lab-based experiments and process manufacturing, including equipment, samples and parameters, is documented in the recipe. This recipe model is implemented in a process control system (PCS), a data historian, and an Electronic Laboratory Notebook (ELN) system. Data existing in the recipe can eventually be exported from this system to cloud storage, which provides a reliable and consistent data source for data visualization, data analysis, or process modeling.
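The recipe idea, equipment, samples, and parameters documented together in one exportable record, can be sketched with a small dataclass. The field names and values below are hypothetical illustrations, not the ANSI/ISA-88.01 schema or the authors' actual data model:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Recipe:
    """Minimal recipe-style record: equipment, samples and
    parameters documented together, ready for export."""
    recipe_id: str
    equipment: list = field(default_factory=list)
    samples: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)

    def to_export(self):
        # e.g., serialize to JSON for cloud storage or a data historian
        return asdict(self)

run = Recipe(
    recipe_id="TAB-2018-001",
    equipment=["feeder", "blender", "tablet press"],
    samples=["blend-sample-01"],
    parameters={"blender_rpm": 250, "press_force_kN": 12.5},
)
record = run.to_export()
print(sorted(record))  # ['equipment', 'parameters', 'recipe_id', 'samples']
```

Keeping one structured record per run is what makes the downstream export to visualization and modeling tools consistent: every consumer sees the same keys.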

10 pages, 2179 KiB  
Article
Cuboid Packed-Beds as Chemical Reactors?
by Raja Ghosh
Processes 2018, 6(5), 44; https://doi.org/10.3390/pr6050044 - 01 May 2018
Cited by 6 | Viewed by 6886
Abstract
Columns are widely used as packed-bed or fixed-bed reactors in the chemical process industry. Packed columns are also used for carrying out chemical separation techniques such as adsorption, distillation, extraction and chromatography. A combination of the variability in flow path lengths and the variability of velocity along these flow paths results in significant broadening of the solute residence time distribution within columns, particularly in those having low bed height to diameter ratios. Therefore, wide packed-column reactors operate at low efficiencies. Also, for a column of a particular bed height, the ratio of heat transfer surface area to reactor volume varies inversely with the radius. Therefore, with wide columns, the available heat transfer area could become a limiting factor. In recent papers, box-shaped or cuboid packed-bed devices have been proposed as efficient alternatives to packed columns for carrying out chromatographic separations. In this paper, the use of cuboid packed-beds as reactors for carrying out chemical and biochemical reactions is proposed. This proposition is primarily supported in terms of advantages resulting from superior system hydraulics and narrower residence time distributions. Other potential advantages, such as better heat transfer attributes, are speculated based on geometric considerations.
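The inverse scaling of heat-transfer area with radius follows directly from cylinder geometry: lateral area 2πrL divided by bed volume πr²L is 2/r. A quick numerical check (the dimensions are arbitrary examples):

```python
import math

def area_to_volume_ratio(radius, bed_height):
    """Lateral heat-transfer area over packed-bed volume for a column."""
    lateral_area = 2.0 * math.pi * radius * bed_height
    volume = math.pi * radius ** 2 * bed_height
    return lateral_area / volume  # simplifies to 2 / radius

# Doubling the column radius halves the heat-transfer area
# available per unit reactor volume.
r1 = area_to_volume_ratio(radius=0.05, bed_height=0.2)
r2 = area_to_volume_ratio(radius=0.10, bed_height=0.2)
print(round(r1, 9), round(r2, 9))  # 40.0 20.0
```

Note that the bed height cancels out entirely, which is why the abstract frames the limitation purely in terms of column radius.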

25 pages, 2041 KiB  
Article
The Impact of Global Sensitivities and Design Measures in Model-Based Optimal Experimental Design
by René Schenkendorf, Xiangzhong Xie, Moritz Rehbein, Stephan Scholl and Ulrike Krewer
Processes 2018, 6(4), 27; https://doi.org/10.3390/pr6040027 - 21 Mar 2018
Cited by 30 | Viewed by 7005
Abstract
In the field of chemical engineering, mathematical models have proven to be an indispensable tool for process analysis, process design, and condition monitoring. To gain the most benefit from model-based approaches, the implemented mathematical models have to be based on sound principles, and they need to be calibrated to the process under study with suitable model parameter estimates. Often, however, the model parameters identified from experimental data carry severe uncertainties, leading to incorrect or biased inferences. This applies in particular in the field of pharmaceutical manufacturing, where the measurement data are usually limited in quantity and quality when analyzing novel active pharmaceutical ingredients. Optimally designed experiments, in turn, aim to increase the quality of the gathered data in the most efficient way. Any improvement in data quality results in more precise parameter estimates and more reliable model candidates. The applied methods for parameter sensitivity analyses and design criteria are crucial for the effectiveness of the optimal experimental design. In this work, different design measures based on global parameter sensitivities are critically compared with state-of-the-art concepts that follow simplifying linearization principles. The efficient implementation of the proposed sensitivity measures is explicitly addressed to be applicable to complex chemical engineering problems of practical relevance. As a case study, the homogeneous synthesis of 3,4-dihydro-1H-1-benzazepine-2,5-dione, a scaffold for the preparation of various protein kinase inhibitors, is analyzed, followed by a more complex model of biochemical reactions. In both studies, the model-based optimal experimental design benefits from global parameter sensitivities combined with proper design measures.
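A common global (variance-based) sensitivity measure is the first-order index Si = Var(E[Y|Xi]) / Var(Y). As a rough, brute-force sketch on a toy two-parameter linear model (this is an illustration of the variance-based idea in general, not the specific design measures compared in the paper):

```python
def first_order_indices(model, n_grid=101):
    """Brute-force variance-based first-order sensitivity indices
    for a two-input model, with inputs gridded uniformly on [0, 1]."""
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    ys = [model(x1, x2) for x1 in grid for x2 in grid]
    mean = sum(ys) / len(ys)
    var_y = sum((y - mean) ** 2 for y in ys) / len(ys)

    indices = []
    for which in (0, 1):
        cond_means = []
        for v in grid:  # E[Y | X_which = v], averaging over the other input
            if which == 0:
                cm = sum(model(v, x2) for x2 in grid) / n_grid
            else:
                cm = sum(model(x1, v) for x1 in grid) / n_grid
            cond_means.append(cm)
        var_cond = sum((c - mean) ** 2 for c in cond_means) / n_grid
        indices.append(var_cond / var_y)
    return indices

# Toy model: Y = 3*X1 + X2, so X1 dominates (S1 = 9/10 here).
s1, s2 = first_order_indices(lambda x1, x2: 3.0 * x1 + x2)
print(round(s1, 3), round(s2, 3))  # 0.9 0.1
```

Unlike local, linearized sensitivities evaluated at a single nominal point, these indices average over the whole input range, which is what makes them "global" and more robust for nonlinear models.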

24 pages, 1394 KiB  
Article
Fuel Gas Network Synthesis Using Block Superstructure
by Jianping Li, Salih Emre Demirel and M. M. Faruque Hasan
Processes 2018, 6(3), 23; https://doi.org/10.3390/pr6030023 - 01 Mar 2018
Cited by 17 | Viewed by 5871
Abstract
Fuel gas network (FGN) synthesis is a systematic method for reducing fresh fuel consumption in a chemical plant. In this work, we address FGN synthesis problems using a block superstructure representation that was originally proposed for process design and intensification. The blocks interact with each other through direct flows that connect a block with its adjacent blocks and through jump flows that connect a block with all nonadjacent blocks. The blocks with external feed streams are viewed as fuel sources, and the blocks with product streams are regarded as fuel sinks. An additional layer of blocks is added as pools when intermediate operations exist among source and sink blocks. These blocks can be arranged in an I × J two-dimensional grid with I = 1 for problems without pools, or I = 2 for problems with pools. J is determined by the maximum number of pools/sinks. With this representation, we formulate the FGN synthesis problem as a mixed-integer nonlinear programming (MINLP) formulation to optimally design a fuel gas network with minimal total annual cost. We revisit a literature case study on LNG plants to demonstrate the capability of the proposed approach.
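The direct-versus-jump connectivity described above can be made concrete with a few lines of code. This is an illustrative sketch of the connection pattern for a single row of blocks (I = 1), not the authors' superstructure code:

```python
def block_connections(n_blocks):
    """Direct flows link adjacent blocks in a row; jump flows link
    each block to every nonadjacent block (single row, I = 1)."""
    direct = [(j, j + 1) for j in range(n_blocks - 1)]
    jump = [(a, b) for a in range(n_blocks)
            for b in range(a + 2, n_blocks)]
    return direct, jump

direct, jump = block_connections(4)
print(direct)  # [(0, 1), (1, 2), (2, 3)]
print(jump)    # [(0, 2), (0, 3), (1, 3)]
```

Together the two flow types make every block reachable from every other, which is what lets a single compact grid encode all candidate source-to-sink (or source-pool-sink) routings for the optimizer to choose among.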

12 pages, 2753 KiB  
Article
Multivariable Real-Time Control of Viscosity Curve for a Continuous Production Process of a Non-Newtonian Fluid
by Roberto Mei, Massimiliano Grosso, Francesc Corominas, Roberto Baratti and Stefania Tronci
Processes 2018, 6(2), 12; https://doi.org/10.3390/pr6020012 - 30 Jan 2018
Cited by 4 | Viewed by 5259
Abstract
The application of a multivariable predictive controller to the mixing process for the production of a non-Newtonian fluid is discussed in this work. A data-driven model has been developed to describe the dynamic behaviour of the rheological properties of the fluid as a function of the operating conditions, using experimental data collected in a pilot plant. The developed model provides a realistic process representation and is used to test and verify the multivariable controller, which has been designed to maintain the viscosity curves of the non-Newtonian fluid within a given region of the viscosity-vs-shear-rate plane in the presence of process disturbances occurring in the mixing process.

35 pages, 6329 KiB  
Article
Computational Package for Copolymerization Reactivity Ratio Estimation: Improved Access to the Error-in-Variables-Model
by Alison J. Scott and Alexander Penlidis
Processes 2018, 6(1), 8; https://doi.org/10.3390/pr6010008 - 19 Jan 2018
Cited by 19 | Viewed by 6729
Abstract
The error-in-variables-model (EVM) is the most statistically correct non-linear parameter estimation technique for reactivity ratio estimation. However, many polymer researchers are unaware of the advantages of EVM and therefore still choose rather erroneous or approximate methods. The procedure is straightforward, but it is often avoided because it is seen as mathematically and computationally intensive. The goal of this work is therefore to make EVM more accessible to all researchers through a series of focused case studies. All analyses employ a MATLAB-based computational package for copolymerization reactivity ratio estimation, which builds on previous work in our group over many years. This version is an improvement, as it ensures wider compatibility and enhanced flexibility with respect to the copolymerization parameter estimation scenarios that can be considered.
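A full EVM treatment is beyond a short sketch, but the model being fitted in copolymerization reactivity ratio estimation is the Mayo-Lewis equation. The snippet below recovers r₁ and r₂ from synthetic composition data by a naive grid-search least squares; this is exactly the kind of "approximate" fit the abstract cautions against, since a proper EVM analysis would also weight the measurement error in the feed composition f₁, not only in the copolymer composition F₁. All data and grid values are illustrative.

```python
def mayo_lewis(f1, r1, r2):
    # Instantaneous copolymer composition F1 as a function of monomer
    # feed fraction f1 and reactivity ratios r1, r2 (Mayo-Lewis equation)
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2
    return num / den

def fit_ratios(data, grid):
    # Crude grid-search least squares on F1 residuals only; EVM would
    # instead account for errors in both measured variables.
    best = None
    for r1 in grid:
        for r2 in grid:
            sse = sum((F1 - mayo_lewis(f1, r1, r2))**2 for f1, F1 in data)
            if best is None or sse < best[0]:
                best = (sse, r1, r2)
    return best[1], best[2]

# Synthetic, noise-free data generated with r1 = 0.5, r2 = 1.5
true = [(k / 10, mayo_lewis(k / 10, 0.5, 1.5)) for k in range(1, 10)]
grid = [k / 10 for k in range(1, 31)]
print(fit_ratios(true, grid))  # recovers (0.5, 1.5)
```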

12 pages, 2510 KiB  
Article
On the Thermal Self-Initiation Reaction of n-Butyl Acrylate in Free-Radical Polymerization
by Hossein Riazi, Ahmad Arabi Shamsabadi, Patrick Corcoran, Michael C. Grady, Andrew M. Rappe and Masoud Soroush
Processes 2018, 6(1), 3; https://doi.org/10.3390/pr6010003 - 04 Jan 2018
Cited by 22 | Viewed by 9257
Abstract
This experimental and theoretical study deals with the thermal spontaneous polymerization of n-butyl acrylate (n-BA). The polymerization was carried out in solution (n-heptane as the solvent) at 200 and 220 °C without adding any conventional initiators, and was studied with five different n-BA/n-heptane volume ratios: 50/50, 70/30, 80/20, 90/10, and 100/0. Extensive experimental data presented here show significant monomer conversion at all temperatures and concentrations, confirming the occurrence of the thermal self-initiation of the monomer. The order, frequency factor, and activation energy of the thermal self-initiation reaction of n-BA were estimated from n-BA conversion using a macroscopic mechanistic model. The estimated reaction order agrees well with the order obtained via our quantum chemical calculations. Furthermore, the frequency factor and activation energy estimates agree well with the corresponding values that we previously reported for bulk polymerization of n-BA.
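The frequency factor and activation energy estimated in this study enter the rate coefficient through the Arrhenius expression k = A·exp(−Ea/RT). The sketch below evaluates this expression at the two experimental temperatures (200 and 220 °C); the A and Ea values are placeholders for illustration only, not the estimates reported in the article.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    # Arrhenius rate coefficient: k = A * exp(-Ea / (R * T))
    return A * math.exp(-Ea / (R * T))

# Illustrative parameter values only (not the article's estimates):
A, Ea = 1.0e5, 1.0e5  # 1/s and J/mol
k473 = arrhenius(A, Ea, 473.15)  # 200 degC
k493 = arrhenius(A, Ea, 493.15)  # 220 degC
print(k493 / k473 > 1)  # the self-initiation rate rises with temperature
```

The steep temperature dependence this produces is why self-initiation, negligible at ordinary polymerization temperatures, becomes significant at 200 to 220 °C.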

3897 KiB  
Article
Minimizing the Effect of Substantial Perturbations in Military Water Systems for Increased Resilience and Efficiency
by Corey M. James, Michael E. Webber and Thomas F. Edgar
Processes 2017, 5(4), 60; https://doi.org/10.3390/pr5040060 - 18 Oct 2017
Cited by 1 | Viewed by 4539
Abstract
A model predictive control (MPC) framework, exploiting both feedforward and feedback control loops, is employed to minimize large disturbances that occur in military water networks. Military installations' need for resilient and efficient water supplies is often challenged by large disturbances like fires, terrorist activity, troop training rotations, and large-scale leaks. This work demonstrates the effectiveness of MPC in providing predictive capability and compensating for vast geographical differences and varying phenomena time scales, using computational software and actual system dimensions and parameters. The results show that large disturbances are rapidly minimized while chlorine concentration is maintained within legal limits at the point of demand and overall water usage is minimized. The control framework also ensures pumping is minimized during peak electricity hours, so costs are kept lower than with simple proportional control. The control structure implemented in this work is able to support resiliency and increased efficiency on military bases by minimizing tank holdup, effectively countering large disturbances, and efficiently managing pumping.
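The receding-horizon principle behind MPC, including the feedforward use of a disturbance forecast, can be sketched on a toy tank balance. The code below enumerates pump-rate sequences over a short horizon, scores them against a level setpoint plus a small pumping cost, and applies only the first move; the tank model, candidate rates, and numbers are all hypothetical, far simpler than the networked water-system model in the article.

```python
from itertools import product

def mpc_move(level, forecast, setpoint, candidates,
             horizon=3, pump_weight=0.01):
    # Brute-force MPC: enumerate pump-rate sequences over the horizon,
    # score each against the setpoint plus a pumping cost, and return
    # only the first move (receding-horizon principle).
    best_cost, best_u0 = None, None
    for seq in product(candidates, repeat=horizon):
        x, cost = level, 0.0
        for u, d in zip(seq, forecast):
            x = x + u - d  # toy tank mass balance: inflow u, demand d
            cost += (x - setpoint)**2 + pump_weight * u
        if best_cost is None or cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Tank below setpoint with a demand spike forecast for the next step:
# the forecast acts as feedforward, so the controller pumps hard now.
u0 = mpc_move(level=4.0, forecast=[3.0, 1.0, 1.0], setpoint=5.0,
              candidates=[0.0, 1.0, 2.0, 3.0, 4.0])
print(u0)  # 4.0
```

A real implementation would replace the enumeration with a solver and add constraints such as chlorine limits and peak-hour electricity pricing, but the structure (forecast in, first move out, repeat every step) is the same.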

1598 KiB  
Article
A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information
by Emmanouil Papadakis, Amata Anantpinijwatna, John M. Woodley and Rafiqul Gani
Processes 2017, 5(4), 58; https://doi.org/10.3390/pr5040058 - 12 Oct 2017
Cited by 12 | Viewed by 11275
Abstract
This article describes the development of a reaction database with the objective of collecting data for multiphase reactions involved in small molecule pharmaceutical processes, together with a search engine to retrieve the data needed to investigate reaction-separation schemes, such as the role of organic solvents in improving reaction performance. The focus of this reaction database is to provide a data-rich environment with process information available to assist during the early-stage synthesis of pharmaceutical products. The database is structured in terms of reaction type classification; compounds participating in the reaction; use of organic solvents and their function; information for single-step and multistep reactions; target products; reaction conditions; and reaction data. Information on reactor scale-up, together with separation information and other relevant information and references for each reaction, is also available in the database. Additionally, the information retrieved from the database can be evaluated in terms of sustainability using well-known "green" metrics published in the scientific literature. The application of the database is illustrated through the synthesis of ibuprofen, for which data on different reaction pathways have been retrieved from the database and compared using "green" chemistry metrics.

Review

17 pages, 24834 KiB  
Review
Rotor-Stator Mixers: From Batch to Continuous Mode of Operation—A Review
by Andreas Håkansson
Processes 2018, 6(4), 32; https://doi.org/10.3390/pr6040032 - 03 Apr 2018
Cited by 29 | Viewed by 13877
Abstract
Although continuous production processes are often desired, many processing industries still work in batch mode due to technical limitations. Transitioning to continuous production requires an in-depth understanding of how each unit operation is affected by the shift. This contribution reviews the scientific understanding of similarities and differences between emulsification in turbulent rotor-stator mixers (also known as high-speed mixers) operated in batch and continuous mode. Rotor-stator mixers are found in many chemical processing industries and are considered the standard tool for mixing and emulsification of high-viscosity products. Since the same rotor-stator heads are often used in both modes of operation, it is sometimes assumed that transitioning from batch to continuous rotor-stator mixers is straightforward. However, this is not always the case, as comparative experimental studies have shown. This review summarizes and critically compares the current understanding of differences between these two operating modes, focusing on shaft power draw, pumping power, efficiency in producing a narrow region of high-intensity turbulence, and implications for product quality differences when transitioning from batch to continuous rotor-stator mixers.