Open Access Article
RadViz Deluxe: An Attribute-Aware Display for Multivariate Data
Processes 2017, 5(4), 75; doi:10.3390/pr5040075
Abstract
Modern data, such as those occurring in chemical engineering, typically entail large collections of samples with numerous dimensional components (or attributes). Visualizing the samples in relation to these components can bring valuable insight. For example, one may be able to see how a certain chemical property is expressed in the samples taken. This could reveal whether there are clusters and outliers that have specific distinguishing properties. Current multivariate visualization methods lack the ability to reveal these types of information at a sufficient degree of fidelity since they are not optimized to simultaneously present the relations of the samples as well as the relations of the samples to their attributes. We propose a display that is designed to reveal these multiple relations. Our scheme is based on the concept of RadViz, but enhances the layout with three stages of iterative refinement. These refinements reduce the layout error in terms of three essential relationships: sample to sample, attribute to attribute, and sample to attribute. We demonstrate the effectiveness of our method via various real-world examples in the domain of chemical process engineering. In addition, we formally derive the equivalence of RadViz to a popular multivariate interpolation method called generalized barycentric coordinates.
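The classic RadViz mapping this abstract builds on has a simple closed form: each attribute is assigned an anchor evenly spaced on the unit circle, and a sample is placed at the weighted average of the anchors, with weights given by its normalized attribute values (this is exactly the generalized barycentric coordinate view mentioned above). A minimal sketch of that baseline, not the paper's refined layout; the function name and min-max normalization are my own choices:

```python
import numpy as np

def radviz_layout(X):
    """Classic RadViz: map rows of X (n samples x m attributes) to 2-D
    points inside the unit circle.  Each attribute gets an anchor on the
    circle; a sample sits at the weighted average of the anchors,
    weighted by its min-max normalized attribute values."""
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    # Min-max normalize each attribute to [0, 1].
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    W = (X - lo) / span
    # Evenly spaced attribute anchors on the unit circle.
    theta = 2.0 * np.pi * np.arange(m) / m
    anchors = np.column_stack([np.cos(theta), np.sin(theta)])
    # Generalized-barycentric weighted average (rows must not be all zero).
    return (W / W.sum(axis=1, keepdims=True)) @ anchors
```

A sample with all its weight on one attribute lands exactly on that attribute's anchor; a sample with equal weights lands at the anchors' centroid, which is why plain RadViz crowds points near the center and benefits from the iterative refinements the paper proposes.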

Open Access Article
An Integrated Mathematical Model of Microbial Fuel Cell Processes: Bioelectrochemical and Microbiologic Aspects
Processes 2017, 5(4), 73; doi:10.3390/pr5040073
Abstract
Microbial Fuel Cells (MFCs) represent a still relatively new technology for liquid organic waste treatment and simultaneous recovery of energy and resources. Although the technology is quite appealing due to its potential benefits, its practical application is still hampered by several drawbacks, such as system instability (especially when attempting to scale up reactors from laboratory prototypes), internally competing microbial reactions, and limited power generation. This paper is an attempt to address some of the issues related to MFC application in wastewater treatment with a simulation model. Reactor configuration, operational schemes, electrochemical and microbiological characterization, optimization methods and modelling strategies were reviewed and have been included in a mathematical simulation model written with a multidisciplinary, multi-perspective approach, considering the possibility of feeding real substrates to an MFC system while dealing with a complex microbiological population. The conclusions drawn herein can be of practical interest for all MFC researchers dealing with domestic or industrial wastewater treatment.

Open Access Article
Stochasticity in the Parasite-Driven Trait Evolution of Competing Species Masks the Distinctive Consequences of Distance Metrics
Processes 2017, 5(4), 74; doi:10.3390/pr5040074
Abstract
Various distance metrics and their induced norms are employed in the quantitative modeling of evolutionary dynamics. Minimization of these distance metrics, when applied to evolutionary optimization, is hypothesized to result in different outcomes. Here, we apply the different distance metrics to the evolutionary trait dynamics brought about by the interaction between two competing species infected by parasites (exploiters). We present deterministic cases showing the distinctive selection outcomes under the Manhattan, Euclidean, and Chebyshev norms. Specifically, we show how they differ in the time of convergence to the desired optima (e.g., no disease), and in the egalitarian sharing of carrying capacity between the competing species. However, when randomness is introduced to the population dynamics of parasites and to the trait dynamics of the competing species, the distinctive characteristics of the outcomes under the three norms become indistinguishable. Our results provide theoretical cases of when evolutionary dynamics using different distance metrics exhibit similar outcomes.
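For readers unfamiliar with the three norms compared in this abstract, the induced distances differ only in how coordinate-wise trait differences are aggregated: the Manhattan (L1) norm sums them, the Euclidean (L2) norm takes the root of the sum of squares, and the Chebyshev (L-infinity) norm keeps only the largest one. A small illustration; the helper name is hypothetical, not from the paper:

```python
import numpy as np

def trait_distance(u, v, norm):
    """Distance between two trait vectors under the three norms
    compared in the abstract."""
    d = np.abs(np.asarray(u, float) - np.asarray(v, float))
    if norm == "manhattan":    # L1: sum of coordinate differences
        return d.sum()
    if norm == "euclidean":    # L2: straight-line distance
        return float(np.sqrt((d ** 2).sum()))
    if norm == "chebyshev":    # L-infinity: largest single difference
        return d.max()
    raise ValueError(f"unknown norm: {norm}")
```

For trait vectors (0, 0) and (3, 4) the three norms give 7, 5, and 4 respectively, so a minimization step can favor different trait adjustments depending on which norm drives the dynamics.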

Open Access Article
Development of Molecularly Imprinted Polymers to Target Polyphenols Present in Plant Extracts
Processes 2017, 5(4), 72; doi:10.3390/pr5040072
Abstract
The development of molecularly imprinted polymers (MIPs) to target polyphenols present in vegetable extracts was addressed here. Polydatin was selected as a template polyphenol due to its relatively large size and amphiphilic character. Different MIPs were synthesized to explore preferential interactions between the functional monomers and the template molecule. The effect of solvent polarity on the molecular imprinting efficiency, namely owing to hydrophobic interactions, was also assessed. Precipitation and suspension polymerization were examined as possible ways to change MIP morphology and performance. Solid phase extraction and batch/continuous sorption processes were used to evaluate the polyphenols uptake/release in individual/competitive assays. Among the prepared MIPs, a material synthesized by suspension polymerization, with 4-vinylpyridine as the functional monomer and water/methanol as solvent, showed a superior performance. The underlying cause of this outcome is likely a surface imprinting process arising from the amphiphilic properties of polydatin. The uptake and subsequent selective release of polyphenols present in natural extracts were successfully demonstrated, considering a red wine solution as a case study. However, hydrophilic/hydrophobic interactions are inevitable (especially with complex natural extracts), and tuning the polarity of the solvents is an important issue for the isolation of the different polyphenols.

Open Access Article
Multistage Stochastic Programming Models for Pharmaceutical Clinical Trial Planning
Processes 2017, 5(4), 71; doi:10.3390/pr5040071
Abstract
Clinical trial planning of candidate drugs is an important task for pharmaceutical companies. In this paper, we propose two new multistage stochastic programming formulations (CM1 and CM2) to determine the optimal clinical trial plan under uncertainty. Decisions of a clinical trial plan include which clinical trials to start and their start times. The objective is to maximize the expected net present value of the entire clinical trial plan. The outcome of a clinical trial is uncertain, i.e., whether a potential drug successfully completes a clinical trial is not known until the clinical trial is completed. This uncertainty is modeled using an endogenous uncertain parameter in CM1 and CM2. The main difference between CM1 and CM2 is an additional binary variable, which tracks both start and end time points of clinical trials in CM2. We compare the sizes and solution times of CM1 and CM2 with each other and with a previously developed formulation (CM3) using different instances of the clinical trial planning problem. The results reveal that the solution times of CM1 and CM2 are similar to each other and are up to two orders of magnitude shorter compared to CM3 for all instances considered. In general, the root relaxation problems of CM1 and CM2 were solved more quickly, CM1 and CM2 yielded tighter initial gaps, and the solver required fewer branches for convergence to the optimum for CM1 and CM2.

Open Access Feature Paper Article
Organic Polymers as Porogenic Structure Matrices for Mesoporous Alumina and Magnesia
Processes 2017, 5(4), 70; doi:10.3390/pr5040070
Abstract
Mesoporous alumina and magnesia were prepared using various polymers, poly(ethylene glycol) (PEG), poly(vinyl alcohol) (PVA), poly(N-(2-hydroxypropyl) methacrylamide) (PHPMA), and poly(dimethylacrylamide) (PDMAAm), as porogenic structure matrices. Mesoporous alumina exhibits large Brunauer–Emmett–Teller (BET) surface areas of up to 365 m² g⁻¹, while mesoporous magnesium oxide possesses BET surface areas around 111 m² g⁻¹. Variation of the polymers has little impact on the structural properties of the products. The calcination of the polymer/metal oxide composite materials benefits from the fact that the polymer decomposition is catalyzed by the freshly formed metal oxide.

Open Access Feature Paper Article
A General State-Space Formulation for Online Scheduling
Processes 2017, 5(4), 69; doi:10.3390/pr5040069
Abstract
We present a generalized state-space model formulation particularly motivated by an online scheduling perspective, which allows modeling (1) task delays and unit breakdowns; (2) fractional delays and unit downtimes, when using a discrete-time grid; (3) variable batch sizes; (4) robust scheduling through the use of conservative yield estimates and processing times; (5) feedback on task-yield estimates before the task finishes; (6) task termination during its execution; (7) post-production storage of material in a unit; and (8) unit capacity degradation and maintenance. Through these proposed generalizations, we enable a natural way to handle routinely encountered disturbances and a rich set of corresponding counter-decisions, greatly simplifying and extending the possible application of mathematical programming based online scheduling solutions to diverse settings. Finally, we demonstrate the effectiveness of this model on a case study from the field of bio-manufacturing.

Open Access Article
Selected Phenomena of the In-Mold Nodularization Process of Cast Iron That Influence the Quality of Cast Machine Parts
Processes 2017, 5(4), 68; doi:10.3390/pr5040068
Abstract
This paper discusses a problem connected with the production process of ductile iron castings made using the in-mold method. The study results presented show that this method can compromise the quality of the cast machine parts and of the equipment itself. The specifics of the nodularization process using the in-mold method do not provide the proper conditions for removal of chemical reaction products to the slag; i.e., the products stay in the mold cavity and also decrease the quality of the casting. In this work, corrosion-type defects were diagnosed mostly on the surface of the casting, along with some compounds in the near-surface layer, i.e., fayalite (Fe2SiO4) and forsterite (Mg2SiO4), which cause discontinuities in the metal matrix. The results presented here were selected based on experimental melts of ductile iron. The elements of the mold used in this study, the shape of the mixing chamber, charge materials, method of melting, temperature of liquid metal, etc. were directly related to the production conditions. An analysis of the chemical composition was conducted using a Leco GDS500A spectrometer and a Leco CS125 carbon and sulfur analyzer. Metallographic examinations were conducted using a Phenom-ProX scanning electron microscope with an EDS system.

Open Access Article
Stop Smoking—Tube-In-Tube Helical System for Flameless Calcination of Minerals
Processes 2017, 5(4), 67; doi:10.3390/pr5040067
Abstract
Mineral calcination worldwide accounts for some 5–10% of all anthropogenic carbon dioxide (CO2) emissions per year. Roughly half of the CO2 released results from burning fossil fuels for heat generation, while the other half is a product of the calcination reaction itself. Traditionally, the fuel combustion process and the calcination reaction take place together to enhance heat transfer. Systems have been proposed that separate fuel combustion and calcination to allow for the sequestration of pure CO2 from the calcination reaction for later storage/use and capture of the combustion gases. This work presents a new tube-in-tube helical system for the calcination of minerals that can use different heat transfer fluids (HTFs), employed or foreseen in concentrated solar power (CSP) plants. The system is labeled ‘flameless’ since the HTF can be heated by means other than burning fossil fuels. If CSP or high-temperature nuclear reactors are used, direct CO2 emissions can be halved. The technical feasibility of the system has been assessed here with a brief parametric study. The results suggest that the introduced system is technically feasible given the parameters (total heat transfer coefficients, mass and volume flows, outer tube friction factors, and Nusselt numbers) that are examined. Further experimental work will be required to better understand the performance of the tube-in-tube helical system for the flameless calcination of minerals.
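The parametric study above evaluates total heat transfer coefficients and Nusselt numbers for the tube flows. As a rough illustration of how such quantities are typically estimated for fully developed turbulent pipe flow, the classic Dittus–Boelter correlation can be used; this is shown only as a generic example, the abstract does not state which correlation the authors employ:

```python
def dittus_boelter_nu(re, pr, heating=True):
    """Dittus-Boelter correlation for fully developed turbulent pipe
    flow: Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 when the fluid is being
    heated and 0.3 when cooled.  Valid roughly for Re > 1e4 and
    0.7 < Pr < 160."""
    n = 0.4 if heating else 0.3
    return 0.023 * re ** 0.8 * pr ** n

def heat_transfer_coefficient(nu, k, d_h):
    """Convective coefficient h = Nu * k / d_h, with fluid thermal
    conductivity k (W/m/K) and hydraulic diameter d_h (m)."""
    return nu * k / d_h
```

Correlations of this form make the abstract's parameter coupling visible: the heat transfer coefficient scales with Re^0.8, so higher mass flows improve heat transfer but at the cost of higher friction and pressure drop in the helical tubes.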

Open Access Article
Using Simulation for Scheduling and Rescheduling of Batch Processes
Processes 2017, 5(4), 66; doi:10.3390/pr5040066
Abstract
The problem of scheduling multiproduct and multipurpose batch processes has been studied for more than 30 years using mathematical programming and heuristics. In most formulations, the manufacturing recipes are represented by simplified models using state task network (STN) or resource task network (RTN) representations, transfers of materials are assumed to be instantaneous, constraints due to shared utilities are often ignored, and scheduling horizons are kept small due to the limits on the problem size that can be handled by the solvers. These limitations often result in schedules that are not actionable. A simulation model, on the other hand, can represent a manufacturing recipe to the smallest level of detail. In addition, a simulator can provide a variety of built-in capabilities that model the assignment decisions, coordination logic and plant operation rules. Simulation-based schedules are more realistic, verifiable, easy to adapt to changing plant conditions, and can be generated in a short period of time. An easy-to-use simulator-based framework can be developed to support scheduling decisions made by operations personnel. In this paper, first the complexities of batch recipes and operations are discussed, followed by examples of using the BATCHES simulator for off-line scheduling studies and for day-to-day scheduling.

Open Access Feature Paper Article
Multi-Objective Optimization of Experiments Using Curvature and Fisher Information Matrix
Processes 2017, 5(4), 63; doi:10.3390/pr5040063
Abstract
The bottleneck in creating dynamic models of biological networks and processes often lies in estimating unknown kinetic model parameters from experimental data. In this regard, experimental conditions have a strong influence on parameter identifiability and should therefore be optimized to give the maximum information for parameter estimation. Existing model-based design of experiment (MBDOE) methods commonly rely on the Fisher information matrix (FIM) for defining a metric of data informativeness. When the model behavior is highly nonlinear, FIM-based criteria may lead to suboptimal designs, as the FIM only accounts for the linear variation in the model outputs with respect to the parameters. In this work, we developed a multi-objective optimization (MOO) MBDOE, for which the model nonlinearity was taken into consideration through the use of curvature. The proposed MOO MBDOE involved maximizing data informativeness using an FIM-based metric and at the same time minimizing the model curvature. We demonstrated the advantages of the MOO MBDOE over existing FIM-based and other curvature-based MBDOEs in an application to the kinetic modeling of fed-batch fermentation of baker’s yeast.
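The FIM metric underlying such design criteria can be built from output sensitivities: for independent Gaussian measurement noise of standard deviation σ, FIM = SᵀS/σ², where S holds the derivatives of the model outputs with respect to the parameters. A generic finite-difference sketch under these assumptions (function names are mine; the paper's models and criteria are more elaborate):

```python
import numpy as np

def fisher_information(model, theta, t, sigma=1.0, eps=1e-6):
    """FIM for a scalar-output model y = model(theta, t) sampled at the
    time points t, assuming i.i.d. Gaussian noise with standard
    deviation sigma.  Sensitivities dy/dtheta come from central finite
    differences; FIM = S^T S / sigma^2."""
    theta = np.asarray(theta, dtype=float)
    S = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        h = np.zeros_like(theta)
        h[j] = eps * max(1.0, abs(theta[j]))
        S[:, j] = (model(theta + h, t) - model(theta - h, t)) / (2.0 * h[j])
    return S.T @ S / sigma ** 2

def d_criterion(fim):
    """log det(FIM): the D-optimality score a design would maximize."""
    sign, logdet = np.linalg.slogdet(fim)
    return logdet if sign > 0 else -np.inf
```

For a simple decay model y = a·exp(-b·t), sampling times cluster where the sensitivities to a and b are both large; a MOO MBDOE of the kind described above would trade this D-score off against a curvature penalty instead of maximizing it alone.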

Open Access Article
How to Generate Economic and Sustainability Reports from Big Data? Qualifications of Process Industry
Processes 2017, 5(4), 64; doi:10.3390/pr5040064
Abstract
Big Data may introduce new opportunities, and for this reason it has become a mantra among most industries. This paper focuses on examining how to develop cost and sustainability reporting by utilizing Big Data that covers economic values, production volumes, and emission information. We strongly believe that this use supports cleaner production while at the same time offering more information for revenue and profitability development. We argue that Big Data brings company-wide business benefits if data queries and interfaces are built to be interactive, intuitive, and user-friendly. The amount of information related to operations, costs, emissions, and the supply chain would increase enormously if Big Data were used in various manufacturing industries. It is essential to expose the relevant correlations between different attributes and data fields. Proper algorithm design and programming are key to making the most of Big Data. This paper introduces ideas on how to refine raw data into valuable information, which can serve many types of end users, decision makers, and even external auditors. Concrete examples are given through an industrial paper mill case, which covers environmental aspects, cost-efficiency management, and process design.

Open Access Feature Paper Article
Dispersal-Based Microbial Community Assembly Decreases Biogeochemical Function
Processes 2017, 5(4), 65; doi:10.3390/pr5040065
Abstract
Ecological mechanisms influence relationships among microbial communities, which in turn impact biogeochemistry. In particular, microbial communities are assembled by deterministic (e.g., selection) and stochastic (e.g., dispersal) processes, and the relative balance of these two process types is hypothesized to alter the influence of microbial communities over biogeochemical function. We used an ecological simulation model to evaluate this hypothesis, defining biogeochemical function generically to represent any biogeochemical reaction of interest. We assembled receiving communities under different levels of dispersal from a source community that was assembled purely by selection. The dispersal scenarios ranged from no dispersal (i.e., selection-only) to dispersal rates high enough to overwhelm selection (i.e., homogenizing dispersal). We used an aggregate measure of community fitness to infer a given community’s biogeochemical function relative to other communities. We also used ecological null models to further link the relative influence of deterministic assembly to function. We found that increasing rates of dispersal decrease biogeochemical function by increasing the proportion of maladapted taxa in a local community. Niche breadth was also a key determinant of biogeochemical function, suggesting a tradeoff between the function of generalist and specialist species. Finally, we show that microbial assembly processes exert greater influence over biogeochemical function when there is variation in the relative contributions of dispersal and selection among communities. Taken together, our results highlight the influence of spatial processes on biogeochemical function and indicate the need to account for such effects in models that aim to predict biogeochemical function under future environmental scenarios.

Open Access Article
Optimization through Response Surface Methodology of a Reactor Producing Methanol by the Hydrogenation of Carbon Dioxide
Processes 2017, 5(4), 62; doi:10.3390/pr5040062
Abstract
Carbon dioxide conversion and utilization is gaining significant attention worldwide, not only because carbon dioxide has an impact on global climate change, but also because it provides a source for potential fuels and chemicals. Methanol is an important fuel that can be obtained by the hydrogenation of carbon dioxide. In this research, the modeling of a reactor to produce methanol using carbon dioxide and hydrogen is carried out by way of an ANOVA and a central composite design. Reaction temperature, reaction pressure, H2/CO2 ratio, and recycling are the chosen factors, while the methanol production and the reactor volume are the studied responses. Results show that the interaction AC is common between the two responses and allows the productivity to be improved while reducing the reactor volume. A mathematical model for methanol production and reactor volume is obtained from the significant factors. A central composite design is used to optimize the process. Results show that a higher productivity is obtained with the temperature, CO2/H2 ratio, and recycle factors at higher, lower, and higher levels, respectively. The methanol production is equal to 33,540 kg/h, while the reactor volume is 6 m³. Future research should investigate the economic analysis of the process in order to improve productivity with lower costs.
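A central composite design of the kind used here combines 2^k factorial corner runs at coded levels ±1, 2k axial runs at ±α, and replicated center runs, which together support fitting a full quadratic response surface. A minimal generator in coded units, a generic sketch rather than the authors' actual run matrix:

```python
import itertools
import numpy as np

def central_composite_design(k, alpha=None, n_center=1):
    """Coded points of a circumscribed central composite design for k
    factors: 2^k factorial corners at +/-1, 2k axial points at
    +/-alpha, and n_center center points.  alpha defaults to the
    rotatable value (2^k)**0.25."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25
    corners = np.array(list(itertools.product((-1.0, 1.0), repeat=k)))
    axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])
```

For the four factors studied here (temperature, pressure, H2/CO2 ratio, recycling), k = 4 gives 16 corner runs plus 8 axial runs plus the chosen number of center runs, each row mapped back to physical units before simulating or running the reactor.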

Open Access Feature Paper Review
Improving Bioenergy Crops through Dynamic Metabolic Modeling
Processes 2017, 5(4), 61; doi:10.3390/pr5040061
Abstract
Enormous advances in genetics and metabolic engineering have made it possible, in principle, to create new plants and crops with improved yield through targeted molecular alterations. However, while the potential is beyond doubt, the actual implementation of envisioned new strains is often difficult, due to the diverse and complex nature of plants. Indeed, the intrinsic complexity of plants makes intuitive predictions difficult and often unreliable. The hope for overcoming this challenge is that methods of data mining and computational systems biology may become powerful enough that they could serve as beneficial tools for guiding future experimentation. In the first part of this article, we review the complexities of plants, as well as some of the mathematical and computational methods that have been used in the recent past to deepen our understanding of crops and their potential yield improvements. In the second part, we present a specific case study that indicates how robust models may be employed for crop improvements. This case study focuses on the biosynthesis of lignin in switchgrass (Panicum virgatum). Switchgrass is considered one of the most promising candidates for the second generation of bioenergy production, which does not use edible plant parts. Lignin is important in this context, because it impedes the use of cellulose in such inedible plant materials. The dynamic model offers a platform for investigating the pathway behavior in transgenic lines. In particular, it allows predictions of lignin content and composition in numerous genetic perturbation scenarios.

Open Access Article
Minimizing the Effect of Substantial Perturbations in Military Water Systems for Increased Resilience and Efficiency
Processes 2017, 5(4), 60; doi:10.3390/pr5040060
Abstract
A model predictive control (MPC) framework, exploiting both feedforward and feedback control loops, is employed to minimize large disturbances that occur in military water networks. Military installations’ need for resilient and efficient water supplies is often challenged by large disturbances like fires, terrorist activity, troop training rotations, and large-scale leaks. This work applies MPC to provide predictive capability and to compensate for vast geographical differences and varying phenomena time scales, using computational software and actual system dimensions and parameters. The results show that large disturbances are rapidly minimized while chlorine concentration is maintained within legal limits at the point of demand and overall water usage is minimized. The control framework also ensures pumping is minimized during peak electricity hours, so costs are kept lower than with simple proportional control. The control structure implemented in this work is able to support resiliency and increased efficiency on military bases by minimizing tank holdup, effectively countering large disturbances, and efficiently managing pumping.

Open Access Feature Paper Article
Thermal and Rheological Properties of Crude Tall Oil for Use in Biodiesel Production
Processes 2017, 5(4), 59; doi:10.3390/pr5040059
Abstract
The primary objective of this work was to investigate the thermal and rheological properties of crude tall oil (CTO), a low-cost by-product of the Kraft pulping process, as a potential feedstock for biodiesel production. Adequate knowledge of CTO properties is a prerequisite for the optimal design of a cost-effective biodiesel process and related processing equipment. The study revealed the correlation between the physicochemical properties and the thermal and rheological behavior of CTO. It was established that the trans/esterification temperature for CTO was greater than the temperature at which the viscosity of CTO entered a steady state. This information is useful in the selection of appropriate agitation conditions for optimal biodiesel production from CTO. The point of interception of the storage modulus (G′) and loss modulus (G′′) determined the glass transition temperature (40 °C) of CTO, which strongly correlated with its melting point (35.3 °C). The flow pattern of CTO was modeled as a non-Newtonian fluid. Furthermore, due to the high content of fatty acids (FA) in CTO, it is recommended to first reduce the FA level by acid-catalyzed methanolysis prior to alkali treatment, or alternatively to apply a one-step heterogeneous or enzymatic trans/esterification of CTO for high-yield biodiesel production.

Open Access Feature Paper Article
A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information
Processes 2017, 5(4), 58; doi:10.3390/pr5040058
Abstract
This article describes the development of a reaction database with the objective of collecting data for multiphase reactions involved in small molecule pharmaceutical processes, with a search engine to retrieve the data necessary for investigations of reaction-separation schemes, such as the role of organic solvents in reaction performance improvement. The focus of this reaction database is to provide a data-rich environment with process information available to assist during the early-stage synthesis of pharmaceutical products. The database is structured in terms of classification of reaction types; compounds participating in the reaction; use of organic solvents and their function; information for single-step and multistep reactions; target products; reaction conditions and reaction data. Information for reactor scale-up, together with information for the separation and other relevant information for each reaction and reference, is also available in the database. Additionally, the information retrieved from the database can be evaluated in terms of sustainability using well-known “green” metrics published in the scientific literature. The application of the database is illustrated through the synthesis of ibuprofen, for which data on different reaction pathways have been retrieved from the database and compared using “green” chemistry metrics.

Open Access Feature Paper Article
Energy Optimization of Gas–Liquid Dispersion in Micronozzles Assisted by Design of Experiment
Processes 2017, 5(4), 57; doi:10.3390/pr5040057
Abstract
In recent years, gas–liquid flow in microchannels has drawn much attention in the research fields of analytics and applications such as oxidations or hydrogenations. Since surface forces become increasingly important on the small scale, bubble coalescence is detrimental: it leads to Taylor bubble flow in microchannels, which has a low surface-to-volume ratio. To overcome this limitation, we have investigated the gas–liquid flow through micronozzles and, specifically, the bubble breakup behind the nozzle. Two different regimes of bubble breakup are identified: laminar and turbulent. Turbulent bubble breakup is characterized by small daughter bubbles and a narrow daughter bubble size distribution; thus, a high interfacial area is generated for increased mass and heat transfer. However, the turbulent breakup mechanism is observed at high flow rates and increased pressure drops; hence, a large energy input into the system is required. In this work, a Design of Experiments-assisted evaluation of turbulent bubbly flow redispersion is carried out to investigate the effect and significance of the nozzle’s geometrical parameters on bubble breakup and pressure drop. Here, the hydraulic diameter and length of the nozzle show the largest impact. Finally, factor optimization leads to an optimized nozzle geometry for bubble redispersion via a micronozzle with respect to energy efficiency, attaining a high interfacial area and surface-to-volume ratio with rather low energy input.
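The factor screening described above can be illustrated with a toy two-level full-factorial design; the coded design matrix and pressure-drop responses below are hypothetical, chosen only to show how main effects of factors such as hydraulic diameter and nozzle length would be estimated.

```python
import numpy as np

# Coded 2^2 full-factorial design: factor A = hydraulic diameter,
# factor B = nozzle length, each at levels -1/+1
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
y = np.array([8.0, 3.0, 10.0, 4.0])  # hypothetical pressure drops (bar)

def main_effects(X, y):
    """Main effect of each factor: mean(y at +1) minus mean(y at -1)."""
    return np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                     for j in range(X.shape[1])])

effects = main_effects(design, y)
# A large-magnitude effect marks a factor worth optimizing
```

With these invented responses, factor A has the dominant (negative) effect; in the actual study, the significant effects and the optimized geometry are determined from the measured data.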

Open Access Feature Paper Article
Numerical Aspects of Data Reconciliation in Industrial Applications
Processes 2017, 5(4), 56; doi:10.3390/pr5040056
Abstract
Data reconciliation is a model-based technique that reduces measurement errors by making use of redundancies in process data. It is widely applied in modern process industries and is commercially available in software tools. Based on industrial applications reported in the literature, we have identified and tested different configuration settings, providing a numerical assessment of several important aspects involved in the solution of nonlinear steady-state data reconciliation that are generally overlooked. The discussed items comprise problem formulation, regarding the presence of estimated parameters in the objective function; the solution approach when applying nonlinear programming solvers; methods for estimating objective function gradients; the initial guess; and the optimization algorithm. The study is based on simulations of a rigorous and validated model of a real offshore oil production system. The assessment includes evaluations of solution robustness, constraint violation at convergence, and computational cost. In addition, we propose the use of a global test to detect inconsistencies in the formulation and in the solution of the problem. Results show that different settings have a great impact on the performance of reconciliation procedures, often leading to local solutions. The question of how to satisfactorily solve the data reconciliation problem is discussed so as to obtain improved estimates.
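For readers unfamiliar with the technique, the sketch below shows the linear, equality-constrained special case of weighted least-squares reconciliation, for which a closed-form solution exists, together with a chi-square-type global test statistic; the three-stream splitter balance and all numbers are hypothetical, and the paper itself addresses the harder nonlinear steady-state case.

```python
import numpy as np

def reconcile(y, A, sigma):
    """Closed-form linear data reconciliation: minimize
    (x - y)^T V^-1 (x - y) subject to A x = 0, with V = diag(sigma**2).
    Returns the reconciled estimates and a global test statistic
    gamma = r^T (A V A^T)^-1 r, which follows chi-square(rank A)
    when the model and error assumptions are consistent."""
    V = np.diag(sigma ** 2)
    S = A @ V @ A.T
    r = A @ y                            # constraint residuals of raw data
    x_hat = y - V @ A.T @ np.linalg.solve(S, r)
    gamma = r @ np.linalg.solve(S, r)    # global test statistic
    return x_hat, gamma

# Hypothetical splitter balance: stream 1 = stream 2 + stream 3
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([10.5, 6.0, 4.0])           # measured flows, 0.5 imbalance
sigma = np.array([0.5, 0.5, 0.5])        # measurement standard deviations
x_hat, gamma = reconcile(y, A, sigma)
# A @ x_hat is ~0: the reconciled flows satisfy the balance exactly
```

In the nonlinear case studied in the article, this closed form is replaced by a nonlinear programming solver, which is precisely where the choices of gradient estimation, initial guess, and optimization algorithm assessed by the authors come into play.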