Open Access Article
Online Optimization Applied to a Shockless Explosion Combustor
Processes 2016, 4(4), 48; doi:10.3390/pr4040048
Abstract
Changing the combustion process of a gas turbine from a constant-pressure to a pressure-increasing, approximate constant-volume combustion (aCVC) is one of the most promising ways to increase the efficiency of turbines in the future. In this paper, a newly proposed method to achieve such an aCVC is considered. The so-called shockless explosion combustion (SEC) uses auto-ignition and fuel stratification to achieve a spatially homogeneous ignition. The homogeneity of the ignition can be adjusted by the mixing of fuel and air. A proper filling profile, however, also depends on changing parameters, such as temperature, that cannot be measured in detail due to the harsh conditions inside the combustion tube. Therefore, closed-loop control is required to obtain an adequate injection profile and to reject such unknown disturbances. To this end, an optimization problem is set up and a novel formulation of a discrete extremum seeking controller is presented. By approximating the cost function with a parabola, the first derivative and the Hessian are estimated, allowing the controller to use Newton steps to converge to the optimal control trajectory. The controller is applied to an atmospheric test rig, where the auto-ignition process can be investigated for single ignitions. In the set-up, dimethyl ether is injected into a preheated air stream using a controlled proportional valve. Optical measurements are used to evaluate the auto-ignition process and to show that, using the extremum seeking control approach, the homogeneity of the ignition process can be increased significantly.
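The parabola-fit Newton update at the heart of such a controller can be sketched in a scalar setting. This is a simplified stand-in for the paper's trajectory optimization; the toy cost function, dither amplitude and history length are illustrative assumptions, not values from the paper:

```python
import numpy as np

def newton_step_from_parabola(u_hist, j_hist):
    """Fit J(u) ~ a*u^2 + b*u + c to recent (input, cost) samples and
    return the Newton update u - J'(u)/J''(u) from the fitted model."""
    A = np.vander(np.asarray(u_hist), 3)          # columns: u^2, u, 1
    a, b, _ = np.linalg.lstsq(A, np.asarray(j_hist), rcond=None)[0]
    u = u_hist[-1]
    grad, hess = 2.0*a*u + b, 2.0*a
    if hess <= 0.0:                               # non-convex fit: gradient fallback
        return u - 0.1*grad
    return u - grad/hess

cost = lambda u: (u - 2.0)**2 + 1.0               # toy cost, optimum at u* = 2
rng = np.random.default_rng(0)
u, u_hist, j_hist = 0.0, [], []
for _ in range(15):
    u_probe = u + 0.05*rng.standard_normal()      # small dither keeps the fit well-posed
    u_hist.append(u_probe)
    j_hist.append(cost(u_probe))
    if len(u_hist) >= 3:
        u = newton_step_from_parabola(u_hist[-3:], j_hist[-3:])
```

On an exactly quadratic cost the three-point fit recovers the parabola and a single Newton step lands on the optimum, which is why the dither only needs to keep the sample points distinct.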

Open Access Feature Paper Article
From Single Microparticles to Microfluidic Emulsification: Fundamental Properties (Solubility, Density, Phase Separation) from Micropipette Manipulation of Solvent, Drug and Polymer Microspheres
Processes 2016, 4(4), 49; doi:10.3390/pr4040049
Abstract
The micropipette manipulation technique is capable of making fundamental single-particle measurements and analyses. This information is critical for establishing processing parameters in systems such as microfluidics and homogenization. To demonstrate what can be achieved at the single-particle level, the micropipette technique was used to form and characterize the encapsulation of Ibuprofen (Ibp) into poly(lactic-co-glycolic acid) (PLGA) microspheres from dichloromethane (DCM) solutions, measuring the loading capacity and solubility limits of Ibp in typical PLGA microspheres. Formed in phosphate-buffered saline (PBS), pH 7.4, Ibp/PLGA/DCM microdroplets were uniformly solidified into Ibp/PLGA microparticles up to drug loadings (DL) of 41%. However, at DL of 50 wt% and above, microparticles showed a phase-separated pattern. Working with single microparticles, we also estimated the dissolution time of pure Ibp microspheres in the buffer or in detergent micelle solutions, as a function of the microsphere size, and compared it to dissolution times calculated using the Epstein-Plesset (EP) model. Single, pure Ibp microparticles precipitated as liquid-phase microdroplets that then gradually dissolved into the surrounding PBS medium. Analyzing the dissolution profiles of Ibp over time, a diffusion coefficient of 5.5 ± 0.2 × 10⁻⁶ cm²/s was obtained by using the EP model, which was in excellent agreement with the literature. Finally, solubilization of Ibp into sodium dodecyl sulfate (SDS) micelles was directly visualized microscopically for the first time by the micropipette technique, showing that such micellization could increase the solubility of Ibp from 4 to 80 mM at 100 mM SDS. We also introduce a particular microfluidic device that has recently been used to make PLGA microspheres, showing the importance of optimizing the flow parameters. Using this device, perfectly smooth and size-homogeneous microparticles were formed for flow rates of 0.167 mL/h for the dispersed phase (Qd) and 1.67 mL/h for the water phase (Qc), i.e., a flow rate ratio Qc/Qd of 10, based on parameters such as interfacial tension, dissolution rates and final concentrations. Thus, using the micropipette technique to observe the formation, and to quantify the solvent dissolution, solidification or precipitation, of an active pharmaceutical ingredient (API) or excipient for single and individual microparticles represents a very useful tool for understanding microsphere processes, and hence can help to establish process conditions without resorting to expensive and material-consuming bulk particle runs.
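The Epstein-Plesset calculation referenced above can be sketched numerically as a forward-Euler integration of the EP radius equation, dR/dt = -(D·ΔC/ρ)(1/R + 1/√(πDt)). The solubility difference and droplet density below are illustrative assumptions; only the order of magnitude of D is taken from the abstract:

```python
import numpy as np

def ep_dissolution_time(R0, D, dC, rho, dt=1e-3):
    """Time for a droplet of initial radius R0 (cm) to dissolve, by
    forward-Euler integration of the Epstein-Plesset equation:
        dR/dt = -(D*dC/rho) * (1/R + 1/sqrt(pi*D*t))
    with D in cm^2/s and dC, rho in g/cm^3."""
    R, t = R0, 0.0
    while R > 0.0:
        t += dt
        R -= dt * (D*dC/rho) * (1.0/R + 1.0/np.sqrt(np.pi*D*t))
    return t

D   = 5.5e-6   # cm^2/s, the order of magnitude reported in the abstract
dC  = 1.0e-3   # g/cm^3 solubility difference (assumed)
rho = 1.03     # g/cm^3 droplet density (assumed)
t_5um  = ep_dissolution_time(5e-4, D, dC, rho)    # 5 um radius
t_10um = ep_dissolution_time(10e-4, D, dC, rho)   # 10 um radius
```

For these parameters the transient 1/√(πDt) term is only significant at very early times, so the dissolution time scales approximately as ρR²/(2·D·ΔC), i.e., quadratically in the radius.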

Open Access Article
The Influence of Viscosity on the Static and Dynamic Properties of PS-PEO Covered Emulsion Drops
Processes 2016, 4(4), 47; doi:10.3390/pr4040047
Abstract
Polymer-stabilized emulsions are commonplace in industries ranging from cosmetics and foods to pharmaceuticals. Understanding the physical properties of emulsions is of critical importance to the rapid advancement of industrial applications. In this work, we use a sessile drop geometry to examine the effects of viscosity changes of the surrounding glycerine/water solution on polystyrene-b-polyethylene oxide (PS-PEO) covered toluene droplets. In the experiment, emulsion drops are driven by the buoyant force into a smooth mica surface. The drops buckle as they approach the mica, trapping some of the outer fluid, which slowly drains out over time. The characteristic time of the drainage process as well as the surface tension were measured as a function of glycerine/water concentration. The surface tension is found to have a minimum at a glycerine concentration of approximately 50% (by weight relative to water), and the drainage rate is shown to be well described by a recent model. This simple experiment not only shows how critical features of emulsion stability can be easily and reliably measured, but also identifies important new features of the drainage process.

Open Access Feature Paper Review
Extending Applications of High-Pressure Homogenization by Using Simultaneous Emulsification and Mixing (SEM)—An Overview
Processes 2016, 4(4), 46; doi:10.3390/pr4040046
Abstract
Conventional high-pressure homogenization (HPH) is widely used in the pharmaceutical, chemical, and food industries, among others. In general, its aim is to produce micron- or sub-micron-scale emulsions with excellent product characteristics. However, its energy consumption is still very high. Additionally, several limitations and boundaries impede the use of high-pressure homogenization for special products such as particle-loaded or highly concentrated systems. This article gives an overview of approaches that have been used to improve the conventional high-pressure homogenization process. Emphasis is put on the ‘Simultaneous Emulsification and Mixing’ process that has been developed to broaden the application areas of high-pressure homogenization.

Open Access Article
A Study of Explorative Moves during Modifier Adaptation with Quadratic Approximation
Processes 2016, 4(4), 45; doi:10.3390/pr4040045
Abstract
Modifier adaptation with quadratic approximation (in short, MAWQA) can adapt the operating condition of a process to its economic optimum by combining a theoretical process model with the data collected during process operation. The efficiency of the MAWQA algorithm can be attributed to a well-designed mechanism which ensures the improvement of the economic performance by taking the necessary explorative moves. This paper gives a detailed study of the mechanism of performing explorative moves during modifier adaptation with quadratic approximation. The necessity of the explorative moves is theoretically analyzed. Simulation results for the optimization of a hydroformylation process are used to illustrate the efficiency of the MAWQA algorithm over the finite-difference-based modifier adaptation algorithm.
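The baseline that MAWQA is compared against, modifier adaptation with finite-difference gradient estimates, can be illustrated on a toy scalar problem. The plant/model pair below is an assumed example; MAWQA itself would replace the finite-difference probing with a quadratic approximation fitted to past operating data:

```python
# toy plant and model with structural mismatch (illustrative only)
J_plant = lambda u: (u - 2.0)**2 + 0.5    # "true" process economics
J_model = lambda u: (u - 1.0)**2          # available theoretical model

def fd_gradient(f, u, h=1e-4):
    """Central finite-difference gradient (one explorative move per side)."""
    return (f(u + h) - f(u - h)) / (2.0*h)

u = 0.0
for _ in range(10):
    # first-order modifier: plant/model gradient mismatch at the current point
    lam = fd_gradient(J_plant, u) - fd_gradient(J_model, u)
    # minimize the modified model J_model(u) + lam*u analytically:
    # d/du [(u-1)^2 + lam*u] = 0  =>  u = 1 - lam/2
    u = 1.0 - lam/2.0
```

Despite the model's optimum sitting at u = 1, the gradient modifier drives the iteration to the plant optimum u* = 2, which is the defining property of modifier adaptation.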

Open Access Article
Performance Evaluation of Real Industrial RTO Systems
Processes 2016, 4(4), 44; doi:10.3390/pr4040044
Abstract
The proper design of RTO systems’ structure and critical diagnosis tools is neglected in commercial RTO software and poorly discussed in the literature. In a previous article, Quelhas et al. (Can. J. Chem. Eng., 2013, 91, 652–668) reviewed the concepts behind the two-step RTO approach and discussed the vulnerabilities of intuitive, experience-based RTO design choices. This work evaluates and analyzes the performance of industrial RTO implementations in real settings, regarding the choice of steady-state detection methods and parameters, the choice of adjustable model parameters and selected variables in the model adaptation problem, and the determination of convergence of the optimization techniques, among other aspects, in the presence of real noisy data. Results clearly show the importance of a robust and careful consideration of all aspects of a two-step RTO structure, as well as of the performance evaluation, in order to achieve a real and unambiguous improvement of process operation.
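To make the steady-state detection design choice concrete, a minimal window-based test can be sketched. This is a generic textbook-style check, not the method used in any specific commercial RTO package, and the threshold and test signals are assumptions:

```python
import numpy as np

def is_steady(window, alpha=3.0):
    """Flag a measurement window as steady if the means of its two halves
    differ by less than alpha standard errors (illustrative test only)."""
    x = np.asarray(window, dtype=float)
    a, b = np.array_split(x, 2)
    se = np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))
    return abs(a.mean() - b.mean()) < alpha*se

n = 200
oscillation = np.sin(np.linspace(0.0, 20.0*np.pi, n))        # stationary stand-in for noise
steady   = is_steady(10.0 + oscillation)                     # constant mean: steady
drifting = is_steady(np.linspace(0.0, 5.0, n) + oscillation) # ramp: not steady
```

The window length and threshold alpha are exactly the kind of tuning parameters whose choice the paper argues must be evaluated rather than set intuitively.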

Open Access Feature Paper Article
Combined Estimation and Optimal Control of Batch Membrane Processes
Processes 2016, 4(4), 43; doi:10.3390/pr4040043
Abstract
In this paper, we deal with the model-based time-optimal operation of a batch diafiltration process in the presence of membrane fouling. Membrane fouling poses one of the major problems in the field of membrane processes. We model the fouling behavior and estimate its parameters using various methods. Least-squares, least-squares with a moving horizon, and recursive least-squares methods, as well as the extended Kalman filter, are applied and discussed for the estimation of the fouling behavior online during the process run. Model-based optimal non-linear control coupled with parameter estimation is applied in a simulation case study to show the benefits of the proposed approach.
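Of the estimators listed, recursive least-squares is the easiest to sketch for a fouling model that is linear in its parameters. The resistance-versus-volume model and its numbers below are assumed for illustration, not taken from the paper:

```python
import numpy as np

# synthetic resistance data from an assumed linear-in-parameters fouling model:
# R(V) = theta0 + theta1*V  (clean resistance plus a term growing with permeate volume)
theta_true = np.array([1.0, 0.3])
V = np.linspace(0.0, 10.0, 50)
R_meas = theta_true[0] + theta_true[1]*V          # noise-free for a deterministic demo

theta = np.zeros(2)                               # parameter estimate
P = 1.0e6*np.eye(2)                               # large initial covariance
lam = 0.99                                        # forgetting factor
for v, y in zip(V, R_meas):
    x = np.array([1.0, v])                        # regressor
    K = P @ x / (lam + x @ P @ x)                 # gain
    theta = theta + K*(y - x @ theta)             # innovation update
    P = (P - np.outer(K, x) @ P) / lam            # covariance update
```

The forgetting factor lam < 1 is what lets the estimator track fouling parameters that drift during the batch run, at the cost of higher estimate variance.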

Open Access Feature Paper Article
An Integrated Membrane Process for Butenes Production
Processes 2016, 4(4), 42; doi:10.3390/pr4040042
Abstract
Iso-butene is an important material for the production of chemicals and polymers. It can take part in various chemical reactions, such as hydrogenation, oxidation and other additions, owing to the presence of a reactive double bond. It is usually obtained as a by-product of a petroleum refinery, by Fluidized Catalytic Cracking (FCC) of naphtha or gas-oil. However, an interesting alternative route to iso-butene production is n-butane dehydroisomerization, which allows the direct conversion of n-butane via dehydrogenation and successive isomerization. In this work, a simulation analysis of an integrated membrane system is proposed for the production and recovery of butenes. The dehydroisomerization of n-butane to iso-butene takes place in a membrane reactor where hydrogen is removed from the reaction side with a Pd/Ag alloy membrane. Afterwards, the retentate and permeate post-processing is performed in membrane separation units for butenes concentration and recovery. Four different process schemes are developed. The performance of each membrane unit is analyzed by means of appropriately developed performance maps, to identify the operating condition windows and the membrane permeation properties required to maximize the recovery of the iso-butene produced. An analysis of the integrated systems showed a yield of butenes higher than that of the other reaction products, with high butenes recovery in the gas separation section and molar concentration values between 75% and 80%.

Open Access Feature Paper Review
Optical Measuring Methods for the Investigation of High-Pressure Homogenisation
Processes 2016, 4(4), 41; doi:10.3390/pr4040041
Abstract
High-pressure homogenisation is a commonly used technique to produce emulsions with droplets in the micro to nano scale. Due to the flow field in the homogenizer, stresses are transferred to the interface between the droplets and the continuous phase. Cohesive forces within the droplets interact with these external stresses. To exceed the cohesive forces, high process pressures are necessary, which can cause a complex flow pattern and large flow velocities. Additionally, the pressure drop can induce cavitation. Inline measurements are a challenge, but necessary to understand droplet break-up in a high-pressure homogenizer. Recently, different optical methods have been used to investigate the flow conditions as well as the droplet deformation and break-up in high-pressure homogenisation, such as high-speed imaging, particle image velocimetry and micro particle image velocimetry. In this review, those optical measuring methods are considered critically with respect to their applications and limitations, achievable results and further developments.

Open Access Feature Paper Article
Design of a Multi-Tube Pd-Membrane Module for Tritium Recovery from He in DEMO
Processes 2016, 4(4), 40; doi:10.3390/pr4040040
Abstract
Dense self-supported Pd-alloy membranes are used to selectively separate hydrogen and hydrogen isotopes. In particular, deuterium (D) and tritium (T) are currently identified as the main elements for the sustainability of the nuclear fusion reaction aimed at carbon-free power generation. In fusion nuclear reactors, a breeding blanket produces the tritium, which is extracted and purified before being sent to the plasma chamber in order to sustain the fusion reaction. In this work, the application of Pd-alloy membranes has been tested for recovering tritium from a solid breeding blanket through a helium purge stream. Several simulations have been performed in order to optimize the design of a Pd-Ag multi-tube module in terms of geometry, operating parameters, and membrane module configuration (series vs. parallel). The results demonstrate that a pre-concentration stage before the Pd-membrane unit is mandatory because of the very low tritium concentration in the He which leaves the breeding blanket of the fusion reactor. The most suitable operating conditions could be reached by: (i) increasing the hydrogen partial pressure on the lumen side; and (ii) decreasing the shell pressure. The preliminary design of a membrane unit has been carried out for the case of the DEMO fusion reactor: the optimized membrane module consists of an array of 182 Pd-Ag tubes of 500 mm length, 10 mm diameter, and 0.100 mm wall thickness (total active area of 2.85 m²).
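The permeation behaviour that drives such a module design is commonly described by Sieverts' law for dense Pd-alloy membranes. A back-of-the-envelope sketch using the tube geometry quoted in the abstract follows; the permeability value and the pressures are assumed, not the paper's:

```python
import numpy as np

def sieverts_flux(perm, thickness, p_ret, p_perm):
    """Hydrogen flux through a dense Pd-alloy membrane (Sieverts' law):
    J = (Pe/delta) * (sqrt(p_ret) - sqrt(p_perm)), pressures in Pa."""
    return perm/thickness * (np.sqrt(p_ret) - np.sqrt(p_perm))

# geometry from the abstract: 182 tubes, 10 mm diameter, 500 mm length
area  = 182 * np.pi * 0.010 * 0.500       # m^2, matches the quoted 2.85 m^2
Pe    = 1.0e-8                            # mol m^-1 s^-1 Pa^-0.5 (assumed)
delta = 100e-6                            # 0.100 mm wall thickness (from the abstract)
J = sieverts_flux(Pe, delta, p_ret=1.0e5, p_perm=1.0e3)   # mol m^-2 s^-1
permeated = J*area                        # total permeation rate, mol/s
```

The square-root pressure dependence explains the two design levers named in the abstract: raising the lumen-side partial pressure or lowering the shell pressure both widen the Sieverts driving force.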

Open Access Feature Paper Review
Algorithms for a Single Hormone Closed-Loop Artificial Pancreas: Challenges Pertinent to Chemical Process Operations and Control
Processes 2016, 4(4), 39; doi:10.3390/pr4040039
Abstract
The development of a closed-loop artificial pancreas to regulate the blood glucose concentration of individuals with type 1 diabetes has been a focused area of research for over 50 years, with rapid progress during the past decade. The daily control challenges faced by someone with type 1 diabetes include asymmetric objectives and risks, and one-sided manipulated input action with frequent, relatively fast disturbances. The major automation steps toward a closed-loop artificial pancreas include (i) monitoring and overnight alarms for hypoglycemia (low blood glucose); (ii) overnight low glucose suspend (LGS) systems to prevent hypoglycemia; and (iii) fully closed-loop systems that adjust insulin (and perhaps glucagon) to maintain desired blood glucose levels day and night. We focus on the steps that we used to develop and test a probabilistic, risk-based, model predictive control strategy for a fully closed-loop artificial pancreas. We complete the paper by discussing ramifications of the lessons learned for chemical process systems applications.

Open Access Article
Modeling and Hemofiltration Treatment of Acute Inflammation
Processes 2016, 4(4), 38; doi:10.3390/pr4040038
Abstract
The body responds to endotoxins by triggering the acute inflammatory response system to eliminate the threat posed by gram-negative bacteria (endotoxin) and restore health. However, an uncontrolled inflammatory response can lead to tissue damage, organ failure, and ultimately death; this is clinically known as sepsis. Mathematical models of acute inflammatory disease have the potential to guide treatment decisions in critically ill patients. In this work, an 8-state (8-D) differential equation model of the acute inflammatory response system to endotoxin challenge was developed. Endotoxin challenges at 3 and 12 mg/kg were administered to rats, and dynamic cytokine data for interleukin (IL)-6, tumor necrosis factor (TNF), and IL-10 were obtained and used to calibrate the model. Evaluation of competing model structures was performed by analyzing model predictions at 3, 6, and 12 mg/kg endotoxin challenges with respect to experimental data from rats. Subsequently, a model predictive control (MPC) algorithm was synthesized to control a hemoadsorption (HA) device, a blood purification treatment for acute inflammation. A particle filter (PF) algorithm was implemented to estimate the full state vector of the endotoxemic rat based on time-series cytokine measurements. Treatment simulations show that: (i) the apparent primary mechanism of HA efficacy is white blood cell (WBC) capture, with cytokine capture a secondary benefit; and (ii) differential filtering of cytokines and WBC does not provide substantial improvement in treatment outcomes vs. existing HA devices.
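A bootstrap particle filter of the kind used for state estimation here can be sketched on a 1-D toy system. This is a stand-in for the paper's 8-state model; the decay rate, noise levels and particle count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_pf(y_meas, n=500):
    """Bootstrap particle filter for the toy model
    x[k+1] = 0.9*x[k] + w,  y[k] = x[k] + v  (Gaussian w, v)."""
    particles = rng.normal(5.0, 2.0, n)                       # samples from a broad prior
    x_hat = []
    for y in y_meas:
        particles = 0.9*particles + rng.normal(0.0, 0.1, n)   # propagate dynamics
        w = np.exp(-0.5*((y - particles)/0.5)**2)             # likelihood weights
        w /= w.sum()
        particles = particles[rng.choice(n, n, p=w)]          # resample
        x_hat.append(particles.mean())                        # posterior-mean estimate
    return np.array(x_hat)

x_true = 4.0 * 0.9**np.arange(30)                 # decaying "cytokine" trajectory
y_meas = x_true + rng.normal(0.0, 0.5, 30)        # noisy measurements
x_hat = bootstrap_pf(y_meas)
```

Unlike an extended Kalman filter, the particle filter needs no linearization, which is why it suits the strongly non-linear cytokine dynamics described in the abstract.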

Open Access Feature Paper Project Report
Process Intensification in Fuel Cell CHP Systems, the ReforCELL Project
Processes 2016, 4(4), 37; doi:10.3390/pr4040037
Abstract
This paper reports the findings of an FP7/FCH JU project (ReforCELL) that developed materials (catalysts and membranes) and an advanced autothermal membrane reformer for a 5 kWel micro Combined Heat and Power (CHP) system based on a polymer electrolyte membrane fuel cell (PEMFC). In this project, an active, stable and selective catalyst was developed for the reactions of interest, and its production was scaled up to the kg scale (TRL5; TRL: Technology Readiness Level). Simultaneously, new membranes for gas separation were developed. In particular, dense supported thin palladium-based membranes were developed for hydrogen separation from reactive mixtures. These membranes were successfully scaled up to TRL4 and used in lab-scale reactors for fluidized bed steam methane reforming (SMR) and autothermal reforming (ATR), and in a prototype reactor for ATR. Suitable sealing techniques able to integrate the different membranes in lab-scale and prototype reactors were also developed. The project also addressed the design and optimization of the balance-of-plant (BoP) subcomponents for the integration of the membrane reformer with the fuel cell system.

Open Access Review
A Review on the Dissection of Quenched Blast Furnaces—Spanning from the Early 1950s to the 1970s
Processes 2016, 4(4), 36; doi:10.3390/pr4040036
Abstract
From its invention until the 1950s, the iron blast furnace was viewed as a strange ‘black box’. Its operation was largely empirical, and much of the information needed for monitoring and control of the process was yet to be known. More complete information was needed concerning the process, such as the reduction of iron-bearing raw materials; the distribution of materials throughout the stack; the size, location, and structure of the fusion zone; and the transfer of silicon, sulfur, and carbon to the slag and metal. Hence, to obtain a better understanding of the blast furnace process, some iron-makers came up with the idea of quenching the contents of the furnace following normal operations. This was done in a neutral nitrogen atmosphere. The quenched contents were then sampled for analysis. This paper discusses such works, spanning from the early 1950s to the 1970s. Care has been taken to include most of their findings, and readers who have a fair amount of iron-making knowledge should be able to see and understand the in-furnace phenomena as the ‘black box’ unfolds itself. Most of the text is focused on two important studies into the matter, the first being the U.S. Bureau of Mines case in 1959 and the second being the Iron and Steel Institute of Japan (ISIJ) studies in the 1970s. The contribution of these works to modern-day blast furnace operation is also discussed in the paper.

Open Access Feature Paper Review
Embedded Control in Wearable Medical Devices: Application to the Artificial Pancreas
Processes 2016, 4(4), 35; doi:10.3390/pr4040035
Abstract
Significant increases in processing power, coupled with the miniaturization of processing units operating at low power levels, have motivated the embedding of modern control systems into medical devices. The design of such embedded decision-making strategies for medical applications is driven by multiple crucial factors, such as: (i) guaranteed safety in the presence of exogenous disturbances and unexpected system failures; (ii) constraints on computing resources; (iii) portability and longevity in terms of size and power consumption; and (iv) constraints on manufacturing and maintenance costs. Embedded control systems are especially compelling in the context of modern artificial pancreas (AP) systems used in glucose regulation for patients with type 1 diabetes mellitus (T1DM). Herein, a review of potential embedded control strategies that can be leveraged in a fully-automated and portable AP is presented. Amongst competing controllers, emphasis is placed on model predictive control (MPC), since it has been established as a very promising control strategy for glucose regulation using the AP. Challenges involved in the design, implementation and validation of safety-critical embedded model predictive controllers for the AP application are discussed in detail. Additionally, the computational expenditure inherent to MPC strategies is investigated, and a comparative study of runtime performance and storage requirements among modern quadratic programming solvers is reported for a desktop environment and a prototype hardware platform.
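To make the QP-solver discussion concrete, here is a minimal first-order QP solver of the kind often favoured on embedded targets for its simple, fixed-cost iterations. It is illustrative only, not one of the solvers benchmarked in the paper, and the two-variable problem data are assumed:

```python
import numpy as np

def qp_projected_gradient(H, g, lb, ub, iters=300):
    """Solve min 0.5*u'Hu + g'u  s.t.  lb <= u <= ub  by projected
    gradient descent with fixed step 1/L (L = largest eigenvalue of H)."""
    L = np.linalg.eigvalsh(H)[-1]                 # Lipschitz constant of the gradient
    u = np.clip(np.zeros_like(g), lb, ub)
    for _ in range(iters):
        u = np.clip(u - (H @ u + g)/L, lb, ub)    # gradient step, then box projection
    return u

# toy two-move MPC quadratic program (assumed numbers)
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g = np.array([-1.0, -0.5])
u = qp_projected_gradient(H, g, lb=np.zeros(2), ub=np.full(2, 2.0))
```

Its fixed iteration count and lack of matrix factorizations make worst-case runtime and memory easy to bound, which is precisely the trade-off against faster-converging active-set or interior-point solvers that an embedded benchmark explores.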

Open Access Article
Operator Training Simulator for an Industrial Bioethanol Plant
Processes 2016, 4(4), 34; doi:10.3390/pr4040034
Abstract
Operator training simulators (OTS) are software tools for training process operators in large-scale industrial applications. Here, we describe the development, implementation and training use of an OTS for a large-scale industrial plant for bioethanol production. The design of the OTS is based on a conceptual analysis (previously reported by us in this journal) of various configuration alternatives and training procedures at the plant. In this article, we report on how the conceptual design is used in simulation models and graphical user interfaces, and how the design is applied for the training of operators in the real plant environment. The results imply that OTS would be time- and cost-efficient tools for application in the biotechnological industry.

Open Access Feature Paper Article
Pure Hydrogen Production in Membrane Reactor with Mixed Reforming Reaction by Utilizing Waste Gas: A Case Study
Processes 2016, 4(3), 33; doi:10.3390/pr4030033
Abstract
A rise in the atmospheric concentration of CO2 and other greenhouse gases from gas refinery flares and furnaces causes environmental problems. In this work, a new process was designed to use the waste gas (flue gas and flare gas) of a domestic gas refinery to produce pure hydrogen in a membrane reactor (MR). In particular, the process foresees that the energy and CO2 content of the flue gas can provide the heat for the mixed reforming reaction to convert flare gas into hydrogen. Furthermore, the characteristics of the feed stream were obtained via simulation. Then, an experimental setup was built to investigate the performance of a membrane reactor housing an unsupported dense Pd-Ag membrane under the mentioned conditions. In this regard, a Ni/CeO2 catalyst was loaded in the membrane reformer for the mixed reforming reaction, operating at 450 °C, in a pressure range between 100 and 350 kPa, and at a gas hourly space velocity of around 1000 h⁻¹. The experimental results in terms of methane conversion, hydrogen recovery and yield, as well as product compositions, are reported. The best results of this work were observed at 350 kPa, where the MR was able to achieve about 64%, 52% and 50% for methane conversion, hydrogen yield and recovery, respectively. Furthermore, with the assistance of the experimental tests, the proposed process was simulated at scale to calculate the membrane surface area needed for the MR in the domestic gas refinery.

Open Access Feature Paper Review
Origins and Evolution of Inorganic-Based and MOF-Based Mixed-Matrix Membranes for Gas Separations
Processes 2016, 4(3), 32; doi:10.3390/pr4030032
Abstract
Gas separation for industrial, energy, and environmental applications requires low energy consumption and small-footprint technology to minimize operating and capital costs for the processing of large volumes of gases. Among the separation methods currently being used, such as distillation, amine scrubbing, and pressure and temperature swing adsorption, membrane-based gas separation has the potential to meet these demands. The key component, the membrane, must then be engineered to allow for high gas flux, high selectivity, and chemical and mechanical stability at the operating conditions of feed composition, pressure, and temperature. Among the new types of membranes studied that show promising results are the inorganic-based and the metal-organic framework-based mixed-matrix membranes (MOF-MMMs). A MOF is a unique material that offers the possibility of tuning the porosity of a membrane by introducing diffusional channels and forming a compatible interface with the polymer. This review details the origins of these membranes and their evolution since the first inorganic/polymer and MOF/polymer MMMs were reported in the open literature. The most significant advancements made in terms of materials, properties, and testing conditions are described in chronological fashion.

Open Access Feature Paper Article
Comparison of Membrane Chromatography and Monolith Chromatography for Lactoferrin and Bovine Serum Albumin Separation
Processes 2016, 4(3), 31; doi:10.3390/pr4030031
Abstract
Over the last few decades, membranes and monoliths have been increasingly used as stationary phases for chromatography. Their fast mass transfer is mainly based on convection, which avoids the diffusion limitations usually observed in resins. Nevertheless, poor flow distribution, which causes inefficient binding, remains a major challenge for the development of both membrane and monolith devices. Moreover, the comparison of membranes and monoliths for biomolecule separation has scarcely been investigated. In this paper, the separation of two proteins, bovine serum albumin (BSA) and lactoferrin (LF), of similar size but different isoelectric points, was investigated at a pH of 6.0 with a BSA/LF concentration ratio of 2/1 (2.00 mg·mL⁻¹ BSA and 1.00 mg·mL⁻¹ LF solution) using strong cation exchange membranes and monoliths packed in the same housing, as well as commercialized devices. The feeding flow rate was 12.0 bed volumes (BV)/min for all devices. Afterward, bound LF was eluted using a phosphate-buffered saline solution with 2.00 M NaCl. Using membranes in a CIM housing from BIA Separations (Slovenia), with porous frits before and after the membrane bed, higher binding capacities, sharper breakthrough curves, and sharper and more symmetric elution peaks were obtained. The monolith and commercialized membrane devices showed lower LF binding capacity and broader, non-symmetric elution peaks.

Open Access Article
Incorporating Enhanced Decision-Making Capabilities into a Hybrid Simulator for Scheduling of Batch Processes
Processes 2016, 4(3), 30; doi:10.3390/pr4030030
Abstract
A simulation model can accurately capture the details of product recipes in a batch process. By incorporating enhanced capabilities for making key assignment decisions in the simulation executive, a model can mimic the experiential knowledge and rules employed in operating a process. As process complexity and problem size increase, generating schedules using mathematical programming (MP) techniques becomes increasingly difficult. A simulation run typically takes very little computation time and generates a schedule that is verifiable. Moreover, the model can be used to explore a wide range of the parametric space to evaluate alternate policies and the impact of process uncertainties. Although there is no guarantee of optimality, the quality of the schedules thus generated is very good, and they can be deployed in operations. In this paper, the decision-making capabilities of the BATCHES simulator are presented along with their application to a set of scheduling problems reported extensively in the literature. The results show that ‘smart’ simulation can be used effectively for a large set of scheduling problems.