Robust Additive Manufacturing Performance through a Control Oriented Digital Twin

Abstract: Process control of additive manufacturing through digital twins is an emerging topic. However, robustness of the process performance is still an open aspect, due to uncertainties in, e.g., material properties. To this end, in this work, a digital twin offering uncertainty management and robust process control is designed and implemented. Linear Matrix Inequalities (LMIs) are adopted as the process control design method. Within specific uncertainty limits, the performance of the process is proven to remain acceptably constant, thus achieving robust additive manufacturing. Variations of the control law are also investigated, so that the applicability of the control can be demonstrated on different machine architectures. The proposed controllers are compared against a fine-tuned conventional proportional-integral-derivative (PID) controller and the initial open-loop model for metal manufacturing. The robust control design achieved a 68% faster response in the settling-time metric, while the well-calibrated PID achieved only a 38% improvement over the initial model.


Introduction
Thermal-based manufacturing processes such as additive manufacturing (AM) are gaining more and more attention in the aerospace, automotive, and other industrial sectors [1]. At the same time, recent advances in AM have introduced new challenges in monitoring and control, as the need for high-quality AM products requires controllable operations throughout the shopfloor in the presence of disturbances, uncertainty in material properties, and delays [2][3][4]. What is more, adaptive control in manufacturing processes generally provides undoubtedly better process performance, in terms of tool wear in subtractive processes or higher efficiency in non-conventional ones, such as energy efficiency in laser-based AM [5][6][7].
As a matter of fact, the impact of AM can be gauged from the number of publications in journals, conferences, and white papers, with most papers published in the last decade. Most publications regard the optimization of process parameters, process control [8,9], monitoring techniques, machine learning methods for the classification of online captured data based on image sensing, process and material phenomena [10], thermal or multiphysics modelling [11,12], and other fields. Figures 1 and 2 summarize the growth of, and the attention from, academia and industry in authoring new publications on AM regarding the controllability of the processes, monitoring, microstructure control, optimization, modeling, and other perspectives of AM. The datapoints in both figures are obtained from the Scopus database.
It is also noted that closed-loop control (given the latest developments in control theory) appears to overcome the challenges of tracking and stabilization, and additionally, it could constitute an underlying technology in digital twin (DT) workflows [13]. As a matter of fact, three main categories of control can be identified: adaptive, optimal, and robust [14,15]. Robust control methodology is utilized in the current research work, as it deals with the assumption of variation of the nominal process within specified bounds. This is attempted through the optimization problem of the Linear Matrix Inequalities (LMIs), which are widely used in the control of dynamical systems [16]. The concept of robust control engineering in manufacturing processes and systems with similar optimization algorithms has been applied to milling [17] and to the inventory problem [18] through LMIs.
Figure 1. Evolution of the number of publications regarding additive manufacturing (AM), based on the Scopus database and the query TITLE-ABS-KEY(additive manufacturing). Accessed on 18 March 2021.


State of the Art on Manufacturing Processes Monitoring and Control
Process monitoring and control are essential to manufacturing systems and processes in order for them to be operated under the requirements set by the engineers, while keeping errors minimal in an automated and energy-efficient manner [6]. The latter is an ongoing challenge in the current industrial trend of characterizing the sustainability of production processes and systems by monitoring all the relevant Key Performance Indicators (KPIs) through a variety of sensors in each case. Sustainability is also affected by quality policies, such as defect and waste reduction. For instance, a standard approach in laser-based manufacturing processes, most common for in situ quality monitoring, is to use CCD- and CMOS-based cameras [19] for measuring melt-pool characteristics such as the melting area, a melting distance (either width or length), and the peak temperature, utilizing the appropriate filters. However, the datasets of such processing units are large, and there is a need for data reduction through approaches such as Principal Component Analysis (PCA) in order to capture the useful data [20]. For instance, typical machine learning algorithms are neural networks implementing the classification of parts with respect to their quality, in terms of microstructure, pore creation, and crack development, with exceptional success rates [21]. One additional challenge is that of real-time communication in the processing of big data, especially in thermal-based processes where the measurement interval is short. To this end, Stavropoulos et al. [22] proposed and implemented a human-machine interface (HMI) quality diagnosis platform for the case of laser welding, utilizing PCA and a support vector machine (SVM) for a first prediction of the stitch, and then hidden Markov models to address the part quality assessment.
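As a sketch of the PCA-based data-reduction step described above, the following snippet projects a batch of flattened camera frames onto their top principal components via the SVD. The frame dimensions and synthetic data are assumptions standing in for real melt-pool imagery:

```python
import numpy as np

# Hypothetical batch of flattened melt-pool camera frames:
# 200 frames, each 32x32 pixels -> 1024 features (synthetic data).
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 1024))

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components via SVD."""
    X_centered = X - X.mean(axis=0)
    # Economy SVD: the rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

reduced = pca_reduce(frames, n_components=10)
print(reduced.shape)  # (200, 10)
```

In practice, the retained number of components would be chosen from the explained-variance ratio rather than fixed in advance.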
The aforementioned measurements from the sensors are vital to the controller design (also known as controller synthesis) and thus to the manipulation of the production; typical industrial applications have been found to utilize PID controllers. Where the process requires more sophisticated control, such as controlling multiple-input multiple-output (MIMO) dynamic systems (manufacturing processes), model predictive controllers (MPC) have been recommended [23], due to predefined constraints that take into account both control design and optimality [24]. Another approach to controller design is through convex optimization with different strategies of stabilization, tracking, state feedback, and observer-based control, such as the ones discussed by López-Estrada et al. [25]. Nevertheless, open-loop AM machines still work on the basis of the optimal selection of process parameters [26]. Most of the publications on AM processes and process control investigate real-time physical experiments with the use of sensing devices such as pyrometers, thermal cameras, acoustic emission, and other monitoring techniques, in order to collect valuable data and use them for the control of dynamic systems [19,27]. Calleja et al. focused on the process control of the feed-rate in laser cladding to track the desired path-feed [28]. Most authors use controllers either from the PID family, such as a simple proportional gain K_p (P controller) or a combination of P, PI, and PID [29][30][31][32][33], or a more complex methodology, such as model predictive control (MPC) [34][35][36][37]. However, no publication has been found to apply an H-Infinity (a norm correlating input and output), H-2 (a different norm), or mixed-mode controller via solving Linear Matrix Inequalities; hence, this is investigated here as an effective means to control a single parameter, that of the maximum observed temperature in the melt-pool area, under uncertainties.
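For contrast with the LMI-based designs investigated here, a minimal discrete PID loop of the kind referenced above can be sketched as follows. The first-order thermal plant and the gains are purely illustrative assumptions, not a tuned AM model, and no actuator saturation or anti-windup is modelled:

```python
# Minimal discrete PID loop on a hypothetical first-order thermal plant
# T[k+1] = a*T[k] + b*u[k]; plant and gains are illustrative assumptions,
# not the paper's fine-tuned values.
def pid_step(error, state, kp, ki, kd, dt):
    """One PID update; state carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

a, b, dt = 0.95, 0.5, 1e-3           # assumed plant pole, input gain, sample time
setpoint, T = 2000.0, 293.0          # target peak temperature and ambient start (K)
state = (0.0, setpoint - T)
for _ in range(2000):
    u, state = pid_step(setpoint - T, state, kp=2.0, ki=40.0, kd=0.0, dt=dt)
    T = a * T + b * u

print(round(T, 1))  # converges to the 2000.0 K setpoint
```

The integral term supplies the steady-state laser power; a real implementation would additionally clip the command to the machine's power range and guard the integrator against windup.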
The current work proposes a framework for building a digital twin towards robust manufacturing of metal parts by integrating models. The goal is to use this framework for real-time manufacturing process optimization.

Approach
Thermal processes, such as AM (and welding), as implied above, need attention towards maintaining the predefined process parameters within specific bounds in order to properly manufacture the designed product; each processing iteration, even for the same component, might differ due to environmental parameters such as the ambient temperature, powder properties, or the welding configuration. As a matter of fact, a recent article in the Encyclopedia of Systems and Control described control in AM as follows: "from a control standpoint, AM processes fundamentally require delivering a specified amount of material and energy at prescribed locations and time" [38]. In this sense, accuracy in the control of the temperature field is of high importance, in terms of both cycle time and product quality [39]. Controlling the power of the laser is crucial, as well as the motion of the head [40,41] and the regulation of the laser frequency. The latter can be complicated in terms of the physics phenomena involved, leading to increased modelling complexity [42] and computational time. In any case, theoretical modelling is of high importance in this sense [10,43]. From the perspective of real-time optimization, controller design can be approached through a closed loop involving convex optimization, taking into account different strategies and implementations, including stabilization, tracking, state feedback, and observer-based control, such as the ones discussed in [25]. Figure 3 shows the novel concept of the Robust Manufacturing Process, which mainly consists of the monitoring devices, a control system, and the stock (or raw material), all depending on the process. Furthermore, the Digital Twin is becoming more and more relevant in industrial applications, due to the enhancements that it provides.
The current concept for designing a Digital Twin, taking into account the Robust Manufacturing Process concept, involves three different categories of inputs and a single category of output. The first one is the "Demands", which are the initial criteria of the customer or the firm. Such criteria are the optimization of the process, energy efficiency, reduction of the processing time, robustness of the synthesis against uncertainties, disturbances, and delay, and better product quality. The second one is the "Resources", which involve the physical aspects of the process, such as the sensing devices (e.g., CMOS, CCD, pyrometers, etc.), the laser head and the light source, the raw material or the stock, and the Digital Twin simulation based on data-driven models. The third and last of the inputs is related to the robustness, taking into account all the different uncertainties of the process. The uncertainties may be characterized by the variation of the material properties (e.g., powder), of the monitoring signal (the feedback signal from the process output), and of the controller's signals, which drive the actuators of the process. Finally, it is mentioned that the "Outputs" are required to satisfy the aforementioned demands and to achieve a near-perfect product, taking the uncertainties into account as well.
The current research concerns process control towards guaranteeing a desired performance in the response. A FEM procedure for LPBF is built to verify the process control through this innovative solution. This model has been verified in the past [41] and has been proven to predict the process behavior to a satisfactory degree. It constitutes a nominal process behavior that the digital twin can operate on and add uncertainty to. The results from the model are imported into a software tool for manipulating dynamic systems [44] via a parametric dynamic system identification involving the desired KPIs, such as the peak temperature of the melt-pool and the melt-pool size, as well as process parameters, such as the laser power. The identification estimates a state-space realization of the empirical model, and a robust controller can then be generated. The result of the so-called "Re-Run Model" (defined as the "real" dynamic system response to the controller output) is used to evaluate the performance of the controller with respect to the tracking response. This approach can be applied to other processes as well, taking into account their physics, as well as to systemic approaches, such as chemical kinetics.
The non-linearities have been shown to be addressable on a case-by-case basis, through linearization and changes in the system dynamics.
Since control through a digital twin towards guaranteeing a desired performance in the process response is relevant regardless of the existence of uncertainties/disturbances, an elaborated framework has been designed to address this, as shown in Figure 4. Four phases are considered, annotated in Roman numerals (preparation, training, running, generalization). This procedure is harmonized with architecture and implementation principles [45] and takes into account a variety of technologies indicated hereafter. Furthermore, there are three layers into which the entities have been classified (physical, cyber, and cognitive). The training and running phases, engaging the cyber and the physical aspects, are mainly presented herein, with the cognition and knowledge parts intentionally treated only briefly, without loss of generality. Examples of knowledge management that are highly relevant in this case are related to aggregating KPIs towards alarm management [46] and creating intuition through modelling-oriented knowledge towards correlating process mechanism aspects to process performance [47].
The training phase, which is shown in detail in Figure 5, is crucial for the quantification of the material properties in order for the control design to operate based on a nominal process model. Moreover, the integration of the uncertainty is achieved in this way, since a set of models is then defined. This piece of information is useful mainly for the design of the process control form. The equations of the design require an estimation of the boundaries of the uncertainty. This will be illustrated in the case study, hereafter. It is worth mentioning that the use of fast monitoring and identification techniques is, in this case, very important for the training phase's feasibility. Such techniques are real-time spectroscopy [48], as well as thermal methods. Further functionalities of the training procedure are (i) model manipulation and (ii) the tracking capability. The first one is related to retrieving real-time models out of theoretical models. These can then be integrated into the digital twin workflow for the estimation of the process control form (also known as the process control law).
On the other hand, the tracking capability refers to the feasibility of the control form, since machine controllers do not always have the ability to assimilate a new control form. The adaptation of the controller in such an instance, as tailored for the current novel Digital Twin, is also demonstrated in the case study, having taken a pulsed laser source into consideration. To accelerate the training procedure, a database, acting as a fast control design medium, can be integrated for cases where the material properties and the uncertainties are recurring. The actual controller parameters can then be retrieved in an automated way. The process of making the corresponding associations takes place in the fourth phase, named 'Generalization', and provides the framework with knowledge management and flexibility.
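The controller-parameter database described above can be sketched as a simple keyed cache: previously synthesized controller parameters are stored per (material, uncertainty bound) pair and retrieved instead of re-solving the LMI problem. The material names, key structure, and gain values below are hypothetical placeholders:

```python
# Minimal sketch of the 'Generalization' database idea; keys and gains are
# hypothetical placeholders, not the paper's stored controllers.
controller_db = {}

def store_controller(material, uncert_bound, params):
    """Cache controller parameters for a (material, uncertainty bound) pair."""
    controller_db[(material, round(uncert_bound, 3))] = params

def retrieve_controller(material, uncert_bound):
    """Return cached parameters, or None to trigger a fresh LMI synthesis."""
    return controller_db.get((material, round(uncert_bound, 3)))

store_controller("Ti-6Al-4V", 0.10,
                 {"Ac": [[0.9]], "Bc": [[0.1]], "Cc": [[1.2]], "Dc": [[0.0]]})
hit = retrieve_controller("Ti-6Al-4V", 0.10)   # recurring case: reuse
miss = retrieve_controller("SS316L", 0.05)     # new case: synthesize anew
print(hit is not None, miss is None)  # True True
```

A production system would key the lookup on the full set of quantified material properties rather than a single bound, but the reuse-or-resynthesize decision is the same.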

Simulation
Laser-based AM processes are complex phenomena (Figure 6), since they involve materials, especially powders, with specific spatial and size distributions, thermal-fluid dynamics during melting, a dual phase transition (melting and evaporation), the presence of shielding gases (e.g., argon [49]), and others. In this research work, the continuum modelling approach was adopted on the basis of the Finite Element Method (FEM), as uncertainty is taken into account at the fourth step.
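As a drastically simplified stand-in for the continuum thermal model (one spatial dimension, explicit finite differences, no phase change or fluid flow), the following sketch shows a Gaussian laser source heating a powder track. All numerical values, including the volumetric source scale, are assumptions, not the paper's FEM setup:

```python
import numpy as np

# 1-D explicit finite-difference analogue of the continuum thermal model:
# heat diffusion along a 1 mm powder track with a stationary Gaussian source.
k, rho, cp = 7.0, 4420.0, 550.0        # W/m K, kg/m^3, J/kg K (approx. Ti-6Al-4V)
alpha = k / (rho * cp)                  # thermal diffusivity, ~2.9e-6 m^2/s
dx, dt = 5e-6, 1e-6                     # 5 um grid; stable: dt < dx^2 / (2 alpha)
n = 201
x = np.linspace(0.0, 1e-3, n)           # 1 mm track
T = np.full(n, 293.0)                   # ambient initial condition (K)
r0, power = 50e-6, 2.0e13               # beam radius; volumetric source scale (assumed)
source = power * np.exp(-2.0 * (x - 0.5e-3) ** 2 / r0 ** 2)

for _ in range(500):                    # 0.5 ms of simulated time
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
    T = T + dt * (alpha * lap + source / (rho * cp))
    T[0] = T[-1] = 293.0                # fixed-temperature ends

print(round(float(T.max()), 1))         # peak track temperature after 0.5 ms
```

The explicit scheme is only stable for dt * alpha / dx^2 < 0.5 (here about 0.12); the paper's 2-D FEM with phase change and temperature-dependent properties is substantially richer than this sketch.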


Dynamic System Identification of Simulation
In order to synthesize a controller, a mathematical model needs to be calculated through the input(s)-output(s) relationship, as given from physics-based models. The dynamic process can be identified with polynomial approaches, such as the autoregressive moving-average with exogenous input (ARMAX), autoregressive with exogenous input (ARX), Box-Jenkins (BJ), and Output-Error (OE) models, or with state-space methods such as subspace identification [35]. In this study, the ARX technique has been utilized, derived using appropriate model-selection criteria.
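A minimal version of the ARX fit can be written as an ordinary least-squares problem. The snippet below identifies a second-order ARX model from synthetic input-output data; the true system and the model orders are assumptions for illustration, standing in for the laser power (input) and peak melt-pool temperature (output) of the paper:

```python
import numpy as np

# Generate noise-free data from an assumed second-order ARX system:
# y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
rng = np.random.default_rng(1)
a_true, b_true = np.array([-1.5, 0.7]), np.array([1.0, 0.5])
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = -a_true @ y[k-2:k][::-1] + b_true @ u[k-2:k][::-1]

def fit_arx(y, u, na=2, nb=2):
    """Fit ARX coefficients (a, b) by least squares on the regressor matrix."""
    n = max(na, nb)
    rows = [np.concatenate([-y[k-na:k][::-1], u[k-nb:k][::-1]])
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]

a_hat, b_hat = fit_arx(y, u)
print(np.round(a_hat, 3), np.round(b_hat, 3))
```

With noise-free data the coefficients are recovered exactly; in practice, the orders na and nb would be selected with criteria such as AIC before the state-space realization is formed.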

Process Control of Laser-Based Manufacturing Processes
The next step is the design of the process controller. In this research work, optimal and robust controllers are adopted. This is performed by solving an optimization problem (a system of linear matrix inequalities). Through this procedure, the controller matrices are retrieved and optimality under constraints is guaranteed, leading to the desired controller. The user defines the KPIs that the process should track, namely the peak temperature (the choice in this work), melt-pool characteristics (width, length, depth), solidification rate, cool-down rate, and others. The closed-loop system, as illustrated in Figure 7, is then constructed to meet the output KPI requirements and regulate the dynamic profile of the process parameter(s).





Uncertainty Manipulation through Robustness
The main idea of the LMI method is to define a known problem as an optimization problem with a linear objective and linear matrix inequality (LMI) constraints, as shown in Equation (1) [51]. For controlling a dynamic system, the H-Infinity and H-2 algorithms are widely accepted for robustness of performance without the tuning of parameters, as is required by PID controllers. Thus, each dynamic system that represents an AM process should be designed under one or more of the following assumptions:
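In its generic standard form (a textbook statement included for completeness; the specific matrices of Equation (1) are not reproduced in the text), the LMI optimization problem reads:

```latex
% Generic LMI optimization problem (standard form of Equation (1));
% c and the symmetric matrices F_i are problem-specific data.
\min_{x \in \mathbb{R}^{m}} \; c^{\top} x
\quad \text{subject to} \quad
F(x) = F_0 + \sum_{i=1}^{m} x_i F_i \succeq 0
```

The constraint is convex in the decision vector x, which is what allows the H-2 and H-Infinity synthesis conditions to be solved globally by semidefinite programming.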
The disadvantage of the convex optimization approach to control, especially in the field of thermal-based processes, is the complexity of the system model, in comparison with a simple PID controller, whose three parameters are easy to tune but come with no guarantee that the tuned values are optimal. The proposed controllers, however, guarantee the robust optimization of the system without the tuning of controller parameters. In this research work, we adopt three optimal controllers from Caverly and Forbes [52] and Oliveira et al. [53] and a robust MPC from Hu and Ding [54]. In the literature on AM processes, under uncertainty analysis, the reliability and the variation in the quality of the powder are still a challenge [56]. The powder parameters during melting and solidification are sources of uncertainty [57]. Other sources of uncertainty in the material are the diameter of the particles in the powder, the absorption coefficient, the diffusion coefficient, and others. Lopez et al. [4] have attempted to address such issues, assuming that the absorption coefficient, latent heat, melting temperature, and thermal diffusivity are normally distributed. In the measurement field, the sensing devices themselves (e.g., uncertainties in the temperature signals) may introduce errors.

Case Study: Robust Digital Twin for LPBF
In this work, the finite element method in two-dimensional space is adopted to demonstrate a laser powder bed fusion (LPBF) AM process, as performed previously in the literature. A single layer of powder (1.0 mm × 0.05 mm) and a substrate of 1.0 mm × 0.15 mm are assumed for the model. A sensitivity analysis is performed to adjust the FEM parameters, such as the mesh size, the type of mesh element, the discretization elements, the absolute tolerance, and the relative tolerance, in order to stabilize the solution and ensure convergence. A finer mesh (linear quad type) with a mesh size of 5 µm (ten times finer than the beam radius) is employed for the layer, and a coarser mesh for the substrate. For the purpose of synthesizing a robust controller, only the heat transfer mechanisms are considered, including radiation and convection. Mass and momentum equilibrium equations, i.e., computational fluid dynamics (CFD) coupled with heat transfer (multiphysics modelling) [58], the Marangoni effect (surface tension), and vapor recoil pressure (evaporation from the liquid to the gaseous state), have been deliberately neglected to simplify the process control [49]. It has been shown through simulations that, for the process parameters used herein, such phenomena do not have a major impact on the results, while adding considerable modelling complexity. Neglecting such phenomena, however, is not advised in the general case. Finally, it is mentioned that the phase change is also integrated, as described by Ansari et al. [59].

Heat Transfer of the Model
The thermal equilibrium is described by Equation (2), where Q is the external heat source to the system (here delivered by a Gaussian beam, which is mainly used in LPBF [60]), T is the temperature, ρ is the density, C_p is the specific heat capacity, k is the thermal conductivity, and t is the time. Equation (3) describes the Gaussian distribution on the free surface of the powder (in W/m²), where r_o is the beam's radius, a is the absorbance derived from ray-tracing in the powder, P_laser is the laser power, and r is the radial distance from the laser beam center, described by Equation (4).
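Based on the symbol definitions above, Equations (2)-(4) are consistent with the following standard forms. This is a hedged reconstruction: the Gaussian normalization factor and the scan-speed term v in the moving-beam distance are assumptions, as the original equations are not reproduced in the text:

```latex
% Eq. (2): transient heat conduction in the powder/substrate domain
\rho C_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + Q
% Eq. (3): Gaussian flux on the free surface of the powder (W/m^2)
Q(r) = \frac{2\, a\, P_{\mathrm{laser}}}{\pi r_o^{2}} \exp\!\left( -\frac{2 r^{2}}{r_o^{2}} \right)
% Eq. (4): radial distance from the centre of the moving beam (v: assumed scan speed)
r = \sqrt{(x - v t)^{2} + y^{2}}
```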

Boundary and Initial Conditions
The initial temperature of the chamber is assumed to be equal to 293 K, as described in Equation (5).

Upper Free Side
The upper side involves convection and radiation heat transfer, due to the argon gas inside the chamber, and is described by Equation (6). The convection coefficient is assumed to be equal to 10 W/(m²·K) [61], and the coefficient of radiation emissivity is assumed to be temperature-dependent [62].

Left and Right Side
The left and right side involve only convection, as described by Equation (7).

Lower Side
As far as the lower side of the substrate is concerned, its modelling is described by Equation (8), implying that no interaction with the rest of the geometry is involved.
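The boundary conditions of Equations (6) and (7) can be sketched as a single flux function. The constant emissivity used here is an illustrative assumption (the paper treats emissivity as temperature-dependent):

```python
import numpy as np

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def boundary_flux(T, T_amb=293.0, h=10.0, emissivity=0.4, radiate=True):
    """Outgoing heat flux (W/m^2) on a free boundary.

    Sketch of Equations (6)-(7): Newton cooling with h = 10 W/(m^2 K)
    on all free sides, plus grey-body radiation on the upper surface.
    The emissivity value 0.4 is illustrative, not the paper's model.
    """
    q = h * (T - T_amb)                        # convection (Eq. (7))
    if radiate:                                 # upper free side (Eq. (6))
        q += emissivity * SIGMA_SB * (T**4 - T_amb**4)
    return q
```

At melt-pool temperatures the quartic radiation term dominates the convective one, which is why the upper surface is the only side where radiation is worth modelling.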
Vlasea et al. [63] identify the controllable process parameters in LPBF, such as the laser power, scan speed, beam diameter, and so forth. In this study, the peak temperature is considered to be the output signal, while the controllable process parameter is the laser power. Table 1 summarizes all the assumed process parameter values for the current simulation. Also, the material properties can be found in Table A1.

Systemic Modelling and Design of Controllers
The system identification provides the minimal state-space realization system G_p: (A, B, C, D).
The generalized plant is given by Equation (9): x(k + 1) = Ax(k) + B_1 w(k) + B_2 u(k), z(k) = C_1 x(k) + D_11 w(k) + D_12 u(k), y(k) = C_2 x(k) + D_21 w(k) + D_22 u(k), where x is the state, y is the measured output signal, u is the controller's output signal, w is the exogenous signal, and z is the performance signal. The dynamic output feedback controller is described by Equation (10), and its parameters {A_c, B_c, C_c, D_c} are calculated via the optimization problem. In this step, the controller is designed under the assumption of uncertainty in material properties, such as those of the powder in the LPBF. We assumed a variation of the nominal values of Ti-6Al-4V, −10% ≤ a_uncert ≤ 10%. To further elaborate, nine discrete values (five absolute levels) were selected in order to reduce the computational time of the FEM and the RMPC algorithm, namely −10%, −7.5%, −5%, −2.5%, 0% (nominal process), 2.5%, 5%, 7.5%, and 10%. Each simulation provides a response of the maximum temperature in the melt-pool and the input-output relationship, as described by an ARX model. Hence, eight different mathematical models occur in the uncertain process description, plus a single nominal one. Then, we considered the following linear parameter-varying (LPV) state-space form in order to synthesize the Robust MPC algorithm under the assumption of known uncertainties, from Hu and Ding [54], as shown in Equation (11); the controller has the form of Equation (10).
where A(k), B(k), C(k) ∈ Ω, and the polytope Ω containing all the uncertainties has the following form [64].
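The polytopic set Ω of Equation (11) can be sketched as a convex hull of vertex systems: any admissible A(k) is a convex combination of the matrices identified at the sampled material-property perturbations. The vertex matrices below are illustrative placeholders, not the identified models:

```python
import numpy as np

def polytopic_matrix(vertices, weights):
    """Convex combination of vertex matrices, A(k) = sum_i w_i * A_i.

    Sketch of the polytope Omega of Equation (11): the time-varying
    (A(k), B(k), C(k)) lie in the convex hull of the vertex systems
    identified at each material-property perturbation level.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "weights must form a convex combination"
    return sum(wi * Ai for wi, Ai in zip(w, vertices))

# Two illustrative third-order vertices (e.g. the -10% and +10% cases)
A_minus = 0.90 * np.eye(3)
A_plus = 0.95 * np.eye(3)
A_mid = polytopic_matrix([A_minus, A_plus], [0.5, 0.5])
```

The RMPC design then only needs LMI feasibility at the vertices; convexity extends the guarantee to every interior combination.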

Results and Discussion
The maximum temperature in the melt-pool area was observed as the KPI (controllable parameter), and it is shown in Figure 8. The peak temperature over time in the melt-pool reaches approximately 2780 K after a settling time of 140 µs with the current process parameters.

System Representation
Next, we calculated the outcomes of five different parametric system identifications. After trial and error with all the aforementioned techniques, we concluded on the ARX method (autoregressive systemic description with exogenous terms). A third-order system has been selected on the basis of the Bayesian Information Criterion (BIC) as well as the RSS/SSS criterion, having the lowest values in both metrics. All the estimated models are shown in Figure 9, while the selection of the order is shown in Figure 10 (the X and Y axes denote the model order and criterion value, respectively).
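The ARX fitting and BIC-based order selection can be sketched as below. This is a minimal least-squares formulation with equal numerator and denominator orders, assumed for illustration rather than the study's exact identification routine:

```python
import numpy as np

def fit_arx(y, u, order):
    """Least-squares fit of an ARX(order, order) model
    y[k] = sum_i a_i*y[k-i] + sum_i b_i*u[k-i].
    Returns the parameter vector and the residual sum of squares."""
    n = len(y)
    rows = []
    for k in range(order, n):
        # regressor: past outputs and past inputs, most recent first
        rows.append(np.concatenate([y[k - order:k][::-1], u[k - order:k][::-1]]))
    phi = np.array(rows)
    target = y[order:]
    theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
    rss = float(np.sum((target - phi @ theta) ** 2))
    return theta, rss

def bic(rss, n_samples, n_params):
    """Bayesian Information Criterion; the order with the lowest BIC wins."""
    return n_samples * np.log(rss / n_samples) + n_params * np.log(n_samples)
```

Sweeping `order` over candidate values and picking the minimum BIC reproduces the selection logic of Figure 10: the residual term rewards fit while the log-penalty discourages over-parameterization.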


Performance of the Initial Digital Twin
The optimization problem in LMIs for the LPBF improves tracking of the desired reference of 2800 K in comparison to the PID and the empirical initial model. The H-Infinity, H2, and observer-based controllers produced similar dynamic power signals. All controllers are assessed using typical step-response metrics, as depicted in Figure 11a, while Figure 11b presents the comprehensive results from all the developed controllers and the empirical power signal. The comprehensive results of the initial Digital Twin are presented in Table 2. To illustrate the contrast between the dynamic power profile and the constant one, the step-response metrics are investigated. The developed LMIs yield similar controllers according to the settling-time, rise-time, and overshoot metrics. The response of the proposed controllers obtained via convex optimization is clearly faster than that of the PID controller, with the settling and rise times of the latter being approximately twice those of the H-Infinity. In the literature, most researchers have assumed constant process parameters. For instance, the response of a single track, similar to this case study, has an estimated settling time of circa 250 µs with a constant power signal of 100 W [65]. Another report searches for the best candidate process parameters of the LPBF, such as the scanning speed and the laser beam diameter, while maintaining a constant power of 200 W [66].
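The step-response metrics used for the comparison in Table 2 can be sketched as follows, assuming the usual conventions of a 2% settling band and a 10–90% rise time:

```python
import numpy as np

def step_metrics(t, y, y_final=None, settle_band=0.02):
    """Settling time, 10-90% rise time, and percent overshoot
    of a sampled step response (t and y are 1-D arrays).

    Conventions (2% band, 10-90% rise) are the common textbook
    definitions and are assumptions here.
    """
    y_final = y[-1] if y_final is None else y_final
    # settling time: time of the last sample outside the +/-2% band
    outside = np.abs(y - y_final) > settle_band * abs(y_final)
    t_settle = t[outside.nonzero()[0][-1]] if outside.any() else t[0]
    # 10-90% rise time: first crossings of 10% and 90% of the final value
    t10 = t[np.argmax(y >= 0.1 * y_final)]
    t90 = t[np.argmax(y >= 0.9 * y_final)]
    # percent overshoot above the final value (zero if none)
    overshoot = max(0.0, (float(np.max(y)) - y_final) / y_final * 100.0)
    return t_settle, t90 - t10, overshoot
```

For a first-order lag with time constant τ, this yields the familiar values t_settle ≈ τ·ln(50) and rise time ≈ τ·ln(9), which is a convenient sanity check before applying it to the simulated temperature responses.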

Robustness of the LPBF DT
Next, the existence of variation in the material properties is assumed. Figure 12 shows the response of the peak temperature for all the uncertainties, normalized with respect to the nominal process in order to provide a comprehensive overview of the introduced uncertainty. The polytopic uncertainties vary from −10% to 10% of the nominal value of each material property in both phases, namely the density, the specific heat capacity, and the thermal conductivity. The blue solid line indicates the nominal process, which is slightly under 100% due to the empirical input data for tracking 2800 K in the FEM simulation; the colored dashed lines denote uncertainties within the range (0, 10]%, while the rest lie in the [−10, 0)% range. The tracking design has been empirical, directly imposed as a value in the FEM model in this case, so underachieving in tracking is expected.
Step response metrics (a); robust optimization solution based on LMIs in comparison with a fine-tuned PID controller (b).

Figure 13 shows the response and the controller's output for the polytopic closed-loop system. The algorithm took about 810 s (13.5 min) to solve all the LMIs and compute a single controller. All the responses clearly converge to the desired value of 2800 K, but the close-up of Figure 13 depicts the percentage variance of the responses. The blue line (nominal process) should sit at 100%, but the response from the FEM with the single controller from the RMPC algorithm is slightly below 98%. The uncertain parameters above 100% track well, with the maximum positive uncertainty of 10% reaching the desired tracking value with an error of 1%, as shown in Figure 13 with the black solid line. On the other side, processes with negative uncertainty parameters, below 100%, do not respond as well. The maximum negative value of −10% reaches an error of −5% from the tracking value. This controller underperforms for the nominal variation because it is a single controller optimizing the whole ensemble (all the subsystems of the polytopic system at once); thus, the overall performance is statistical and is not guaranteed for each case separately. A different approach would be to adopt a switched (varying per system) controller; however, the complexity of monitoring the defect in that case would be much larger, leading to a slightly different workflow as well.
To guarantee the method's applicability to different LPBF systems, less flexible laser sources are taken into consideration. Flexibility is used here in the sense that the laser power generated by the controller cannot be accurately reproduced. Consequently, a pulsed laser has been considered, and the power of its pulses is modulated to an amplitude that compensates for the loss of power due to the pulses' duty cycle. The normalized responses of the controlled process are given in Figure 14 for both cases.
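The duty-cycle compensation for the pulsed-laser case can be sketched as below: the controller's continuous power command is reproduced on average by scaling the pulse amplitude. The saturation limit `p_max` is an illustrative assumption, since real sources cap the peak pulse power:

```python
def pulse_amplitude(p_continuous, duty_cycle, p_max=None):
    """Pulse amplitude delivering the same average power as a
    continuous command: A = P / d, optionally saturated at p_max.

    Sketch of the duty-cycle compensation described for the pulsed
    laser; the saturation behaviour is an assumption for illustration.
    """
    amp = p_continuous / duty_cycle
    if p_max is not None:
        amp = min(amp, p_max)
    return amp

# e.g. a 150 W command at 50% duty cycle becomes 300 W pulses,
# since 300 W * 0.5 averages back to the commanded 150 W
```

When the saturation limit is hit, the average delivered power falls below the command, which is one source of the residual tracking variation seen in the pulsed-laser case.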
The laser-based manufacturing process control through the LMIs indeed enhances the response in comparison to the PID. Another aspect of an LMI-based controller is the difficulty of constructing the matrices, whereas a simple PID only needs calibrated gains, which are not optimal. However, it is worth adopting in industrial applications, especially since embedding this procedure in digital twin mechanisms such as that of Figure 4 reduces the difficulty [67] and complexity of synthesizing such a controller. As a matter of fact, in the case of the continuous laser, it has been quite promising, leading to robust manufacturing. The case of the pulsed laser, however, has been limited to laser power regulation. To this end, the variation of the result in time is evident. Small-scale regulation could enhance this output, even though it has been considered infeasible due to technological constraints. Furthermore, any phenomena of smaller scale would probably require further elaboration of the models, including extra mechanisms or different sources (i.e., ultrafast lasers). Several reports have shown that more conventional controllers are being used in the field of AM. Additionally, a NIST report indicates that using an FPGA system to fully control a PBF process is feasible [68].
Furthermore, researchers have already employed PI controllers in laser cladding on an embedded FPGA [31] and also in cases of commercial 3D printing [69]. Finally, the optimized parameters could also be transmitted to a real-time simulator running the identical process in order to predict, k steps ahead, the residual stresses or unwanted deformations.

Conclusions
The proposed robust controllers, obtained through LMI optimization, provide superior performance compared to a fine-tuned PID controller, especially under uncertainties in the material properties. Achieving robust additive manufacturing is not a straightforward task; nevertheless, the current digital twin framework is an attempt to successfully address the problem of robust process manufacturing. Besides reducing the process time and achieving temperature tracking, the improvement in process performance variation is promising. This type of controller is retrieved through convex optimization. The second goal of the current study has been to determine the tracking performance under material uncertainties of the powder in both phases (solid and liquid). The uncertain problem follows the same procedure as the first one (simulation, identification, design of robust control); however, a polytopic systemic description is required to retrieve a single state-space model. Both goals target the Digital Twin scheme for enhanced control through the proposed controllers for a rapid process response, while the DT identifies vital adjustments of the KPIs. The findings indicate preferable closed-loop performance in terms of settling and rise time in contrast to existing conventional methods and the open-loop simulation model with constant process parameters. However, energy efficiency is yet to be included in the relevant criteria.
Furthermore, as far as future work is concerned, the implementation of the algorithm in a CPS next to a machine, in order to assess all the relevant KPIs beyond efficiency and sustainability, is pending. Experimental validation may be carried out with an FPGA embedded in the AM machine, manipulating the desired process parameters under robust control theory via the LMI optimization problem. Finally, the regulation (control) of additional process parameters, such as motion parameters, is another direction for future work.
Author Contributions: Conceptualization, A.P.; software, C.K.M.; validation, P.S.; writing-original draft preparation, C.K.M. and A.P.; writing-review and editing, G.C. All authors have read and agreed to the published version of the manuscript.
Funding: This work is conducted under the framework of EU Project AVANGARD. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 869986. The dissemination of results herein reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains.

Conflicts of Interest:
The authors declare no conflict of interest.