Article

Prediction of Emission Characteristics of Generator Engine with Selective Catalytic Reduction Using Artificial Intelligence

1 Division of Marine Engineering, Korea Maritime and Ocean University, Busan 49112, Korea
2 Interdisciplinary Major of Maritime and AI Convergence, Korea Maritime and Ocean University, Busan 49112, Korea
3 Division of Marine System Engineering, Korea Maritime and Ocean University, Busan 49112, Korea
4 Division of Marine Information Technology, Korea Maritime and Ocean University, Busan 49112, Korea
* Authors to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2022, 10(8), 1118; https://doi.org/10.3390/jmse10081118
Submission received: 29 July 2022 / Revised: 8 August 2022 / Accepted: 12 August 2022 / Published: 13 August 2022
(This article belongs to the Special Issue Marine Alternative Fuels and Environmental Protection II)

Abstract

Eco-friendliness is an important global issue, and the maritime field is no exception. Predicting the composition of the exhaust gases emitted by ship engines is consequential in this respect. Therefore, in this study, exhaust gas data were collected from the generator engine of a real ship, along with engine-related data, to predict emission characteristics. Installing an emission gas analyzer on a ship places a substantial economic burden on the operator; moreover, even where one is installed, its accuracy can be improved with a virtual sensor. Furthermore, data were obtained both with and without operating the SCR (often mounted on ships to reduce NOx), a crucial facility for satisfying environmental regulations. Four types of datasets were created by adding cooling and electrical-related variables to the basic engine dataset to check whether they improve model performance; each of these datasets consisted of 15 to 26 input variables. CO2 (%), NOx (ppm), and tEx (°C) were predicted from each dataset using an artificial neural network (ANN) model and a support vector machine (SVM) model with optimal hyperparameters selected by trial and error. The results confirmed that the SVM model performed better than the ANN model on small datasets such as the one used in this study. Moreover, the dataset type DaCE, in which both cooling and electrical-related variables were added to the basic engine dataset, yielded the best overall prediction performance. When the performance of the SVM model was measured on the DaCE test data in both no-SCR mode and SCR mode, the RMSE (R2) of CO2 was between 0.1137% (0.8119) and 0.0912% (0.8975), the RMSE (R2) of NOx was between 17.1088 ppm (0.9643) and 13.6775 ppm (0.9776), and the RMSE (R2) of tEx was between 4.5839 °C (0.8754) and 1.5688 °C (0.9392).

1. Introduction

At the beginning of 2021, 99,800 ships were sailing worldwide to transport goods and energy, an increase of 3% compared to January 2020; from 1970 to 2020, international maritime trade steadily increased. Most of a ship's exhaust is generated by the marine main engine, the marine generator engine (G/E), and the boiler. The volume of greenhouse gases (GHG), including carbon dioxide (CO2), methane, and nitrogen oxides (NOx), increased from 977 million tons in 2012 to 1076 million tons in 2018, and the share of shipping in global anthropogenic emissions increased from 2.76% in 2012 to 2.89% in 2018 [1,2]. A positive allometric relationship has been observed between global fleet size and NOx emissions from shipping [3]; therefore, the amount of NOx will gradually increase, and its adverse effects cannot be ignored. NOx affects human health and the environment, in addition to depleting the ozone layer and causing acid rain [4]. Furthermore, shipping emissions contribute 1–14% of PM and 7–24% of NOx in European coastal areas [5]. Accordingly, the International Maritime Organization (IMO) imposes sanctions mandating the reduction of NOx, and emission control areas have been established in several regions. Ships have therefore started implementing fuel–water emulsion, exhaust gas recirculation (EGR), selective catalytic reduction (SCR), and common rail fuel injection to reduce PM and NOx. Alternative fuels such as biodiesel, methanol, and LNG are another option to reduce emissions, while ammonia and hydrogen are considered potential future fuels to meet the regulations [6,7]. However, even if such eco-friendly technologies are applied, it is impossible to know whether environmental regulations are satisfied or whether the equipment is operating normally unless emissions are monitored. Therefore, emission measurement is important from a mid- to long-term management perspective.
Compared to a ship, the power plant has a small number of funnels per unit of energy generated; hence, expensive equipment, such as a continuous emission monitoring system (CEMS) can be installed to check the emissions. However, even when such equipment is installed, an adequate measurement accuracy cannot be guaranteed owing to problems [8] such as low flow rates, leaks, sample condensation, different lengths of the flue gas sampling pipeline, and possibility of damage to or rapid aging of the equipment [9,10]. Therefore, machine learning (ML) studies using an artificial neural network (ANN) model [9], attention mechanism long short-term memory model [10], and least squares support vector machine (LS-SVM) model [9,11] were conducted to compensate for the drawbacks of power plant CEMSs in performing emission predictions.
In recent years, artificial intelligence (AI) has been applied to diverse fields, including maritime applications. Studies related to marine diesel engines have mainly focused on fault detection and diagnosis [12,13,14], performance analysis and prediction [15,16], and optimization [17,18]. Studies on other maritime machinery include prediction and monitoring of machinery system conditions [19,20], energy management systems [21], and fuel consumption [22].
In this study, we created an emission prediction model for a G/E, since the G/E is always running whether the ship is sailing or moored in port, making it a significant emission source. The pertinent literature on using ML to predict the emissions generated by a diesel engine was explored and is summarized in Table 1. However, ML studies on G/E emissions are very limited; most existing studies concern diesel engine emissions from vehicles or emissions of single-cylinder diesel engines at laboratory scale. Therefore, this study can be considered necessary and valuable for predicting the emissions of a marine generator engine. Moreover, considering that the industrial generator engines used as emergency generators in places such as power plants and buildings are of the same type as the G/E, this study can be applied in several fields of research.
Previous studies show that ANNs, which are currently in vogue, were the most widely used models for emission prediction, followed by support vector machines (SVMs). The SVM, which has a great advantage in solving small-sample regression problems, was presumably favored in many studies because installing emission measurement devices in laboratory experiments or on cars does not allow the collection of long-term big data [23].
Among the works in Table 1, four studies [24,25,26,27] predicted emissions of diesel engines running on nonconventional fuel oils. In this study, by contrast, marine gas oil (MGO), which is widely used under the 2020 IMO sulfur content regulation, was used. Similar to this study, the authors of [28] performed predictions of the performance and emissions of a diesel electric generator (DG). However, only an ANN model was used, with the number of input features limited to three, which might not be sufficient to reflect the various characteristics of the DG. In addition, only carbon-based emissions were predicted, and hyperparameter tuning, a process for finding an optimal model, was not implemented. Another study [29] similarly predicted performance and emissions using data generated from a verified thermodynamic model of a G/E; however, that study predicted NOx and soot, but not carbon-based emissions. In addition, only four input features were used, and simulation data were used instead of actual data. Such a method has drawbacks in terms of the time and energy required to generate simulation data when applied generally to numerous ships. Therefore, an approach that can be applied to all kinds of ships in a simple way is required.
Table 1. Reviewed articles related to engine exhaust emissions using AI.
Authors | Research Content | Field
[30] | Predicted the exhaust gas temperature and compared the results of four different algorithms, namely, ANN, random forest, SVM, and gradient boosting regression trees. | Not clearly stated
[23] | Studied the effects of the model parameters and the training sample size on the prediction accuracy of an SVM regression model for an HCNG engine. | Vehicle
[24] | Reviewed engine modeling based on statistical and ML methodologies, through response surface and ANN techniques, for various alternative fuels in both SI and CI engines. | Not clearly stated
[25] | Built an ANN to predict the engine performance and emission characteristics for different injection timings, using waste cooking oil as a biodiesel blended with diesel. | Not clearly stated
[26] | Analyzed the performance and emissions of a four-stroke SI engine operating on ethanol–gasoline blends with the aid of an ANN. | Vehicle
[27] | Investigated an ANN to predict SI engine performance and exhaust emissions for methanol and gasoline. | Vehicle
[28] | Modeled an ANN to predict CO2, CO/CO2 ratio, flue gas temperature, and gross efficiency in three-phase, 415 V DG sets of different capacities operated at different loads, speeds, and torques. | Industry
[29] | Investigated ANN and SVM based on a Taguchi orthogonal array owing to the limited experimental data available for CRDI-assisted G/E emissions prediction. | Maritime
[31] | Established a NOx emissions prediction model of a diesel engine for both steady and transient operating states, using an ensemble method based on principal component analysis, a genetic algorithm, and SVM. | Vehicle
[32] | Utilized an ANN algorithm with engine speed and load as the model inputs and fuel consumption and emissions as the model outputs; compared experimentally measured data and model predictions for forecasting engine efficiency and emissions. | Vehicle
[33] | Modeled performance and emission parameters of a single-cylinder four-stroke CRDI engine coupled with EGR by GEP and compared the results with those from an ANN model. | Not clearly stated
[34] | Explored the potential of an ANN to predict performance and emissions with load, fuel injection pressure, EGR, and fuel injected per cycle as inputs for a single-cylinder four-stroke CRDI engine under varying EGR strategies. | Not clearly stated
Since the installation of a marine emission analyzer on ships is not obligatory, shipping companies are unwilling to bear its high cost; moreover, even if one were installed regardless of cost, ML methods would still have to be applied to increase the prediction accuracy of the analyzer. Furthermore, emission measurements entail significant time and cost, and they suffer from disadvantages such as a harmful effect on the environment during the experiments. Therefore, it would be ideal to measure the emissions during the period when the performance report for each load of the G/E is being prepared, a task routinely performed by the engineers to check the condition of the G/E. Assuming that the ship is not equipped with an expensive engine emission analyzer, an emission prediction model can be created as a virtual sensor and distributed when the ship is built, and it would be desirable to update this model with regular measurements during the dry-dock period or when the ship is berthed at a port. This method can be easily implemented in next-generation smart generator engines equipped with data collection devices and is expected to be widely used as a measure to respond to environmental regulations. Therefore, in this study, we aimed to verify whether the aforementioned aspects can be leveraged by measuring the emissions while the G/E performance report was being compiled; moreover, we tested the model on data measured on a different day (before and after turbocharger overhauling).
In this study, we attempted to create models that can predict emissions with the SCR (an eco-friendly technology for reducing NOx emissions) in operation. To the best of our knowledge, ML studies on predicting the emissions of G/Es equipped with an SCR are scarce or nonexistent; thus, this research is meaningful and valuable for future studies on the emission prediction of generator engines. A novel aspect of this study is the improvement upon currently existing results by using and comparing ANN and SVM models; furthermore, 15 to 26 input features acquired from the ship's database were used directly to build the datasets. Additionally, the prediction results of the models were compared using datasets considering not only the variables of the G/E but also cooling and electrical-related variables. Lastly, we sought to demonstrate the superiority of the DaCE dataset, which adds cooling and electrical-related variables to the basic G/E dataset, through performance evaluation with the ANN and SVM models.

2. Materials and Methods

2.1. Description of Machinery and System

2.1.1. G/E and Cooling System

The specifications of the G/E used in this study are listed in Table 2. The G/E is a critical component that generates all the electricity required for the various machines and the occupants of a ship. The G/E receives oil from a fuel tank via a pump and sprays it through a set of nozzles into the engine cylinders. The fuel, thus atomized, mixes with the air supplied by a T/C and generates power from combustion; this power is then transmitted through the reciprocating motion of a piston. With this power, the crankshaft rotates, and the rotor of the generator connected to the crankshaft also rotates to produce electricity. The turbine wheel of the T/C receives the energy of the exhaust gas generated by the G/E and rotates the central shaft; thus, the compressor wheel of the T/C supplies air into the cylinder. An air cooler cooled by cooling water is installed between the T/C and the engine cylinder, thereby increasing the power output, reducing smoke, and improving the fuel consumption [35]. However, the viscosity of the LO, which lubricates the engine, is lowered by the heat generated from the engine; therefore, it passes through a lube oil cooler. In the past, sea water was supplied as a coolant to the air cooler and lube oil cooler. However, currently, most engines use a central cooling system, as it increases the operational window between the downtimes for cleaning the coolers; essentially, only the central cooling heat exchanger needs to be cleaned. As shown on the bottom left side of Figure 1, the central cooling heat exchanger exchanges heat between the seawater and freshwater, and this CFW is supplied to the cooler of each machine; in the case of the G/E, it is supplied to the air cooler and lube oil cooler.

2.1.2. SCR

The specifications of the SCR used in this study are listed in Table 3. SCR is an eco-friendly technology that can reduce NOx, as can be confirmed from Figure 2. The figure shows that NOx was reduced by up to 749 ppm; moreover, the reduction was greater after the turbocharger overhaul than before it. As shown in Figure 1, the SCR device consists of a urea supply unit, urea dosing unit, control unit, injection pipe, SCR chamber, and soot blowing unit. Urea solution is supplied from the urea tank to the urea dosing unit through the pump of the urea supply unit. The urea dosing unit supplies the urea solution to the injection pipe and is automatically activated by the control unit. Additionally, an automatic purging system is provided to clean the urea line between the urea dosing unit and the injection pipe when the SCR is stopped. The injection pipe atomizes the aqueous urea solution through the urea injection nozzle and static mixer and supplies it to the exhaust gas line. The SCR chamber consists of a catalyst layer. The urea solution is used as a reducing agent and is decomposed, as given by Equation (1), into ammonia and CO2 in the hot exhaust gas stream. The ammonia released from the aqueous urea solution is converted into nitrogen molecules and water by a chemical reaction with NOx on the surface of the catalyst layer, as given by Equation (2). The soot blowing unit sprays compressed air on the surface of the catalyst layer to remove dust and soot particles produced by the engine combustion process, thereby maintaining the NOx reduction performance.
(NH2)2CO + H2O → 2NH3 + CO2 (1)
4NO + 4NH3 + O2 → 4N2 + 6H2O; 6NO2 + 8NH3 → 7N2 + 12H2O (2)
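As a quick sanity check on the stoichiometry, Equations (1) and (2) imply that one mole of urea yields two moles of NH3 and that one mole of NH3 reduces one mole of NO, i.e., roughly half a mole of urea per mole of NO. The sketch below is a back-of-the-envelope calculation using molar masses only; it ignores the NO2 pathway, ammonia slip, and real dosing control, and is not part of the study's method:

```python
# Back-of-the-envelope urea demand implied by the SCR stoichiometry above.
# Assumption: NOx is treated as pure NO (first reaction of Equation (2)).
M_UREA = 60.06  # g/mol, (NH2)2CO
M_NO = 30.01    # g/mol

def urea_mass_per_no_mass(no_mass_g: float) -> float:
    """Mass of urea (g) needed to reduce a given mass of NO (g).

    Eq. (1): 1 mol urea -> 2 mol NH3
    Eq. (2): 4 mol NO + 4 mol NH3 -> 4 N2 + 6 H2O  => 1 mol NH3 per mol NO
    Hence 0.5 mol urea per mol NO.
    """
    mol_no = no_mass_g / M_NO
    mol_urea = 0.5 * mol_no
    return mol_urea * M_UREA

print(urea_mass_per_no_mass(1000.0))  # grams of urea per kilogram of NO
```

Because the molar mass of urea is almost exactly twice that of NO, the required urea mass is close to the NO mass itself.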

2.2. Description of Workflow

2.2.1. Overview of the Training Sequence

In this study, as shown in Figure 3, ship data and exhaust gas data were acquired on two different days, with an interval of 7 days between the acquisitions. One set was termed BTC and used as the test set, and the other was termed ATC and used as the training/validation set. In most cases, the G/E performance is measured while the G/E load is maintained at 0%, 25%, 50%, and 75%; in this study, the exhaust gas data were acquired as the load was increased from 25% to 50% and 75%; then, after activating the SCR, the emissions were measured while decreasing the load from 75% to 50% and 25%. These data were preprocessed by merging and filtering. Thus, input datasets for four cases were created on the basis of different variables to predict CO2, NOx, and tEx. The ATC was used for training until the loss functions of the ANN and SVM models were minimized through hyperparameter tuning. Subsequently, the performances of the models were measured using the BTC. Because a machine's performance worsens as the engine runs, the models were trained on the good case and tested on the relatively bad case. If the training and test sets had been made from a combination of BTC and ATC, there would have been a high possibility that the information of the test set would be diluted into the training set. Therefore, we evaluated the performance of the models with completely new, worse-condition data collected before the T/C overhaul, in order to mimic the scenario of developing and distributing a model at the time of shipbuilding and then predicting the emissions after some time has passed; this was the hypothesis established a priori. Python, a popular programming language in AI research, was used, and data analysis was performed in a Jupyter notebook. The open-source libraries used included Numpy, Pandas, Matplotlib, Seaborn, Scipy, Scikit-Learn, Tensorflow, and Keras.
Numpy and Pandas were used for data analysis; Matplotlib and Seaborn were used for data visualization; Scikit-Learn was used for data analysis, preprocessing, metrics, and AI model learning; Scipy was also used for metrics; lastly, Tensorflow and Keras were used to create the ANN.

2.2.2. Data Acquisition

Both the BTC and ATC were gathered using the method described in Section 2.2.1. As shown in Figure 1, the exhaust gas data were collected for each load, while creating the G/E performance report, by attaching the gas sampling probe of the exhaust gas analyzer to the IMO flange at the top of the exhaust gas pipe of the G/E. The fuel used was MGO, and the fuel type for the initial setting of the exhaust gas analyzer was set by referring to the specifications listed in Table 4. After raising or lowering the load of the G/E at each stage, the engine was stabilized for approximately 10 min, and then the exhaust gas data were collected every 1 s for 10 min. The collected data were saved to the local computer as an Excel spreadsheet through the acquisition software after all measurements were completed. The accuracy of the exhaust gas analyzer is given in Table 5. The manufacturer of the gas analyzer states that the accuracy for flue gases satisfies MARPOL Annex VI and the NOx Technical Code; Appendix 4 (calibration of the analytical and measurement instruments) of the NOx Technical Code 2008 states that the true concentration of a calibration and span gas shall be within ±2% of the nominal value.
The ship data were automatically saved; therefore, after the measurements, the MS Access file was extracted from the database and converted into an Excel file. The data comprised 8640 samples and 1000 features, and it contained all the featured data of the ship over a 24 h period at 10 s intervals. Therefore, only the G/E-related features during the measurement were selected, and the 1000 features were compressed to 15–26 features.

2.2.3. Data Preprocessing and Dataset Generation

The variables used in this study and their ranges are summarized in Table 6. In the case of exhaust gas data, CO2 (%), NOx (ppm), and funnel exhaust gas temperature (tEx, °C) of the G/E were selected as the values to be predicted. However, the values were unstable for approximately 20 s initially; therefore, only the data gathered subsequently were used. The ship data had a 10 s interval for each sample; therefore, the exhaust gas data were also arranged in 10 s intervals to coincide with the ship data, and these two were merged with each other to create the complete dataset. Among the G/E-related features, constant or unnecessary variables were deleted. In addition, each of the six cylinder exhaust gas temperature variables and each of the two T/C exhaust gas inlet temperature variables had a high correlation; therefore, it was difficult to determine the effect of those variables in isolation, and the significance of specific variables may be lost. Therefore, average values of the variables were created and substituted for the individual variables of the cylinder exhaust gas temperature and T/C exhaust gas inlet temperature. Lastly, a pure G/E dataset for the no-SCR mode, named Da-sx, was created. Here, sx means no-SCR mode.
The CFW, after exchanging heat in the central cooler with the seawater, was supplied to the air cooler, and dense air was supplied into the cylinder, thus affecting the exhaust gas; therefore, three variables related to the CFW were selected to generate DaC-sx. The variable “LT CFW temperature difference” represents the difference between the inlet and outlet temperatures of the CFW flowing through the air cooler and lube oil cooler of the G/E, and “central CFW cooler temperature difference” represents the difference in the temperatures of the CFW between inlet and outlet of the central cooler.
Moreover, in addition to the mechanical process variables, the values of the electrical-related variables also change according to the G/E load, even though they are not directly affected by the exhaust gas. Therefore, six variables were selected to generate DaE-sx. In this dataset, two average variables were created to reduce redundancy: one was the average temperature of the R, S, and T phases of the alternator windings, and the other was the average G/E current in the R, S, and T phases. Lastly, DaCE-sx was created by adding both cooling and electrical-related variables to Da-sx.
To create the SCR mode dataset, only two variables, "G/E load to G/E SCR" and "urea injection", were added to the set of variables of the no-SCR mode dataset; the designator "so" was appended to the dataset name instead of "sx". As an example, the correlation heatmap of the dataset Da-sx, wherein the correlation values vary from −1 to 1, is shown in Figure 4. A value closer to −1 indicates a stronger negative correlation, while a value closer to 1 indicates a stronger positive correlation. In this figure, the names of the variables are abbreviated; however, they are exactly the same as those described under "common data for all datasets" in Table 6. From the heatmap, it can be seen that the target values of CO2, NOx, and tEx were strongly positively correlated with the LO inlet temperature, the average cylinder exhaust gas outlet temperature, the average T/C exhaust gas inlet temperature, and the charge air outlet temperature. Therefore, it can be assumed that the DaC dataset would help increase the model prediction performance, because the CFW variables of DaC affect the lube oil cooler and air cooler and consequently the temperatures of the LO, charge air, and exhaust gas. However, the other pressure variables were inversely correlated with the target values, except for the charge air inlet pressure. This can be understood by considering that the target values increased as the engine load increased: as the load increased, the heat generated by the engine increased, the viscosity of the working fluids decreased, and hence their pressures decreased. Conversely, the T/C, driven by the heat of the exhaust gas, increased the charge air pressure as the load increased.
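A heatmap in the style of Figure 4 can be produced with Seaborn. The frame below uses random placeholder data and invented column names purely to show the mechanics, not the study's variables:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Placeholder data; column names are illustrative only.
df = pd.DataFrame(
    np.random.rand(100, 4),
    columns=["LO_in_T", "charge_air_out_T", "CO2", "NOx"],
)
corr = df.corr()  # Pearson correlation matrix, values in [-1, 1]

sns.heatmap(corr, vmin=-1, vmax=1, cmap="coolwarm", annot=True)
plt.tight_layout()
plt.savefig("corr_heatmap.png")
```

Fixing `vmin`/`vmax` to −1 and 1 keeps the color scale comparable across the different dataset variants.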
After extracting 75% of the data for the training set and 25% for the validation set from each load, i.e., 25%, 50%, and 75%, for both the no-SCR mode and the SCR mode of the ATC, a total of eight different types of datasets were created, as shown in Figure 3. These were combined to create 16 datasets, including eight training sets and eight validation sets, each with an even distribution of the samples of each load. The training set was used to train the model, while the validation set was used to prevent overfitting of the trained model and to tune the hyperparameters. Lastly, the performance of the model was evaluated on the test set. The reason for assigning a ratio of 75% to 25% to the training and validation sets is that this study used a small dataset, and using a much larger proportion for the training set could lead to inconsistent results, as indicated in [36]. In addition, when dividing the training and validation sets to compare the performance of different datasets in the same model, the random seed was fixed so that the same rows of data were obtained. In the case of the eight BTCs, this division into training and validation sets was not performed because the entire dataset served as a test set. Before training the ANN and SVM, standard scaling, one of the normalization methods, was used to improve the learning performance of the models; its formula is shown in Equation (3). Here, the mean of the data is made zero and the variance is made one by subtracting the mean and scaling to unit variance.
z = (x − μ)/σ, (3)
where x is the training sample, μ is the mean of the training samples, σ is the standard deviation of the training samples, and z is the standardized training sample.
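The per-load 75%/25% split with a fixed random seed, followed by the Equation (3) scaling fitted on the training set only, could be sketched as follows. The data are synthetic, and scikit-learn's `StandardScaler` implements exactly z = (x − μ)/σ:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 40 samples at each of the three load steps.
df = pd.DataFrame({
    "load": np.repeat([25, 50, 75], 40),
    "x1": np.random.rand(120),
    "x2": np.random.rand(120),
    "NOx": np.random.rand(120),
})

# 75/25 split, stratified by load so each load is evenly represented;
# a fixed random_state gives every dataset variant the same rows.
train, valid = train_test_split(
    df, test_size=0.25, random_state=42, stratify=df["load"]
)

scaler = StandardScaler()                            # z = (x - mu) / sigma
X_train = scaler.fit_transform(train[["x1", "x2"]])  # fit on training data only
X_valid = scaler.transform(valid[["x1", "x2"]])      # reuse training mu, sigma
print(X_train.shape, X_valid.shape)
```

Fitting the scaler on the training set alone and reusing its μ and σ for the validation (and test) sets avoids leaking information from held-out data into training.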

3. Theory/Modeling

3.1. Artificial Neural Network

The ANN is an ML model inspired by the biological neural network (BNN). The ANN structure was first introduced by neurophysiologist Warren McCulloch and mathematician Walter Pitts in 1943 [37]. Subsequently, the simplest ANN structure, called the perceptron, was proposed by Frank Rosenblatt in 1957 [38]. A description of the operation of an artificial neuron, modeled on a biological neuron, is shown in the upper right of Figure 5.
A biological neuron is composed of a cell body containing a nucleus, dendrites, an axon, and axon terminals. By analogy, in an artificial neuron, the input data, multiplied by weights, are delivered to the dendrites, and the weighted sum of the inputs plus a bias is passed to an activation function in the cell body. The result then flows along the axon and is emitted from the axon terminals as output data. These output data are transferred as input data to the dendrites of other neurons, and a network of such neurons is called an ANN.
However, the perceptron is composed of a single layer and cannot solve nonlinearly separable problems such as XOR, a limitation highlighted by Marvin Minsky and Seymour Papert [39]; the MLP, in which several perceptrons are stacked, overcomes it. The MLP is composed of an input layer, several hidden layers, and an output layer; it is also called a DNN, and its structure is shown in Figure 5. The initially proposed MLP suffered from the disadvantage that its weights could not be learned, a problem solved by the backpropagation training algorithm introduced by David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986 [40]. In backpropagation, the network first computes in the forward direction, from the input of the input layer to the output of the output layer, storing the intermediate values for the backward pass. The error between the output and target values is then calculated, and the error gradient with respect to the weights between the layers is obtained by applying the chain rule, propagating from the output layer back to the input layer, which is why it is called backpropagation. Finally, gradient descent adjusts the weights to reduce the error; this process is repeated until the cost function is minimized, and each iteration is called an epoch.
Several hyperparameters exist in the design of an ANN, e.g., the number of hidden layers, the number of nodes in each layer, the activation function applied to each node, the kernel initializer for weight initialization, the optimizer used to train the model, the learning rate, the batch size, and the number of epochs. However, there is no single correct method for selecting hyperparameters; therefore, the model was designed by trying various combinations of hyperparameters according to the characteristics of the data to find the optimal result. After several trial-and-error attempts, a model with a structure of 64-32-16-8-4-1, where each number represents the number of nodes in the hidden and output layers, was designed, as shown in Figure 6. Three models were built for each dataset to predict CO2, NOx, and tEx separately. As the activation function, Swish, proposed by the Google Brain team as a variant of ReLU, was used; its formula is given by Equation (4) [41]. This function is calculated by multiplying the sigmoid by the variable x, and it has a shape similar to that of ReLU.
However, as Swish allows negative values, it has the advantage of avoiding the dying-ReLU problem, in which negative values are replaced with zeros. To address the problem of vanishing or exploding gradients, kernel initializers were proposed [42]; among the various available kernel initializers, He-uniform, in which the weights follow a uniform distribution within the range of Equation (5), was used to randomly initialize the weights of each layer. Here, fan_in is the number of input units in the weight tensor [43]. Nadam, which applies the Nesterov method to Adam, was used as the optimizer; Nadam updates the weights as shown in Equation (6). A detailed explanation of this process is beyond the scope of this study; readers are referred to [44] for additional details. In addition, because the model must make predictions on new data, dropout with a ratio of 10% was applied to every hidden layer for generalization and to prevent overfitting [45]. The hyperparameters used in designing the ANN model are summarized in Table 7.
$$f(x) = \frac{x}{1 + e^{-x}}. \quad (4)$$
$$\left[-\mathrm{limit},\, +\mathrm{limit}\right], \quad \text{where } \mathrm{limit} = \sqrt{\frac{6}{\mathrm{fan_{in}}}}. \quad (5)$$
$$\theta_t \leftarrow \theta_{t-1} - \frac{\alpha}{\sqrt{\hat{n}_t} + \epsilon}\,\hat{m}_t. \quad (6)$$
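For concreteness, Equations (4) and (5) can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation; the function names are ours.

```python
import math
import random

def swish(x: float) -> float:
    """Swish activation, Eq. (4): x multiplied by the sigmoid of x."""
    return x / (1.0 + math.exp(-x))

def he_uniform_limit(fan_in: int) -> float:
    """Half-width of the He-uniform range, Eq. (5): sqrt(6 / fan_in)."""
    return math.sqrt(6.0 / fan_in)

def init_weights(fan_in: int, fan_out: int) -> list:
    """Draw a fan_in x fan_out weight matrix uniformly from [-limit, +limit]."""
    limit = he_uniform_limit(fan_in)
    return [[random.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]
```

For the first hidden layer of the 64–32–16–8–4–1 structure, `init_weights(n_inputs, 64)` would initialize the weights, with `n_inputs` being the number of input variables of the chosen dataset.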

3.2. SVM

The SVM was developed by Vapnik and colleagues at AT&T Bell Laboratories [46,47,48,49,50,51] and remains one of the most popular ML models to date. The SVM can be used both for classification (SVC) and for regression (SVR), as illustrated in Figure 7. The SVM works well on small- to medium-sized datasets and has been utilized in many studies owing to its good performance [52,53,54]. Because this study also deals with a small dataset, the SVM was chosen as one of the models. The basic concept is that the SVC learns to maximize the width of the "road" between the classes within the error margin, whereas the SVR learns to fit as many samples as possible on the road. The road is the sum of the margins, i.e., the distances from the decision boundary to the support vectors. The SVM solves nonlinear problems through linear regression by mapping low-dimensional data to a high-dimensional space using a kernel function. Four representative kernel functions are given by Equations (7)–(10); in this study, the commonly used RBF kernel was employed. Here, $x_i$ and $x_j$ are input vectors, and γ (gamma) is a hyperparameter that acts as a regularization term: it should be decreased when the model is overfitted and increased when the model is underfitted. Assuming that $x_i$ is the input data, $w$ is the weight vector, $b$ is a scalar, and $f(x_i)$ is the output of the SVM in Equation (11), the SVM finds the optimal linear regression function by minimizing the loss function of Equation (12). $\xi_i^+$ and $\xi_i^-$ are slack variables for each sample, which determine how much the margin is violated by the i-th sample; ε (epsilon) is the margin. However, the loss function contains two conflicting objectives: one is to make $\frac{1}{2}w^T w$ small so as to increase the margin, and the other is to make $\xi_i^+ + \xi_i^-$ small so as to minimize the margin error.
Therefore, the sum of the slack variables is multiplied by the hyperparameter C, which acts as a penalty factor [55]. The SVM thus has several hyperparameters, such as C, ε, and γ. Because no single set of hyperparameter values is optimal for all problems, the best-performing values are commonly found through a grid search. In this study, C = 100, ε = 0.0005, and γ = 0.01 were selected for the no-SCR mode, and C = 10, ε = 0.01, and γ = 0.002 for the SCR mode. A detailed description of SVM theory can be found in [56,57,58].
-Linear:
$$K(x_i, x_j) = x_i^T x_j. \quad (7)$$
-Polynomial:
$$K(x_i, x_j) = \left(\gamma x_i^T x_j + r\right)^d, \quad \gamma > 0. \quad (8)$$
-RBF:
$$K(x_i, x_j) = \exp\left(-\gamma \lVert x_i - x_j \rVert^2\right), \quad \gamma > 0. \quad (9)$$
-Sigmoid:
$$K(x_i, x_j) = \tanh\left(\gamma x_i^T x_j + r\right), \quad \gamma > 0. \quad (10)$$
$$f(x_i) = w^T x_i + b. \quad (11)$$
$$\text{Minimize } \frac{1}{2} w^T w + C \sum_{i=1}^{n} \left(\xi_i^+ + \xi_i^-\right), \quad \text{subject to } -\varepsilon - \xi_i^- \le y_i - f(x_i) \le \varepsilon + \xi_i^+, \quad \xi_i^+, \xi_i^- \ge 0. \quad (12)$$
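As an illustration, the RBF kernel of Equation (9) can be written directly in Python; the helper name is ours, and the commented scikit-learn calls are shown only as a plausible way to configure models with the hyperparameter values quoted above.

```python
import math

def rbf_kernel(x_i, x_j, gamma: float) -> float:
    """RBF kernel, Eq. (9): exp(-gamma * ||x_i - x_j||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-gamma * sq_dist)

# With scikit-learn, the models described in the text would be configured
# roughly as follows (illustrative, not the authors' exact code):
# from sklearn.svm import SVR
# svr_no_scr = SVR(kernel="rbf", C=100, epsilon=0.0005, gamma=0.01)
# svr_scr    = SVR(kernel="rbf", C=10,  epsilon=0.01,   gamma=0.002)
```

Note that the kernel equals 1 when the two inputs coincide and decays toward 0 as their distance grows, with γ controlling how fast.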

3.3. Performance Measurement Metrics

In this study, to evaluate the performance of the models, the metrics RMSE, MAE, MAPE, R2, and $r_{y\hat{y}}$ were adopted; their formulae are given by Equations (13)–(17), respectively. Because the data were standardized when training both the ANN and the SVM, the predicted values were inverse-transformed as shown in Equation (18). Then, the model performance was assessed by comparing the predicted values ($\hat{y}_i$) with the actual values ($y_i$) using the abovementioned metrics.
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}. \quad (13)$$
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left|y_i - \hat{y}_i\right|. \quad (14)$$
$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left|\frac{y_i - \hat{y}_i}{y_i}\right|. \quad (15)$$
$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2}. \quad (16)$$
$$r_{y\hat{y}} = \frac{\sum_{i=1}^{n} \left(y_i - \bar{y}\right)\left(\hat{y}_i - \bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2 \sum_{i=1}^{n} \left(\hat{y}_i - \bar{\hat{y}}\right)^2}}. \quad (17)$$
$$x = z \cdot \sigma + \mu. \quad (18)$$
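A minimal sketch of Equations (13)–(18) in plain Python (the helper names are ours, not the authors'):

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE, R2, and Pearson r, Eqs. (13)-(17)."""
    n = len(y_true)
    rmse = math.sqrt(sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(y - p) for y, p in zip(y_true, y_pred)) / n
    mape = 100.0 / n * sum(abs((y - p) / y) for y, p in zip(y_true, y_pred))
    y_bar = sum(y_true) / n
    p_bar = sum(y_pred) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    ss_tot = sum((y - y_bar) ** 2 for y in y_true)
    r2 = 1.0 - ss_res / ss_tot
    cov = sum((y - y_bar) * (p - p_bar) for y, p in zip(y_true, y_pred))
    den = math.sqrt(sum((y - y_bar) ** 2 for y in y_true) *
                    sum((p - p_bar) ** 2 for p in y_pred))
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2, "r": cov / den}

def inverse_standardize(z: float, sigma: float, mu: float) -> float:
    """Eq. (18): map a standardized prediction back to original units."""
    return z * sigma + mu
```

Predictions are first passed through `inverse_standardize` and then compared against the actual values with `regression_metrics`.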

4. Results and Discussion

4.1. Comparison of Emission Predictions of ANN and SVM Models for the No-SCR Mode

4.1.1. Comparison of Dataset Types for the No-SCR Mode

Da-sx was generated by merging the G/E data and the exhaust gas data; DaC-sx and DaE-sx were generated by adding cooling- and electrical-related variables to Da-sx, respectively; DaCE-sx was generated by adding both the cooling and the electrical variables to Da-sx. In the case of the ANN, the predictions differed slightly for each trained model owing to random weight initialization. Therefore, 15 training runs were performed to create 15 different models for each emission characteristic, and each model was evaluated on the validation set to check the influence of the different datasets. Table 8 summarizes the results of each dataset for the 64–32–16–8–4–1 ANN model with optimal hyperparameters. The values shown in Table 8a are the averages of the metrics computed on the validation set over the 15 models trained on the training set. In the case of CO2, the error was almost negligible, as reflected by the RMSE (MAE) value of less than 0.0461% (0.0338%); this is almost the same across the other tables. Therefore, NOx and tEx were the main concerns in this study, because excellent CO2 prediction performance was already confirmed. For NOx, the RMSE (MAE) values decreased by 0.6159 ppm (0.6812 ppm), 1.7881 ppm (1.3073 ppm), and 1.2385 ppm (1.0419 ppm) for DaC-sx, DaE-sx, and DaCE-sx, respectively. In the case of tEx, unlike NOx, DaCE-sx yielded the largest RMSE (MAE) reduction relative to Da-sx, of 1.5326 °C (1.084 °C), and DaC-sx and DaE-sx also showed better performance than Da-sx. Therefore, DaCE-sx can be considered the best choice, because its differences from DaE-sx for CO2 and NOx were not large, as listed in Table 8a. In Table 8b, it can be seen that, when the cooling parameters were added, the prediction performance for NOx and tEx on the test dataset BTC decreased slightly.
However, when DaCE-sx was compared with Da-sx, the RMSE (MAE) decreased by 5.5982 ppm (4.1466 ppm), indicating that the NOx prediction performance was the best. For CO2 and tEx, the lowest RMSE (MAE) was obtained with DaE-sx; however, it differed only slightly from that of DaCE-sx. Therefore, DaCE-sx offered the best prediction with the ANN in the no-SCR mode; the same can be confirmed with other metrics, such as MAPE, R2, and Pearson r. Furthermore, the superiority of DaCE-sx can be verified by referring to Table 8c and Figure 8. In Figure 8, the data points of DaCE-sx lie closer to the best prediction line than those of the other datasets. For tEx in particular, it can be clearly confirmed that the prediction performance improved when the cooling and electrical variables were added. The cooling and electrical variables added to the Da dataset changed in proportion to the engine load, as did the emissions and tEx. Therefore, the DaCE dataset, which has a high correlation with the target variables, generally showed good performance.
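The 15-run averaging procedure used to neutralize random weight initialization can be sketched as follows; `train_and_eval` stands in for a full training-plus-validation run and is a hypothetical callable returning a metric dictionary.

```python
import statistics

def average_over_runs(train_and_eval, n_runs: int = 15) -> dict:
    """Train n_runs independently initialized models and average each
    validation metric, as done for the 15 ANN models per target."""
    results = [train_and_eval(seed) for seed in range(n_runs)]
    return {name: statistics.mean(run[name] for run in results)
            for name in results[0]}
```

Averaging metric dictionaries over 15 seeds in this way mirrors how the averaged values reported in Table 8a were obtained.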
As shown in Table 9a, in the case of NOx, the RMSE (MAE, MAPE) of DaE-sx was the lowest at 5.9282 ppm (4.7832 ppm, 0.5672%); however, it differed only slightly from that of DaCE-sx. In the case of tEx, DaCE-sx yielded the best performance, with the lowest RMSE (MAE, MAPE) of 0.4150 °C (0.3143 °C, 0.1324%). When the electrical-related parameters were added, the R2 and Pearson's r were 1.0. As shown in Table 9b, the RMSE (MAE, MAPE) of NOx was the largest with Da-sx, at 45.9671 ppm (39.2833 ppm, 4.7777%). However, the predictions with DaC-sx and DaE-sx revealed that the RMSE (MAE, MAPE) was dramatically reduced, by 17.171 ppm (14.3075 ppm, 1.8415%) and 30.2676 ppm (25.7416 ppm, 3.2877%), respectively. As shown in Figure 9, the data points of DaE-sx and DaCE-sx clustered along the best prediction line, whereas those of Da-sx and DaC-sx did not. In conclusion, with the SVM model, the prediction results of DaE-sx for CO2, NOx, and tEx were superior to those of the other datasets.

4.1.2. Comparison of Model Performances with the No-SCR Mode

For the ANN model, owing to random weight initialization, the predictions differed slightly for each trained model. Therefore, a total of 15 models were generated through 15 training sessions. By testing these models on the validation set, the model with the lowest RMSE (MAE) was selected; its performance was then measured on the test dataset BTC, and the results were compared with those of the SVM model. As shown in Table 8c, DaCE-sx had the best performance with the ANN model, apart from CO2, for which the differences were negligible. Referring to Figure 10, the error in CO2 prediction with the ANN model was almost zero. In the case of tEx, the red line indicating the real values did not overlap with the predicted values, except for DaCE-sx, whose error was almost zero. In the case of NOx, the error was smallest for DaCE-sx; therefore, the orange line overlapped the royal blue line well. In addition, the values for each load can be seen by referring to the NOx graph, which shows a step-like change. In Figure 11, the error of DaE-sx for all the target variables with the SVM model was close to zero compared to the other datasets. From the prediction results of DaCE-sx with the ANN model and DaE-sx with the SVM model, the RMSE (R2) values were 0.1162% (0.8036) and 0.0901% (0.8819), respectively, for CO2; 19.1140 ppm (0.9720) and 15.6995 ppm (0.9811), respectively, for NOx; and 4.2768 °C (0.9471) and 4.1195 °C (0.9509), respectively, for tEx, indicating that the SVM model was superior to the ANN model.

4.2. Comparison of Emission Predictions of ANN and SVM Models with the SCR Mode

4.2.1. Comparison of Dataset Type with the SCR Mode

As presented in Table 10a, which lists the results of the ANN model when the SCR was operational, the dataset DaE-so yielded RMSE (MAE) values of 13.0547 ppm (9.4762 ppm) and 0.3538 °C (0.2760 °C) for NOx and tEx, respectively, indicating the best prediction performance. For CO2, DaCE-so exhibited the best performance. As can be seen from Table 10b, DaC-so performed best in predicting CO2 and tEx in terms of the RMSE, R2, and Pearson r. In the case of NOx, Da-so, the most basic of the datasets, yielded the best performance. However, the differences in the metrics for CO2 and tEx were insignificant among the datasets. In the case of NOx, the difference in RMSE (MAE) between DaCE-so and Da-so was 0.4879 ppm (0.0301 ppm), which is not considered significant. Referring to Table 10c, the differences in CO2 and tEx between the datasets were not large. However, as shown in Figure 12, DaC-so-test and Da-so-test performed best in predicting CO2 and tEx, respectively. In the case of NOx, DaCE-so-test had the smallest RMSE (MAE), at 15.1043 ppm (12.5964 ppm); referring to Figure 12, its data points lie closer to the best prediction line than those of the other datasets.
For the SVM model, from Table 11 and Figure 13, it can be observed that DaCE-so yielded the best performance for all metrics in terms of CO2, NOx, and tEx. In Figure 13, when the value of CO2 was 6.0%, it can be seen that the data points of Da-so and DaC-so were far from the best prediction line compared to DaE-so and DaCE-so. In the prediction of NOx, when it was 300 ppm, the data points of DaCE-so were most concentrated along the best prediction line. When predicting tEx, data points near 380 °C were best distributed along the best prediction line for DaCE-so.

4.2.2. Comparison of Model Performance in the SCR Mode

As shown in Table 10c, in the ANN model, CO2, NOx, and tEx performed best in DaC-so-test, DaCE-so-test, and Da-so-test, respectively. From Figure 14, it can be observed that the error of DaCE-so with the ANN model was small compared to that of the other datasets. Although CO2 and tEx performed better in other datasets, the graph shows that the difference was negligible. In Figure 15, it is evident that, in the SCR mode, the SVM model predictions were superior for all the datasets. Even though it is difficult to assess them visually, a comparison of the RMSE (R2) values shows that DaCE-so was the best. Furthermore, in Figure 15, it can be intuitively seen that the area between the DaCE-so error line and the zero-value baseline was the smallest. Lastly, considering that the prediction errors of CO2 and tEx among the datasets were insignificant, the performances of the ANN and SVM models were compared for DaCE-so. The RMSE (R2) values for the ANN and SVM were 0.1138% (0.8404) and 0.0912% (0.8975), respectively, for CO2; 15.1043 ppm (0.9565) and 13.6775 ppm (0.9643), respectively, for NOx; and 1.9636 °C (0.8049) and 1.5688 °C (0.8754), respectively, for tEx, indicating that the SVM model was superior to the ANN model.

5. Conclusions

To predict the emissions and exhaust gas temperature with and without an SCR for a G/E, the exhaust gas variables were measured and ship data were collected while documenting the G/E performance. Most previous studies predicted emissions or performance using only the engine variables as input data. However, the CFW supplied from outside flows through the engine and is directed to an air cooler; therefore, the CFW affects the charge air and eventually the exhaust gas, and a dataset called DaC was created in this study. In addition, another dataset, DaE, was created by adding the electrical variables proportional to the engine load; lastly, the dataset DaCE was created by adding both the cooling and the electrical variables. For the predictions in the SCR mode, the amount of urea injected and the percentage of G/E load were added to the four datasets created above. To compare model performances, after several trial-and-error attempts, an ANN model with the 64–32–16–8–4–1 structure and optimal hyperparameters and an SVM model using the RBF kernel trick were created. They were trained with the data acquired after the T/C overhaul, verified with the validation set, and tested with the data acquired before the T/C overhaul.
From the prediction results, it can be observed that NOx varied more between the datasets than CO2 and tEx; therefore, the analysis focused mainly on NOx prediction performance. When comparing the datasets in the no-SCR mode with the ANN model, the validation-set results in Table 8a show that DaE had excellent predictive performance for CO2 and NOx. However, its NOx prediction performance did not differ significantly from that of DaCE, and the prediction of tEx was better with DaCE. From DaC in Table 8b and DaE in Table 8c, it can be seen that the NOx prediction performance was inferior to that of Da; however, the best performance was achieved with DaCE. In the SVM model, as shown in Table 9, DaE performed best, with no significant difference from DaCE.
For the ANN model with the SCR mode, DaE performed the best with the validation set. Table 10b shows that Da had the best performance; however, there was no significant difference from that of DaCE. From Table 10c, it can be observed that the performance of DaE was inferior to that of Da; however, DaCE yielded the best performance. In the case of the SVM model, as shown in Table 11, DaCE exhibited the best performance. The performances of DaC and DaE were worse than that of Da in some cases; however, overall, DaCE generally yielded good performances. Even if the performance was worse than that of other datasets in some cases, the differences were insignificant; thus, it is preferable to use DaCE.
In conclusion, the evaluation of model performance shows that the SVM outperformed the ANN regardless of whether the SCR was operated, confirming that the SVM is powerful on small datasets. When the performance of the SVM model was measured using the test data of DaCE in both the no-SCR and SCR modes, the RMSE (MAPE, R2) of CO2 was between 0.1137% (1.2924%, 0.8119) and 0.0912% (0.7636%, 0.8975), the RMSE (MAPE, R2) of NOx was between 17.1088 ppm (5.7718%, 0.9643) and 13.6775 ppm (1.7120%, 0.9776), and the RMSE (MAPE, R2) of tEx was between 4.5839 °C (1.1871%, 0.8754) and 1.5688 °C (0.3061%, 0.9392). Accordingly, satisfactory results can be obtained when the DaCE dataset and the SVM are used. Therefore, when applying ML as a virtual sensor for engine emission measurement on a ship equipped with a data collection device, the SVM is the desirable choice, because the amount of data is initially insufficient. However, as the amount of collected data increases over time, the prediction performance is expected to improve further, and the ANN or other ML models can also be considered.
As a follow-up study, AI modeling based on data acquired under long-term operating conditions is required. In addition, to satisfy Tier III, a strict NOx regulation for marine engines, it is also necessary to study an engine equipped with EGR, the only available alternative to SCR.

Author Contributions

Conceptualization, M.-H.P., J.-H.C., J.-J.H. and W.-J.L.; methodology, M.-H.P., C.-M.L., A.J.N., H.-J.J. and J.-H.C.; data curation, M.-H.P., C.-M.L., A.J.N., H.-J.J. and J.-J.H.; software, M.-H.P.; visualization, M.-H.P.; supervision, J.-H.C., J.-J.H. and W.-J.L.; project administration, W.-J.L.; writing—original draft, M.-H.P., J.-J.H. and W.-J.L.; writing—review and editing, M.-H.P., J.-J.H. and W.-J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the ‘Autonomous Ship Technology Development Program (K_G012001614002)’ funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) and the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2022R1F1A1073764).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

ATC — After turbocharger overhaul data
BN — Biological neuron
BNN — Biological neural network
BTC — Before turbocharger overhaul data
CFW — Cooling freshwater
CI — Compression ignition
CRDI — Common rail direct injection
DNN — Deep neural network
F.O — Fuel oil
GEP — Gene expression programming
HCNG — Hydrogen-enriched compressed natural gas
L.O — Lube oil
L.T — Low temperature
MAE — Mean absolute error
MAPE — Mean absolute percentage error
MLP — Multilayer perceptron
R2 — Coefficient of determination
RBF — Gaussian radial basis function
ReLU — Rectified linear unit
RMSE — Root-mean-squared error
rpm — Revolutions per minute
SI — Spark ignition
SVC — Support vector classification
SVR — Support vector regression
T/C — Turbocharger
tEx — Funnel exhaust gas temperature
XOR — Exclusive or
$r_{y\hat{y}}$ — Pearson correlation coefficient (Pearson r)
$y_i$ — Actual values
$\bar{y}$ — Average of actual values
$\hat{y}_i$ — Predicted values
$\bar{\hat{y}}$ — Average of predicted values

References

  1. UNCTAD. Review of Maritime Transport 2021; UNCTAD: Geneva, Switzerland, 2021. [Google Scholar]
  2. IMO. Fourth IMO GHG Study 2020 Full Report; IMO: London, UK, 2021. [Google Scholar]
  3. Chen, J.; Fei, Y.; Wan, Z. The Relationship between the Development of Global Maritime Fleets and GHG Emission from Shipping. J. Environ. Manag. 2019, 242, 31–39. [Google Scholar] [CrossRef] [PubMed]
  4. Boningari, T.; Smirniotis, P.G. Impact of Nitrogen Oxides on the Environment and Human Health: Mn-Based Materials for the NOx Abatement. Curr. Opin. Chem. Eng. 2016, 13, 133–141. [Google Scholar] [CrossRef]
  5. Viana, M.; Hammingh, P.; Colette, A.; Querol, X.; Degraeuwe, B.; de Vlieger, I.; van Aardenne, J. Impact of Maritime Transport Emissions on Coastal Air Quality in Europe. Atmos. Environ. 2014, 90, 96–105. [Google Scholar] [CrossRef]
  6. ABS. ABS Advisory on NOx Tier III Compliance; ABS: Houston, TX, USA, 2020. [Google Scholar]
  7. Ni, P.; Wang, X.; Li, H. A Review on Regulations, Current Status, Effects and Reduction Strategies of Emissions for Marine Diesel Engines. Fuel 2020, 279, 118477. [Google Scholar] [CrossRef]
  8. Liu, J.; Wang, H. Machine Learning Assisted Modeling of Mixing Timescale for LES/PDF of High-Karlovitz Turbulent Premixed Combustion. Combust. Flame 2022, 238, 111895. [Google Scholar] [CrossRef]
  9. Adams, D.; Oh, D.H.; Kim, D.W.; Lee, C.H.; Oh, M. Prediction of SOx–NOx Emission from a Coal-Fired CFB Power Plant with Machine Learning: Plant Data Learned by Deep Neural Network and Least Square Support Vector Machine. J. Clean. Prod. 2020, 270, 122310. [Google Scholar] [CrossRef]
  10. Wang, X.; Liu, W.; Wang, Y.; Yang, G. A Hybrid NOx Emission Prediction Model Based on CEEMDAN and AM-LSTM. Fuel 2021, 310, 122486. [Google Scholar] [CrossRef]
  11. Zhai, Y.; Ding, X.; Jin, X.; Zhao, L. Adaptive LSSVM Based Iterative Prediction Method for NOx Concentration Prediction in Coal-Fired Power Plant Considering System Delay. Appl. Soft Comput. J. 2020, 89, 106070. [Google Scholar] [CrossRef]
  12. Tan, Y.; Zhang, J.; Tian, H.; Jiang, D.; Guo, L.; Wang, G.; Lin, Y. Multi-Label Classification for Simultaneous Fault Diagnosis of Marine Machinery: A Comparative Study. Ocean Eng. 2021, 239, 109723. [Google Scholar] [CrossRef]
  13. Quintanilha, I.M.; Elias, V.R.M.; da Silva, F.B.; Fonini, P.A.M.; da Silva, E.A.B.; Netto, S.L.; Apolinário, J.A.; de Campos, M.L.R.; Martins, W.A.; Wold, L.E.; et al. A Fault Detector/Classifier for Closed-Ring Power Generators Using Machine Learning. Reliab. Eng. Syst. Saf. 2021, 212, 107614. [Google Scholar] [CrossRef]
  14. Wang, X.; Cai, Y.; Li, A.; Zhang, W.; Yue, Y.; Ming, A. Intelligent Fault Diagnosis of Diesel Engine via Adaptive VMD-Rihaczek Distribution and Graph Regularized Bi-Directional NMF. Meas. J. Int. Meas. Confed. 2021, 172, 108823. [Google Scholar] [CrossRef]
  15. Castresana, J.; Gabiña, G.; Martin, L.; Uriondo, Z. Comparative Performance and Emissions Assessments of a Single-Cylinder Diesel Engine Using Artificial Neural Network and Thermodynamic Simulation. Appl. Therm. Eng. 2021, 185, 116343. [Google Scholar] [CrossRef]
  16. Tuan Hoang, A.; Nižetić, S.; Chyuan Ong, H.; Tarelko, W.; Viet Pham, V.; Hieu Le, T.; Quang Chau, M.; Phuong Nguyen, X. A Review on Application of Artificial Neural Network (ANN) for Performance and Emission Characteristics of Diesel Engine Fueled with Biodiesel-Based Fuels. Sustain. Energy Technol. Assess. 2021, 47, 101416. [Google Scholar] [CrossRef]
  17. Ouyang, T.; Huang, G.; Su, Z.; Xu, J.; Zhou, F.; Chen, N. Design and Optimisation of an Advanced Waste Heat Cascade Utilisation System for a Large Marine Diesel Engine. J. Clean. Prod. 2020, 273, 123057. [Google Scholar] [CrossRef]
  18. Zhou, H.; Yang, W.; Sun, L.; Jing, X.; Li, G.; Cao, L. Reliability Optimization of Process Parameters for Marine Diesel Engine Block Hole System Machining Using Improved PSO. Sci. Rep. 2021, 11, 21983. [Google Scholar] [CrossRef]
  19. Asalapuram, V.S.; Khan, I.; Rao, K. A Novel Architecture for Condition Based Machinery Health Monitoring on Marine Vessels Using Deep Learning and Edge Computing. In Proceedings of the 2019 22nd IEEE International Symposium on Measurement and Control in Robotics: Robotics for the Benefit of Humanity (ISMCR 2019), Houston, TX, USA, 19–21 September 2019. [Google Scholar]
  20. Lazakis, I.; Gkerekos, C.; Theotokatos, G. Investigating an SVM-Driven, One-Class Approach to Estimating Ship Systems Condition. Ships Offshore Struct. 2019, 14, 432–441. [Google Scholar] [CrossRef]
  21. Tang, R.; Li, X.; Lai, J. A Novel Optimal Energy-Management Strategy for a Maritime Hybrid Energy System Based on Large-Scale Global Optimization. Appl. Energy 2018, 228, 254–264. [Google Scholar] [CrossRef]
  22. Uyanık, T.; Karatuğ, Ç.; Arslanoğlu, Y. Machine Learning Approach to Ship Fuel Consumption: A Case of Container Vessel. Transp. Res. Part D Transp. Environ. 2020, 84, 102389. [Google Scholar] [CrossRef]
  23. Duan, H.; Huang, Y.; Mehra, R.K.; Song, P.; Ma, F. Study on Influencing Factors of Prediction Accuracy of Support Vector Machine (SVM) Model for NOx Emission of a Hydrogen Enriched Compressed Natural Gas Engine. Fuel 2018, 234, 954–964. [Google Scholar] [CrossRef]
  24. Yusri, I.M.; Abdul Majeed, A.P.P.; Mamat, R.; Ghazali, M.F.; Awad, O.I.; Azmi, W.H. A Review on the Application of Response Surface Method and Artificial Neural Network in Engine Performance and Exhaust Emissions Characteristics in Alternative Fuel. Renew. Sustain. Energy Rev. 2018, 90, 665–686. [Google Scholar] [CrossRef]
  25. Shivakumar; Srinivasa Pai, P.; Shrinivasa Rao, B.R. Artificial Neural Network Based Prediction of Performance and Emission Characteristics of a Variable Compression Ratio CI Engine Using WCO as a Biodiesel at Different Injection Timings. Appl. Energy 2011, 88, 2344–2354. [Google Scholar] [CrossRef]
  26. Najafi, G.; Ghobadian, B.; Tavakoli, T.; Buttsworth, D.R.; Yusaf, T.F.; Faizollahnejad, M. Performance and Exhaust Emissions of a Gasoline Engine with Ethanol Blended Gasoline Fuels Using Artificial Neural Network. Appl. Energy 2009, 86, 630–639. [Google Scholar] [CrossRef]
  27. Çay, Y.; Korkmaz, I.; Çiçek, A.; Kara, F. Prediction of Engine Performance and Exhaust Emissions for Gasoline and Methanol Using Artificial Neural Network. Energy 2013, 50, 177–186. [Google Scholar] [CrossRef]
  28. Ganesan, P.; Rajakarunakaran, S.; Thirugnanasambandam, M.; Devaraj, D. Artificial Neural Network Model to Predict the Diesel Electric Generator Performance and Exhaust Emissions. Energy 2015, 83, 115–124. [Google Scholar] [CrossRef]
  29. Niu, X.; Yang, C.; Wang, H.; Wang, Y. Investigation of ANN and SVM Based on Limited Samples for Performance and Emissions Prediction of a CRDI-Assisted Marine Diesel Engine. Appl. Therm. Eng. 2017, 111, 1353–1364. [Google Scholar] [CrossRef]
  30. Liu, J.; Huang, Q.; Ulishney, C.; Dumitrescu, C.E. Machine Learning Assisted Prediction of Exhaust Gas Temperature of a Heavy-Duty Natural Gas Spark Ignition Engine. Appl. Energy 2021, 300, 117413. [Google Scholar] [CrossRef]
  31. Liu, B.; Hu, J.; Yan, F.; Turkson, R.F.; Lin, F. A Novel Optimal Support Vector Machine Ensemble Model for NOX Emissions Prediction of a Diesel Engine. Meas. J. Int. Meas. Confed. 2016, 92, 183–192. [Google Scholar] [CrossRef]
  32. Fu, J.; Yang, R.; Li, X.; Sun, X.; Li, Y.; Liu, Z.; Zhang, Y.; Sunden, B. Application of Artificial Neural Network to Forecast Engine Performance and Emissions of a Spark Ignition Engine. Appl. Therm. Eng. 2022, 201, 117749. [Google Scholar] [CrossRef]
  33. Roy, S.; Ghosh, A.; Das, A.K.; Banerjee, R. Development and Validation of a GEP Model to Predict the Performance and Exhaust Emission Parameters of a CRDI Assisted Single Cylinder Diesel Engine Coupled with EGR. Appl. Energy 2015, 140, 52–64. [Google Scholar] [CrossRef]
  34. Roy, S.; Banerjee, R.; Bose, P.K. Performance and Exhaust Emissions Prediction of a CRDI Assisted Single Cylinder Diesel Engine Coupled with EGR Using Artificial Neural Network. Appl. Energy 2014, 119, 330–340. [Google Scholar] [CrossRef]
  35. Sekar, R.R. Trends in Diesel Engine Charge Air Cooling. SAE Tech. Pap. 1982, 91, 820284–820688. [Google Scholar]
  36. Kim, J.H.; Kim, Y.; Lu, W. Prediction of Ice Resistance for Ice-Going Ships in Level Ice Using Artificial Neural Network Technique. Ocean Eng. 2020, 217, 108031. [Google Scholar] [CrossRef]
  37. McCulloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  38. Rosenblatt, F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef]
  39. Minsky, M.; Papert, S. Perceptrons; MIT Press: Cambridge, MA, USA, 1969. [Google Scholar]
  40. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation. In Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  41. Ramachandran, P.; Zoph, B.; Le, Q.V. Swish: A Self-Gated Activation Function. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  42. Glorot, X.; Bengio, Y. Understanding the Difficulty of Training Deep Feedforward Neural Networks. Proc. J. Mach. Learn. Res. 2010, 9, 249–256. [Google Scholar]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on Imagenet Classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar] [CrossRef]
  44. Dozat, T. Incorporating Nesterov Momentum into Adam. In Proceedings of the ICLR 2016-Workshop Track, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  45. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  46. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992. [Google Scholar]
  47. Guyon, I.; Boser, B.; Vapnik, V. Automatic Capacity Tuning of Very Large VC-Dimension Classifiers. Adv. Neural Inf. Process. Syst. 1993, 5, 147–155. [Google Scholar]
  48. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  49. Schölkopf, B.; Burges, C.; Vapnik, V. Extracting Support Data for a given Task. In Proceedings of the 1st International Conference on Knowledge Discovery & Data Mining, Montreal, QC, Canada, 20–21 August 1995. [Google Scholar]
  50. Schölkopf, B.; Burges, C.; Vapnik, V. Incorporating Invariances in Support Vector Learning Machines. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Bochum, Germany, 16–19 July 1996; Springer: Berlin/Heidelberg, Germany, 1996; Volume 1112 LNCS. [Google Scholar]
  51. Vapnik, V.; Golowich, S.E.; Smola, A. Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing. Adv. Neural Inf. Process. Syst. 1996, 9, 281–287. [Google Scholar]
  52. Chi, M.; Feng, R.; Bruzzone, L. Classification of Hyperspectral Remote-Sensing Data with Primal SVM for Small-Sized Training Dataset Problem. Adv. Space Res. 2008, 41, 1793–1799. [Google Scholar] [CrossRef]
  53. Lee, K.; Chung, Y.; Byun, H. SVM-Based Face Verification with Feature Set of Small Size. Electron. Lett. 2002, 38, 787–789. [Google Scholar] [CrossRef]
  54. Pal, M.; Foody, G.M. Evaluation of SVM, RVM and SMLR for Accurate Image Classification with Limited Ground Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1344–1355. [Google Scholar] [CrossRef]
  55. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2019. [Google Scholar]
  56. Noble, W.S. What Is a Support Vector Machine? Nat. Biotechnol. 2006, 24, 1565–1567. [Google Scholar] [CrossRef]
  57. Drucker, H.; Surges, C.J.C.; Kaufman, L.; Smola, A.; Vapnik, V. Support Vector Regression Machines. Adv. Neural Inf. Process. Syst. 1996, 9, 155–161. [Google Scholar]
  58. Smola, A.J.; Schölkopf, B. A Tutorial on Support Vector Regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the experimental setup.
Figure 1. Schematic diagram of the experimental setup.
Jmse 10 01118 g001
Figure 2. Comparison of NOx ppm levels between no-SCR and SCR modes.
Figure 2. Comparison of NOx ppm levels between no-SCR and SCR modes.
Jmse 10 01118 g002
Figure 3. Flowchart for emission prediction modeling.
Figure 4. Correlation heatmap of Da-sx.
Figure 5. ANN depicted as a BNN.
Figure 6. Example of the designed ANN with a structure of 64–32–16–8–4–1 layers.
Figure 7. Support vector regression.
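The epsilon-insensitive regression illustrated in Figure 7 can be sketched with scikit-learn's SVR. This is a minimal, hypothetical example on synthetic stand-in data: the RBF kernel and the C and epsilon values below are placeholders, since the study selected its SVM hyperparameters by trial and error and does not prescribe them here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for one input matrix (15 variables, as in the Da set)
# and one emission target (e.g., NOx in ppm).
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(200, 15))
y = 500.0 * X[:, 0] + 50.0 * X[:, 1] + rng.normal(0.0, 5.0, size=200)

# RBF-kernel SVR inside a scaling pipeline; C and epsilon are illustrative only.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X, y)
pred = model.predict(X)
print(pred.shape)  # (200,)
```

Scaling the inputs before the kernel, as the pipeline does here, is the usual practice for SVR since the RBF kernel is distance-based.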
Figure 8. Comparison of dataset performances for the ANN model in no-SCR mode.
Figure 9. Comparison of dataset performances for the SVM model in no-SCR mode.
Figure 10. Comparison of real and predicted values for different datasets with ANN models in no-SCR mode.
Figure 11. Comparison of real and predicted values for different datasets with SVM models in no-SCR mode.
Figure 12. Comparison of dataset performances for the ANN model in SCR mode.
Figure 13. Comparison of dataset performances for the SVM model in SCR mode.
Figure 14. Comparison of real and predicted values for different datasets with ANN models in SCR mode.
Figure 15. Comparison of real and predicted values for different datasets with SVM models in SCR mode.
Table 2. Engine specifications.
Type of engine: 4-stroke, vertical, direct-injection, single-acting, trunk-piston type with T/C and intercooler
Cylinder configuration: Inline
Number of cylinders: 6
Rated speed: 900 rpm
Power per cylinder: 200 kW
Cylinder bore: 210 mm
Piston stroke: 320 mm
Swept volume per cylinder: 11.1 dm3
Mean piston speed: 9.6 m/s
Mean effective pressure: 24.1 bar
Compression ratio: 17:1
Cylinder firing order: 1–4–2–6–3–5
Table 3. SCR specifications.
NOx emission value after the SCR: 2.31 g/kWh
Pressure drop across the SCR: ≤150 mmAq
Ammonia slip: ≤10 ppm
Sulfur content of the fuel oil for SCR operation: ≤0.1%
Maximum allowable exhaust gas temperature: ≤400 °C
Table 4. Fuel specifications.
API gravity, 60 °F: 35.6
Specific gravity, 15/4 °C: 0.8464
Flash point: 66.0 °C
Sulfur: 0.0340 wt.%
Kinematic viscosity: 2.8940 mm2/s
Net heat of combustion: 10,220 kcal/kg
Gross heat of combustion: 10,891 kcal/kg
Table 5. Accuracy of exhaust gas analyzer.
Flue gas CO2: ±2% ppm
Flue gas NOx: ±2% ppm
Exhaust gas temperature: ±0.4 °C (−100 to +200.0 °C); ±1.0 °C (200 to +1370.0 °C)
Table 6. Parameters and their ranges for the emission prediction model.
Common data for all datasets
LO inlet temperature (°C): 63.98–66.96
Cylinder exhaust gas outlet temperature, average (°C): 386.07–440.86
FO inlet temperature (°C): 14.99–19.98
T/C exhaust gas inlet temperature, average (°C): 422.95–530.89
LO inlet pressure (kg/cm2): 4.65–4.89
T/C exhaust gas outlet temperature (°C): 357.9–408.89
FO inlet pressure (kg/cm2): 6–6.2
Exhaust gas compensation temperature (°C): 24.96–36
LO filter inlet pressure (kg/cm2): 5.1–5.3
Charge air outlet temperature (°C): 36–38
FO filter inlet pressure (kg/cm2): 6–6.6
Charge air inlet pressure (kg/cm2): 0.5–2.4
T/C LO inlet pressure (kg/cm2): 3–3.5
T/C rpm pick-up (rpm): 22,603.18–40,771.68
Engine rpm pick-up (rpm): 897–901.98

Additional data for DaC
LT CFW outlet temperature (°C): 34.97–40.98
LT CFW temperature difference (°C): 0–5.03
Central CFW cooler temperature difference (°C): 0.47–1.11

Additional data for DaE
Alternator winding phase temperature, average (°C): 37.75–65.64
G/E generator current (A): 336–1186
G/E generator power (kW): 242–791.7
G/E current phase, average (A): 331–1176.33
G/E bus net used power (kW): 341–864
G/E non-drive-end bearing temperature (°C): 39.18–54.09

Additional data for SCR mode
G/E load to G/E SCR (%): 23.05–75.32
Urea injection (l/h): 4.7–14.68

Prediction data
CO2 (%): 5.48–7.44
NOx (ppm): 51–1016
tEx (°C): 158.1–382.2
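The variable groups in Table 6 combine into the four dataset types. The sketch below expresses that composition in Python; the short tag names are hypothetical stand-ins for the actual sensor names in Table 6, but the group sizes (15 common engine variables, 3 cooling, 6 electrical, 2 SCR-mode) follow the table and reproduce the 15-to-26-variable range stated in the abstract.

```python
# Hypothetical short tags standing in for the Table 6 parameter names.
ENGINE = [f"engine_var_{i}" for i in range(15)]        # common data, 15 variables
COOLING = ["lt_cfw_out_temp", "lt_cfw_temp_diff", "central_cfw_temp_diff"]
ELECTRICAL = ["alt_winding_temp", "gen_current", "gen_power",
              "phase_current", "bus_used_power", "nde_bearing_temp"]
SCR_EXTRA = ["ge_load_to_scr", "urea_injection"]

def build_inputs(scr_mode: bool) -> dict:
    """Input-variable lists for the Da, DaC, DaE, and DaCE dataset types."""
    extra = SCR_EXTRA if scr_mode else []
    return {
        "Da":   ENGINE + extra,
        "DaC":  ENGINE + COOLING + extra,
        "DaE":  ENGINE + ELECTRICAL + extra,
        "DaCE": ENGINE + COOLING + ELECTRICAL + extra,
    }

print({k: len(v) for k, v in build_inputs(scr_mode=False).items()})
# {'Da': 15, 'DaC': 18, 'DaE': 21, 'DaCE': 24}
print(len(build_inputs(scr_mode=True)["DaCE"]))  # 26
```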
Table 7. Hyperparameters for the ANN model.
ANN structure: 64–32–16–8–4–1
Activation function: Swish
Kernel initializer: He-uniform
Optimizer: Nadam
Learning rate for the optimizer: 0.0001
Dropout rate: 10%
Patience for early stopping: 300
Epochs: 3000
Batch size: 16
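The structure in Table 7 can be made concrete with a minimal NumPy sketch of the forward pass of the 64–32–16–8–4–1 network with Swish activations and He-uniform initialization. Training-time details from the table (Nadam optimizer, 10% dropout, early stopping) are deliberately omitted, and the 15-input width is just one example of a dataset size.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_uniform(fan_in: int, fan_out: int) -> np.ndarray:
    # He-uniform initialization: U(-limit, limit) with limit = sqrt(6 / fan_in).
    limit = np.sqrt(6.0 / fan_in)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def swish(x: np.ndarray) -> np.ndarray:
    # Swish activation: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

def build_mlp(n_inputs: int, hidden=(64, 32, 16, 8, 4)):
    sizes = (n_inputs, *hidden, 1)                       # the 64-32-16-8-4-1 structure
    return [(he_uniform(a, b), np.zeros(b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x: np.ndarray) -> np.ndarray:
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:                          # linear output for regression
            x = swish(x)
    return x

layers = build_mlp(n_inputs=15)                          # e.g., a 15-variable dataset
out = forward(layers, rng.standard_normal((8, 15)))      # batch of 8 samples
print(out.shape)  # (8, 1)
```

The output layer is left linear, which is the usual choice when regressing continuous targets such as CO2, NOx, or tEx.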
Table 8. Metrics from different datasets for the ANN model in no-SCR mode: (a) models trained 15 times on the ATC training set, with the average of the 15 validation-set metrics; (b) the 15 trained models tested on the BTC, with the average of the 15 test metrics; (c) the model that performed best on the validation set, tested on the BTC.
              Da-sx                         DaC-sx                        DaE-sx                        DaCE-sx
              CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)
(a) RMSE      0.0461   13.5937    5.5857    0.0460   12.9778    5.1007    0.0425   11.8056    5.2933    0.0449   12.3552    4.0531
    MAE       0.0338   10.9727    4.0019    0.0335   10.2915    3.7966    0.0313    9.6654    3.6559    0.0325    9.9308    2.9179
    MAPE      0.5474    1.3560    1.8322    0.5434    1.2845    1.7303    0.5071    1.1913    1.7301    0.5259    1.2404    1.3278
    R2        0.9424    0.9853    0.9918    0.9428    0.9860    0.9923    0.9510    0.9891    0.9915    0.9453    0.9878    0.9955
    Pearson r 0.9735    0.9950    0.9977    0.9740    0.9960    0.9980    0.9769    0.9961    0.9986    0.9755    0.9966    0.9989

              Da-sx-test                    DaC-sx-test                   DaE-sx-test                   DaCE-sx-test
(b) RMSE      0.1208   25.8067    6.7677    0.1148   26.8612    7.0220    0.1010   21.3484    5.1222    0.1093   20.2085    5.5835
    MAE       0.0915   20.1967    5.2798    0.0883   20.5117    5.2452    0.0756   16.1402    4.1195    0.0841   16.0501    4.4798
    MAPE      1.4360    2.3328    1.5056    1.3834    2.3573    1.4881    1.1810    1.8805    1.1692    1.3139    1.8724    1.2706
    R2        0.7846    0.9477    0.8583    0.8075    0.9413    0.8491    0.8498    0.9637    0.9155    0.8245    0.9653    0.9042
    Pearson r 0.9090    0.9795    0.9327    0.9213    0.9773    0.9340    0.9335    0.9858    0.9621    0.9266    0.9866    0.9550

(c) RMSE      0.1100   20.6742   11.8977    0.1083   20.7300    7.1590    0.0991   23.8558    6.7880    0.1162   19.1140    4.2768
    MAE       0.0835   16.5848    8.9862    0.0872   16.2398    5.2717    0.0742   18.9137    5.1237    0.0929   14.9196    3.5232
    MAPE      1.3052    1.9477    2.5613    1.3618    1.8357    1.4758    1.1608    2.1030    1.4325    1.4376    1.6747    0.9975
    R2        0.8242    0.9673    0.5905    0.8294    0.9671    0.8518    0.8573    0.9564    0.8667    0.8036    0.9720    0.9471
    Pearson r 0.9229    0.9858    0.7856    0.9162    0.9858    0.9342    0.9326    0.9865    0.9417    0.9335    0.9910    0.9752
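The five metrics reported in Tables 8–11 (RMSE, MAE, MAPE, R2, and Pearson r) follow their standard definitions; a small NumPy sketch, assuming strictly nonzero targets so the percentage MAPE is defined:

```python
import numpy as np

def regression_metrics(y_true, y_pred) -> dict:
    """RMSE, MAE, MAPE (%), R2, and Pearson r, as reported in Tables 8-11."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    ss_res = np.sum(err ** 2)                                # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)           # total sum of squares
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100.0),  # needs nonzero targets
        "R2": float(1.0 - ss_res / ss_tot),
        "Pearson r": float(np.corrcoef(y_true, y_pred)[0, 1]),
    }

perfect = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
print(perfect["RMSE"], perfect["R2"])  # 0.0 1.0
```

Note that R2 and Pearson r answer slightly different questions: R2 penalizes any deviation from the identity line, while Pearson r only measures linear association, which is why the tables report both.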
Table 9. Metrics from different datasets for the SVM model in no-SCR mode: (a) model trained on the ATC training set, with metrics from the validation set; (b) the trained model tested on the BTC, with its metrics.
              Da-sx                         DaC-sx                        DaE-sx                        DaCE-sx
              CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)
(a) RMSE      0.0356    8.9050    2.2663    0.0336    6.8926    2.1422    0.0343    5.9282    0.6333    0.0324    6.0193    0.4150
    MAE       0.0274    6.3848    1.6711    0.0272    5.0855    1.4682    0.0266    4.7832    0.3586    0.0264    5.0455    0.3143
    MAPE      0.4421    0.7546    0.6191    0.4366    0.5960    0.5583    0.4283    0.5672    0.1548    0.4234    0.6001    0.1324
    R2        0.9657    0.9939    0.9987    0.9695    0.9963    0.9988    0.9682    0.9973    0.9999    0.9716    0.9972    1.0000
    Pearson r 0.9829    0.9970    0.9994    0.9851    0.9982    0.9994    0.9854    0.9987    1.0000    0.9877    0.9986    1.0000

              Da-sx-test                    DaC-sx-test                   DaE-sx-test                   DaCE-sx-test
(b) RMSE      0.1312   45.9671    8.9578    0.1136   28.7961    9.5169    0.0901   15.6995    4.1195    0.1137   17.1088    4.5839
    MAE       0.1073   39.2833    7.6497    0.0936   24.9758    8.4966    0.0659   13.5417    3.5823    0.0836   14.5928    4.1775
    MAPE      1.6889    4.7777    2.2293    1.4584    2.9362    2.4750    1.0243    1.4900    0.9989    1.2924    1.7120    1.1871
    R2        0.7499    0.8382    0.7679    0.8124    0.9365    0.7380    0.8819    0.9811    0.9509    0.8119    0.9776    0.9392
    Pearson r 0.8995    0.9243    0.9199    0.9058    0.9711    0.9276    0.9399    0.9920    0.9754    0.9303    0.9962    0.9712
Table 10. Metrics from different datasets for the ANN model in SCR mode: (a) models trained 15 times on the ATC training set, with the average of the 15 validation-set metrics; (b) the 15 trained models tested on the BTC, with the average of the 15 test metrics; (c) the model that performed best on the validation set, tested on the BTC.
              Da-so                         DaC-so                        DaE-so                        DaCE-so
              CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)
(a) RMSE      0.0498   13.3101    0.7196    0.0481   13.8279    0.6893    0.0478   13.0547    0.3538    0.0471   13.7466    0.3843
    MAE       0.0351    9.6977    0.5599    0.0345   10.0285    0.5240    0.0346    9.4762    0.2760    0.0342    9.8260    0.3080
    MAPE      0.5633    8.6440    0.1502    0.5539    8.6388    0.1407    0.5545    8.4854    0.0738    0.5486    8.7009    0.0825
    R2        0.9479    0.9699    0.9701    0.9514    0.9675    0.9725    0.9519    0.9710    0.9928    0.9534    0.9679    0.9910
    Pearson r 0.9757    0.9867    0.9880    0.9772    0.9848    0.9909    0.9773    0.9877    0.9972    0.9780    0.9857    0.9972

              Da-so-test                    DaC-so-test                   DaE-so-test                   DaCE-so-test
(b) RMSE      0.1036   18.5344    1.7806    0.1015   20.2399    1.7621    0.1017   19.6393    1.8163    0.1047   19.0223    1.8988
    MAE       0.0698   15.1720    1.2828    0.0682   16.1593    1.4040    0.0678   16.4335    1.4341    0.0707   15.2021    1.5524
    MAPE      1.0647    7.6687    0.3393    1.0362    8.1386    0.3716    1.0283    8.1405    0.3793    1.0766    7.6966    0.4103
    R2        0.8669    0.9315    0.8339    0.8725    0.9202    0.8407    0.8713    0.9245    0.8322    0.8641    0.9286    0.8166
    Pearson r 0.9350    0.9768    0.9197    0.9361    0.9674    0.9205    0.9348    0.9752    0.9179    0.9322    0.9699    0.9086

(c) RMSE      0.1093   16.7953    1.5430    0.0954   16.3705    1.8551    0.0998   18.9179    1.7513    0.1138   15.1043    1.9636
    MAE       0.0713   14.0085    1.0606    0.0590   13.8268    1.4716    0.0671   16.3756    1.2498    0.0848   12.5964    1.6660
    MAPE      1.0901    6.9846    0.2800    0.8975    6.9991    0.3892    1.0114    7.6940    0.3303    1.2954    6.3993    0.4401
    R2        0.8526    0.9462    0.8795    0.8877    0.9489    0.8258    0.8772    0.9318    0.8448    0.8404    0.9565    0.8049
    Pearson r 0.9262    0.9867    0.9411    0.9447    0.9796    0.9155    0.9378    0.9764    0.9291    0.9207    0.9828    0.9031
Table 11. Metrics from different datasets for the SVM model in SCR mode: (a) model trained on the ATC training set, with metrics from the validation set; (b) the trained model tested on the BTC, with its metrics.
              Da-so                         DaC-so                        DaE-so                        DaCE-so
              CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)  CO2 (%)  NOx (ppm)  tEx (°C)
(a) RMSE      0.0401   11.3561    1.0949    0.0402   11.2468    1.0645    0.0387   10.9078    0.9494    0.0381   10.8215    0.9516
    MAE       0.0308    8.4896    0.7926    0.0307    8.4305    0.6722    0.0286    8.2835    0.7048    0.0284    8.1028    0.6140
    MAPE      0.4979    7.6329    0.2114    0.4959    7.5758    0.1791    0.4628    7.4243    0.1881    0.4593    7.3156    0.1636
    R2        0.9648    0.9776    0.9267    0.9646    0.9780    0.9307    0.9672    0.9793    0.9449    0.9681    0.9797    0.9446
    Pearson r 0.9829    0.9889    0.9647    0.9829    0.9891    0.9670    0.9842    0.9898    0.9733    0.9845    0.9899    0.9736

              Da-so-test                    DaC-so-test                   DaE-so-test                   DaCE-so-test
(b) RMSE      0.0987   13.9492    1.7759    0.0987   13.7728    1.7947    0.0911   13.9251    1.5729    0.0912   13.6775    1.5688
    MAE       0.0597   11.7822    1.4015    0.0581   11.5351    1.3697    0.0516   11.8302    1.2459    0.0508   11.4612    1.1570
    MAPE      0.9014    5.8510    0.3717    0.8757    5.7186    0.3627    0.7772    5.9312    0.3301    0.7636    5.7718    0.3061
    R2        0.8797    0.9629    0.8404    0.8799    0.9638    0.8370    0.8975    0.9630    0.8748    0.8975    0.9643    0.8754
    Pearson r 0.9471    0.9909    0.9309    0.9458    0.9904    0.9271    0.9481    0.9916    0.9429    0.9481    0.9912    0.9431
Park, M.-H.; Lee, C.-M.; Nyongesa, A.J.; Jang, H.-J.; Choi, J.-H.; Hur, J.-J.; Lee, W.-J. Prediction of Emission Characteristics of Generator Engine with Selective Catalytic Reduction Using Artificial Intelligence. J. Mar. Sci. Eng. 2022, 10, 1118. https://doi.org/10.3390/jmse10081118
