Article

Enhancing Predictive Accuracy Under Data Scarcity: Modeling Molecular Interactions to Describe Sealing Material Compatibility with Bio-Hybrid Fuels

Institute for Fluid Power Drives and Systems (ifas), RWTH Aachen University, 52074 Aachen, Germany
* Author to whom correspondence should be addressed.
Physchem 2025, 5(2), 15; https://doi.org/10.3390/physchem5020015
Submission received: 14 February 2025 / Revised: 7 March 2025 / Accepted: 31 March 2025 / Published: 8 April 2025
(This article belongs to the Section Theoretical and Computational Chemistry)

Abstract

Bio-hybrid fuels, chemically derived from sustainable raw materials and green energies, offer significant potential to reduce carbon dioxide emissions in the transport sector. However, when these fuels are used as drop-in replacements in internal combustion engines, compatibility with common sealing materials is not always ensured. Within the Cluster of Excellence "The Fuel Science Center (FSC)" at RWTH Aachen, experimental immersion tests were conducted on a limited set of fuel and sealing material combinations. Given the extensive range of possible fuel and sealing combinations, a data-based machine learning prediction framework was developed and validated to pre-select promising fuel candidates. Due to the limited number of samples, preliminary results indicate a need to expand the database. Since experimental investigations are time-consuming and costly, this work explores faster physics-motivated data generation approaches that model the molecular interactions between fuel and sealing materials. Two modeling scales are employed: one calculates the intermolecular distance using density functional theory, while the other uses Hansen solubility parameters, an abstract representation of the intermolecular forces. Both approaches are compared, and their limitations are assessed. Including the generated data in the prediction framework improves its accuracy.

1. Introduction

Given the need to explore alternative propulsion systems in the transportation sector, liquid bio-hybrid fuels emerge as a promising energy carrier. The term bio-hybrid fuels covers a range of fuels that combine carbon sources, such as renewable feedstock or waste, with external energy inputs. The production process incorporates renewable energy or biological processes to elevate the energy state of the base product, enabling the synthesis of fuels with a high energy density. Complementing battery-electric, hydrogen, and ammonia-powered systems, the fuels' high energy density makes them especially suitable for difficult-to-electrify applications, such as aircraft and heavy-duty machinery.
Within the Cluster of Excellence “The Fuel Science Center (FSC)” at RWTH Aachen University, these fuels are studied holistically, focusing on developing methods to identify optimal fuel candidates that balance environmental, economic, and technical requirements. This is achieved through an interdisciplinary fuel design process, addressing fuel properties so that they comply with current standards and regulations. This includes engine (combustion)-relevant properties such as research octane number (RON) [1,2], ignition delay time (IDT) [3,4], or catalytic activity in the exhaust system, fluid mechanical and rheological properties such as density, viscosity, and surface tension [5,6], as well as toxicological assessments [2].
The full potential of these fuels can only be realized when they are used in existing combustion systems as "drop-in" fuels, requiring no significant modifications to the current infrastructure. However, this requires material compatibility with all system components, mainly static and dynamic seals. Previous studies and immersion tests have shown that many bio-hybrid fuels in particular are incompatible with conventional elastomer sealing materials. These interactions can cause significant swelling, with elastomer volume increasing by over 200%, leading to immediate failure in technical applications. Alongside swelling, additional wear mechanisms, such as changes in hardness and chemical reactions, have also been observed. With these investigations, suitable sealing materials for existing fuels can be identified, or conversely, fuels or fuel blends can be optimized for improved material compatibility [5,7,8].
However, the field of potential fuel candidates, blends, and sealing material combinations is vast, making a comprehensive investigation impractical. Additionally, immersion tests are manually intensive, time-consuming, and costly, highlighting the need for a targeted experimental design to select combinations for further strategic study. To address this, a supervised machine learning regression approach has been developed to predict elastomer property changes for specific fuel and seal combinations after immersion. A pairing is considered valid if the predicted property changes in the sealing material fall within predefined value ranges. These ranges are derived from current fuel standards, observations with conventional fuels, and plausible technical limitations, such as a maximum allowable volume increase of 50%. Combinations with values that deviate significantly from this range are excluded from further consideration, thereby narrowing down the field of possible combinations [9].
An initial application of the framework highlights the need for optimization, as model evaluation reveals generally low training and testing scores. This is primarily due to the limited number of data samples [9]. In addition to ongoing experimental data generation and model refinements, this study proposes alternative, faster approaches for generating data by modeling the interactions between the fuel and sealing material. Assessing the quality of such simulated data for use in a supervised machine learning model for predicting material compatibility is the task of this paper.
After an introduction to the general ML process, examples showcase its application in fluid power systems and the fuel design process. Subsequently, two methods used in this work to model the interaction between fuel and elastomer are introduced: one approach models interactions at the quantum-mechanical level, known as density functional theory (DFT), while the other provides a more generalized, abstract representation, referred to as the Hansen solubility parameters (HSPs) [10].
Section 2 introduces the process of the experimental data generation and details the architecture of the prediction framework. Additionally, the methodologies for synthetic data generation, including both DFT-based and HSP-based approaches, are presented. Results are presented in Section 3, which is structured in three parts. First, the outcomes of the data generation process are presented, detailing the characteristics and reliability of the synthetic data. Second, the baseline results, obtained using only experimental data within the prediction framework, are analyzed to establish a reference for comparison. Finally, the impact of incorporating synthetic data is evaluated by comparing the prediction performance between the baseline and extended database cases. The results of both cases are analyzed in Section 4, focusing on the effects of dataset expansion on model performance and identifying key areas for future refinement. The overall structure of this work is visually represented in Figure 1.

1.1. Machine Learning Application

Machine learning is a branch of artificial intelligence that enables computers to learn patterns from data and make decisions without explicit programming. The general process involves data collection and pre-processing, model selection, training to extract underlying patterns, and evaluation to refine predictions. This methodology has been successfully applied across a range of domains, including fluid power, physical and chemical system modeling, and material science.
Recent advancements in physics-informed machine learning have demonstrated significant success in tribological investigations, where ML models are integrated with physical laws to enhance prediction accuracy and interpretability [11,12,13,14,15,16]. In the field of fluid power, ML has been employed for fault detection [17] and condition monitoring of hydraulic systems [18,19], illustrating its potential to improve system reliability and performance by identifying anomalies and predicting failures. Historically, the prediction of material compatibility—particularly in hydraulic systems—has relied on group contribution methods to estimate Hansen solubility parameters (HSPs) [20].
In the context of fuel design, ML techniques have been leveraged for uncertainty quantification [21] and for predicting key properties such as fuel ignition quality using graph neural networks [1]. Additional investigations have focused on the material compatibility of bio-hybrid fuels and elastomers using HSP-based methods [7], and ML frameworks have been developed for the evaluation of drop-in aviation fuels [22].

1.2. Interaction Simulation HSPs and DFT

This work explores two methods to model intermolecular interactions between fuels and sealing materials. The extent of intermolecular interaction between the fuel and elastomer is a crucial point to consider when evaluating material compatibility. Failure of the sealing material results from swelling of the elastomer, which in turn is caused by the elastomer interacting with the fuel molecules on a molecular level. A common approach to assess material compatibility involves describing the interactions using three numerical parameters known as Hansen solubility parameters. Another approach involves quantum-mechanical analysis of molecular interactions using density functional theory (DFT). This method iteratively calculates the equilibrium state between two molecules and determines the parameters characterizing this state. In the subsequent sections, both approaches are introduced.
  • HSP
The concept of Hansen solubility parameters (HSPs) is commonly used to select suitable solvents for a given solute. The principle of "like dissolves like" predicts solubility when the intermolecular forces of the solvent and solute are similar in type and strength. An initial approach to predict the solubility of two substances was proposed by Hildebrand based on the eponymous Hildebrand parameter $\delta_T$. This parameter is defined as the square root of the cohesive energy density of a substance, where $V$ is the molar volume of the pure substance and $E$ is its energy of vaporization [23].
$\delta_T = (E/V)^{1/2}$ (1)
However, this single parameter proved to be insufficient to fully represent the complex interactions between different molecules. Therefore, Hansen expanded Hildebrand's approach and further divided the binding energy into the contributions from dispersive forces $E_D$, polar forces $E_P$, and hydrogen bonding $E_H$ [10].
$E = E_D + E_P + E_H$ (2)
Dividing each component of the binding energy by the molar volume yields the squares of the three Hansen solubility parameters (HSPs), $\delta_D$, $\delta_P$, and $\delta_H$, whose sum equals the square of the Hildebrand solubility parameter.
$\delta_T^2 = \delta_D^2 + \delta_P^2 + \delta_H^2$ (3)
Each substance (molecule) can be located in the three-dimensional HSP space with these three parameters. The distance $R_a$ between the locations of two molecules in this space indicates the similarity of these molecules. Smaller distances suggest greater solubility potential between the substances. The dispersive term is weighted by a factor of four, which was determined empirically.
$R_a^2 = 4(\delta_{D2} - \delta_{D1})^2 + (\delta_{P2} - \delta_{P1})^2 + (\delta_{H2} - \delta_{H1})^2$ (4)
Generally, HSPs are obtained via experimental investigations, but HSP values are available for many pure substances in the literature. Furthermore, HSPs can be calculated using the group contribution method or quantitative structure–property relationship (QSPR) models. However, since this work initially focuses only on pure and common substances, values from the literature [10] are utilized in the following to predict numerical volume change values for data generation.
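To make Equation (4) concrete, the short sketch below computes $R_a$ for a fuel–elastomer pair in Python, the language of the prediction framework. The HSP values used are illustrative placeholders, not literature data from [10].

```python
import math

def hansen_distance(hsp_1, hsp_2):
    """Compute the Hansen distance Ra between two substances.

    Each argument is a tuple (delta_D, delta_P, delta_H) in MPa^0.5.
    The dispersive term carries the empirical factor of four from Equation (4).
    """
    dD1, dP1, dH1 = hsp_1
    dD2, dP2, dH2 = hsp_2
    ra_squared = (4.0 * (dD2 - dD1) ** 2
                  + (dP2 - dP1) ** 2
                  + (dH2 - dH1) ** 2)
    return math.sqrt(ra_squared)

# Illustrative placeholder values (MPa^0.5), not literature data.
elastomer_hsp = (18.0, 8.0, 4.0)   # hypothetical NBR-like parameters
fuel_hsp = (15.8, 8.8, 19.4)       # hypothetical alcohol-like parameters

print(f"Ra = {hansen_distance(elastomer_hsp, fuel_hsp):.2f} MPa^0.5")
```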
  • DFT
Examining the system on a molecular level enables an analysis of the elastomer–fuel interaction. To allow for a quantitative interpretation of this interaction, a minimum accuracy needs to be obtained using the chosen method. Due to the inherent lack of electron correlation and thus limited accuracy in Hartree–Fock theory, as well as the increasing cost of calculation in terms of CPU hours for post-Hartree–Fock methods, DFT offers a good trade-off between accuracy and cost for electronic structure calculation of organic molecules.
The electronic properties of a chemical system are related to the system's electronic wave function. For one-electron systems, the wave function can be determined via an analytical solution of the Schrödinger equation. However, this cannot be done for systems involving more than one electron. This is where density functional theory (DFT) comes into play for electronic structure calculations.
The central variable in DFT is the electron density $\rho$, which relates to the probability of finding any electron at position $\mathbf{r}$. This relationship is shown in Equation (5) using the formalism of the Born rule, which states that the probability of finding an electron at a given position is proportional to the square of the amplitude of the wave function $\Psi$ [24].
$\rho(\mathbf{r}) = n \int \cdots \int \Psi^*(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)\, \Psi(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)\, \mathrm{d}s_1\, \mathrm{d}\mathbf{x}_2 \cdots \mathrm{d}\mathbf{x}_n$ (5)
The mathematical foundation of DFT lies in the Hohenberg–Kohn theorem (see Equation (6)), which postulates the existence of a functional of the density $F[\rho(\mathbf{r})]$ that yields the correct ground-state energy $E_0$ of the system and thus allows for the description of such systems using the electron density [25].
$E_0 = F[\rho_0(\mathbf{r})]$ (6)
Kohn and Sham made this approach computationally accessible by introducing the orbitals of a comparison system of non-interacting fermions (Kohn–Sham orbitals) into the mathematical treatment (Kohn–Sham DFT) [26]. However, the exact form of the universal exchange-correlation functional remains unknown. Various approximate functionals describe the relationship between electron density and exchange-correlation energy to different extents for different applications. These functionals are classified into categories and arranged on Jacob's ladder [27]. Here, the user must make a trade-off between increasing the complexity and accuracy of the functional on one side and increasing computational cost on the other.
In the past, DFT was used in various studies to research intermolecular interactions. Examples include the work of Miranda-Quintana et al. [28], which used minimum-interaction energies from DFT to predict the reactive behavior of a pair of reagents. The so-called “conceptual DFT” has been proven to explain general chemical concepts like the HSAB principle [29], emphasizing the importance of this theory for interpreting intermolecular interactions. DFT has also contributed to the development of various descriptors regarding reactivity; for example, the prediction of site reactivity in substituted phenyl molecules [30], further explaining chemical interactions between molecules.
With the development of computational methods that account for long-range interactions, the prediction of intermolecular interaction energies becomes more accurate [31]. In the context of researching polymer–solvent intermolecular interactions, DFT was used by Yamada et al. [32] to quantify NBR–solvent interactions, leading to theoretical results in good agreement with experimental results. Similarly, through DFT, Wu et al. investigated intermolecular interactions between a polyacrylonitrile polymer and different solvent molecules [33].

2. Materials and Methods

This section introduces the methods and procedures for data generation, both empirical and simulation-based, and the implementation of the prediction framework. To date, only experimental data have been used to train and evaluate the prediction framework. This study investigates the contribution of simulated data to the accuracy of prediction.

2.1. Experimental Data Generation

Central to the investigation of elastomer compatibility are immersion tests conducted following ISO 1817 [34]. Standard reference elastomer NBR (SRE-NBR 28/SX), according to DIN ISO 13226 [35], was immersed in different bio-hybrid fuels and pure fuel components for 28 days. Specimens measuring 25 × 25 × 2 mm³ were fully submerged in sufficient fuel throughout the test duration. The use of SRE, with its known composition and absence of additives, enables comparison of results with the literature and previous studies while isolating the influence of individual fuel properties and constituents.
During the testing period, changes in elastomer mass, volume, and hardness were evaluated following DIN 53521 [36] at defined time intervals. Of particular importance in assessing elastomer compatibility are the changes in specimen properties, specifically the volume change $\Delta V$ and hardness change $\Delta H$ of the final values ($V$ and $H$) relative to the initial values ($V_0$ and $H_0$) (see Equation (7)). These results are shown in Figure 2. For a detailed description of the measurement principles, see Hofmeister et al. [5,8]. At this stage, only volume change data are considered for further use throughout this study.
$\Delta V = \frac{V - V_0}{V_0} \cdot 100, \qquad \Delta H = \frac{H - H_0}{H_0} \cdot 100$ (7)

2.2. Prediction Framework

Central to this study is the ML prediction framework developed and initially validated in [9]. This section introduces the structure of this framework.
A supervised learning (SL) regression approach was chosen to predict the property changes of elastomers. In SL, a model is trained on a dataset containing input features and corresponding target variables (labels) to predict a numerical output for given inputs. In this case, the input parameters for the ML models were combined in a molecular fingerprint, which was generated for each fuel candidate using the open-source cheminformatics Python libraries RDKit [37] and Mordred [38]. Molecular fingerprints are digital representations of chemical structures, capturing essential information such as molecular size, shape, and chemical properties in a one-dimensional vector. Here, the molecular fingerprints represent, among others, key fuel properties related to molecular size and polarity, which have been identified as influencing the volume and hardness changes of elastomer specimens immersed in different fuels. The labels are the measured volume changes obtained from previous immersion tests. This represents a single-output regression problem, for which linear and nonlinear models can be used to predict the target value. Rather than selecting one single model, the framework evaluates multiple models in succession.
The framework is implemented in Python 3.10 and makes use of library extensions that provide mathematical and machine learning toolboxes. This study's library list is shown in Table 1. Among the tools provided by scikit-learn [39], the framework utilizes four built-in regression models: linear regression, lasso regression, multi-layer perceptron (MLP) regression, and decision tree regression.
The dataset, stored in an Excel file, linked each fuel or pure fluid’s CAS number to its corresponding SRE-NBR volume changes. During data input to the framework, the canonical SMILES representation was generated for each CAS number using PubChemPy [43]. A 3D molecular model was then constructed using RDKit [37], and all molecular descriptors were calculated using Mordred [38]. This way, 1826 numerical features were obtained for each data sample. Their correlation to the target value (volume change) was first calculated to consider only the most influential features. Then, they were sorted in descending order based on their correlation score. Subsequently, features with a correlation score below an absolute threshold of 0.33 were removed. In doing this, the number of features was reduced by a factor of around 30.
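A condensed sketch of this feature-generation step is given below. It relies on the libraries listed in Table 1 (PubChemPy, RDKit, Mordred); the function name, the example CAS number, and the commented-out selection step are illustrative, and error handling for failed lookups is omitted.

```python
import pubchempy as pcp
from rdkit import Chem
from rdkit.Chem import AllChem
from mordred import Calculator, descriptors

def featurize(cas_numbers):
    """Resolve each CAS number to a canonical SMILES string, build a 3D
    conformer, and compute the full set of Mordred descriptors."""
    mols = []
    for cas in cas_numbers:
        smiles = pcp.get_compounds(cas, "name")[0].canonical_smiles
        mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
        AllChem.EmbedMolecule(mol, randomSeed=42)  # generate 3D coordinates
        mols.append(mol)
    calc = Calculator(descriptors, ignore_3D=False)
    return calc.pandas(mols)  # DataFrame with roughly 1800 descriptors per sample

features = featurize(["64-17-5"])  # illustrative CAS number (ethanol)

# Correlation-based feature selection against the measured volume change:
# volume_change is a pandas Series of experimental labels aligned with `features`.
# corr = features.corrwith(volume_change).abs().sort_values(ascending=False)
# selected = features[corr[corr >= 0.33].index]  # keep only strongly correlated features
```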
Next, a common drawback inherent to data-driven algorithms was addressed by applying feature scaling to the input data. This normalizes the features to a typical band, resolving the issue of different units and magnitudes in the input data. Scaling was performed by applying a standard scaler, which normalizes each feature by subtracting its mean and dividing by its standard deviation. This transformation ensures each feature has a mean of 0 and a standard deviation of 1 [9]. Another drawback, especially in the framework's development phase, is the high dimensionality of the features alongside a limited number of data samples. Hence, a principal component analysis (PCA) was performed to reduce dimensionality by transforming the original features into a new set of orthogonal, uncorrelated variables. The number of principal components was selected based on the available data samples, with approximately 10 data samples corresponding to each input feature. Lastly, it was ensured that the model architecture was optimally adapted to the given data. This adjustment ensures optimal performance. However, the field of possible model architectures is almost infinite. Therefore, automated, heuristic, and model-based search algorithms such as halving grid search cross-validation (CV) were implemented in the framework.
Due to the small dataset (n = 49), statistical uncertainties arose, which were addressed within the framework using k-fold cross-validation. The data were divided into five non-overlapping subsets. For each subset, hyperparameter tuning was performed. The best model was then evaluated using the $R^2$ metric. This process was repeated for all folds, and the average scores were calculated to facilitate model comparison.
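The preprocessing and model-selection chain described above maps directly onto scikit-learn, as sketched below. The hyperparameter grid, the number of principal components, and the random placeholder data are illustrative only and do not reproduce the settings of the framework.

```python
import numpy as np
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV, KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Placeholder data: X = selected molecular descriptors, y = measured volume changes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(49, 60)), rng.normal(size=49)

pipeline = Pipeline([
    ("scaler", StandardScaler()),   # zero mean, unit variance per feature
    ("pca", PCA(n_components=5)),   # roughly 10 samples per retained component
    ("model", MLPRegressor(max_iter=5000)),
])

# Hyperparameter tuning via successive halving (illustrative grid).
param_grid = {"model__hidden_layer_sizes": [(10,), (20,), (20, 10)],
              "model__alpha": [1e-4, 1e-3, 1e-2]}
search = HalvingGridSearchCV(pipeline, param_grid, cv=5, scoring="r2")
search.fit(X, y)

# 5-fold cross-validation of the tuned model; the averaged R2 supports model comparison.
scores = cross_val_score(search.best_estimator_, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="r2")
print(f"mean R2 = {scores.mean():.3f}")
```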

2.3. Simulative Data Generation

The impact of incorporating physics-related synthetic data on the accuracy of the prediction framework is central to this study. This section, therefore, introduces the process for generating data samples without additional experiments. Fundamental to this method are the previously presented approaches, HSP and DFT, which characterize the interactions between fuel and elastomer with numerical values. These values predict the volume increase for fuel candidates and pure substances that have not been experimentally tested. The newly generated data samples were then integrated into the prediction framework to enhance its capabilities.
The process was divided into the following steps. First, the corresponding values were either retrieved from the literature, as in the case of HSP, or calculated using the DFT approach. This was done for all previously tested fuel candidates for which the volume increase was known from experiments (see Table 3). In a subsequent step, the correlation between these values and the volume increase was learned by a supervised learning regression model similar to the one presented in the prediction framework above. Instead of molecular descriptors, the values resulting from HSP and DFT served as input features and described each substance. Lastly, the trained model was applied to previously unseen fuel candidates. A few fuel candidates were held out to validate the prediction accuracy during data generation.
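The sketch below illustrates this data-generation step, assuming the HSP features ($\delta_D$, $\delta_P$, $\delta_H$, $V_M$) and the experimental volume changes are already available as arrays. The array contents and the model settings are placeholders rather than the configuration reported in Table 5.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Placeholder arrays: rows = substances, columns = (delta_D, delta_P, delta_H, V_M).
rng = np.random.default_rng(1)
X_known = rng.normal(size=(42, 4))    # substances with measured volume change
y_known = rng.normal(size=42)
X_holdout = rng.normal(size=(5, 4))   # held-out substances for validation
y_holdout = rng.normal(size=5)
X_untested = rng.normal(size=(59, 4)) # substances without experimental data

# Train a regression model on the HSP features of the tested substances.
model = make_pipeline(StandardScaler(), MLPRegressor(max_iter=5000))
model.fit(X_known, y_known)

# Validate against the held-out experimental values ...
print("validation R2:", r2_score(y_holdout, model.predict(X_holdout)))

# ... and generate synthetic volume-change labels for the untested substances.
synthetic_labels = model.predict(X_untested)
```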
In the following, the HSP and DFT values considered are introduced. The three HSPs, $\delta_D$, $\delta_P$, and $\delta_H$, and the molar volume $V_M$ of a substance are considered as values for the HSP approach. These values are available from the literature. Next, the DFT computational process is described.
Figure 3 shows the computational process of simulating the NBR/fuel molecule intermolecular distance $d$ and binding enthalpies $\Delta E$ using density functional theory.
Different fuel species were considered in this study. These molecules were selected based on the availability of experimental data from swelling tests at the FSC. For the elastomer species, cis-butadiene-acrylonitrile-trans-butadiene was used as the model polymer unit according to [32] as it includes the different functional groups present in NBR.
After building the single molecules in Avogadro [44], quantum mechanical geometry optimization was performed using ORCA 5.0 [45]. The underlying level of theory is presented later in this section. Then, Mulliken population analysis was performed to determine the charge distribution in the respective molecules, i.e., finding the most positively (and negatively) charged atoms in the molecules based on the assumption that these are the main points of interaction [32]. Based on this analysis, a spatial orientation of an interacting fuel molecule–NBR complex was approximated, and geometry optimization of this complex was performed with the same level of theory. The binding enthalpy $\Delta E$ was determined as the difference between the enthalpy of the fuel molecule–NBR complex and the sum of the enthalpies of the single molecules. The intermolecular distance $d$ can be abstracted from the optimized geometry by measuring the shortest distance between the nitrogen atom in the model NBR molecule and the most positively charged hydrogen atom in the fuel molecule.
All geometry optimizations were performed using the Karlsruhe triple-zeta basis set def2-TZVPP [46] together with the meta-GGA exchange-correlation functional r2SCAN [47]. D3BJ dispersion correction [48] was used to account for medium-range correlation, thus providing higher accuracy. The auxiliary basis set def2/J was used to speed up the calculation through the resolution-of-the-identity approximation of the Coulomb integrals [49]. The underlying level of theory for the geometry optimizations is summarized in Table 2.
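As an illustration of this workflow, the sketch below writes a geometry-optimization input with the level of theory from Table 2 and evaluates the binding enthalpy as the difference between the complex and the isolated molecules. The keyword line follows common ORCA 5 conventions but is not taken from the authors' input files; the frequency calculation and all energies are assumptions for illustration.

```python
def write_orca_input(filename, xyz_block, charge=0, multiplicity=1):
    """Write an ORCA 5 geometry-optimization input using the level of theory
    from Table 2 (r2SCAN-D3BJ / def2-TZVPP with RI-J and the def2/J auxiliary basis).
    Freq is added here as an assumption, to obtain thermochemical corrections."""
    keywords = "! r2SCAN D3BJ def2-TZVPP def2/J Opt Freq"
    with open(filename, "w") as f:
        f.write(keywords + "\n")
        f.write(f"* xyz {charge} {multiplicity}\n")
        f.write(xyz_block.rstrip() + "\n")
        f.write("*\n")

def binding_enthalpy(h_complex, h_nbr, h_fuel):
    """Delta E = H(complex) - [H(NBR model) + H(fuel molecule)]."""
    return h_complex - (h_nbr + h_fuel)

# Placeholder enthalpies in Hartree (illustrative only, not computed values).
delta_e = binding_enthalpy(-543.2101, -310.1204, -233.0855)
print(f"Delta E = {delta_e * 2625.5:.1f} kJ/mol")  # 1 Hartree is about 2625.5 kJ/mol
```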

3. Results

In this section, the results of this study are presented. The first part focuses on the data generation process, detailing the outcomes of the simulations and their reliability. The second part evaluates the predictive framework using experimental data, establishing a baseline performance score. Finally, the third part explores the impact of integrating simulated data with experimental data, assessing how this combined approach influences the predictive accuracy of the framework.

3.1. Simulated Data Results

Table 3 lists all substances experimentally investigated within the FSC and their corresponding HSP and DFT values used as input features. DFT data were calculated for 28 substances, while HSP values were obtained from the literature for 47 substances. The missing DFT results are due to non-converging calculations caused by complex molecular structures. Furthermore, the elastomer volume changes (target) for all of the investigated substances are listed in Table 3.
An initial investigation into the suitability of the gathered values for the data generation process revealed an unfavorable case for the DFT approach. Using the Pearson correlation coefficient to assess the strength of the correlation between individual values and the target variable, volume change, it becomes evident that no significant correlation exists for the DFT values. The calculated intermolecular distance $d$ and binding enthalpy $\Delta E$ yielded correlation scores of 0.0732 and 0.0888, respectively. In contrast, the correlation scores for the HSPs were equal to or greater than 0.2637 for all parameters except $\delta_H$ (see Table 4). However, closer examination of the DFT values reveals that the approach is not without merit, at least for the functional group of alcohols. Among the 28 substances for which DFT values were calculated, 7 belonged to the functional group of alcohols. When considering only these substances, the correlation with the target values was significantly higher, suggesting that the DFT approach may still hold potential for specific functional groups (see Table 4). However, since the approach in this study should apply to substances across all functional groups, the overall low correlation—alongside the limited number of DFT data samples—makes the DFT approach less suitable for further use in the prediction process. Hence, only the HSPs are used to predict the volume change for previously untested substances.
The poor performance of the DFT results could be due to the inherent drawbacks of DFT. While the theory of DFT is exact, the exchange-correlation functional is not known, and hence all DFT functionals are approximations. Also, the KS orbitals resulting from the applied KS-DFT approach do not have any physical meaning, since they are the orbitals of a fictitious system of non-interacting electrons. Thus, it must be carefully evaluated whether DFT results can be used to analyze binding. Higher-accuracy electronic structure methods, like the gold standard CCSD(T), could be used to benchmark the applied DFT method in this case. In addition to the inherent drawbacks of the theory, there might be issues with the applied level of theory. For the basis set, an attempt was made to minimize the basis set superposition error by using a relatively large basis set. However, additional counterpoise correction might be necessary. Conformational sampling and optimization to the nearest minimum could also verify whether the optimized geometry is in fact a global minimum on the potential energy surface. Also, the underlying assumption that the main interaction occurs between the most positively and most negatively charged parts of the molecules might be faulty. Other atoms could be evaluated as possible points of interaction.
The volume change prediction in the data generation process employs supervised regression models, including linear, lasso, neural network, and tree-based regression models. The three HSPs and the molar volume were used as input features for these models. The models were trained on a subset of 42 samples out of the 47 substances listed in Table 3, while the remaining 5 samples were used for model validation against experimentally obtained volume change data. To ensure robust performance, feature scaling, 5-fold cross-validation, and individual hyperparameter tuning were applied to all the models. An initial investigation indicated that the neural network regression model performed best with the settings presented in Table 5. Consequently, only the results of the neural network model are presented in the following sections.
To evaluate the model's predictive accuracy, the coefficient of determination $R^2$ is used as a performance metric. This metric is applied throughout the training process and the final validation of previously unseen data. An $R^2$ value closer to one indicates better model performance. Monitoring training and validation $R^2$ scores is essential for detecting overfitting or underfitting. A high training $R^2$ but significantly lower validation $R^2$ suggests overfitting, where the model memorizes the training data but fails to generalize. Conversely, low $R^2$ values for both indicate underfitting, meaning the model lacks the complexity needed to capture underlying patterns.
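For reference, the coefficient of determination relates the residual sum of squares to the total variance of the measured volume changes:

$R^2 = 1 - \dfrac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$

where $y_i$ are the measured values, $\hat{y}_i$ the corresponding predictions, and $\bar{y}$ the mean of the measured values.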
Figure 4 compares the predicted volume change to the actual values on the left-hand side for the training and validation process. On the right-hand side, the residuals—the difference between the actual and predicted volume change—are plotted against the predicted values. A training $R^2$ score of 0.999 indicates that the model was effectively trained using the available features and samples. The high validation $R^2$ score of 0.832, close to the training score, indicates good predictive accuracy and generalization ability. Both scores are visually represented in the left scatter plot, where the data points closely align with the diagonal line, indicating near-ideal predictions with an $R^2$ value approaching one. Furthermore, the right-hand side of the figure shows that predicted volume changes of up to around 50% exhibit low residuals. Beyond this threshold, the residuals tend to increase, especially for the validation case, although no clear pattern emerges. Figure 5 presents box plots of the residuals for both the training and validation datasets. The distribution of residuals in the training set appears tightly clustered around zero, indicating a well-fitted model. Most predictions deviate by less than ±4 percentage points from the actual values. The validation residuals show a wider spread with a shift to positive values, with the median at 8.6%. This still suggests good generalization with minimal bias. However, it becomes evident that the outlier in the validation set, with an actual volume change of 82% (2-octanone), exhibits the highest prediction error. This is due to a lack of training samples in this range of volume change. The model demonstrates stability and reliable predictive performance within the range of sufficient training data.
Overall, the selected HSP features and the chosen regression model and architecture demonstrate promising results during training and validation. To generate synthetic data that can be incorporated into the existing experimental dataset, the trained model was applied to a set of 59 untested substances. The relevant HSP features were collected for each of these substances, and the volume change was predicted based on those features. A summary of the substances and their corresponding predicted values is provided in Table 6 and Table 7. It is important to note that no data validation is available at this stage, as no experimental investigations have been conducted for these substances yet.
The addition of simulated data more than doubles the sample size, which can enhance the predictive accuracy of the framework presented above without the need for time-consuming immersion tests. Before exploring the impact of an expanded database on prediction accuracy, a baseline performance is established using only the experimental results.

3.2. Baseline Results

To establish a baseline score for comparison and assess whether the newly generated data impact the accuracy of the prediction framework, the model was first trained using only the available experimental data. This section presents the results of the baseline case. For this purpose, four standard regression models are evaluated: linear regression (Linear), lasso regression (Lasso), multilayer perceptron (MLP) regression, and tree-based regression (Tree). All four models are integrated into the prediction framework, with data preprocessing applied uniformly across the entire dataset. Each model is then individually optimized using halving grid search CV to determine the best hyperparameters. Finally, the optimal models are trained and evaluated using 5-fold cross-validation.
The models' performances were evaluated during training and testing using the coefficient of determination $R^2$. Additionally, residual values—the differences between the actual and predicted volume changes—were calculated for each sample in both phases. The distribution of residuals is visualized using a box plot. As the previous section shows, the box plot effectively summarizes other representations, such as scatter plots of actual vs. predicted values or residuals, providing a comprehensive overview of model performance. Hence, this method of presentation is chosen for model comparison. All models are trained using 39 samples, while the remaining 10 are reserved for testing. The optimal hyperparameters, determined through hyperparameter tuning, are presented in Table 8. Since the linear model has no adjustable hyperparameters, it is excluded from the table.
Figure 6 presents box plots of the residuals for all investigated regression models, comparing both training and testing phases. The distribution of residuals provides insight into each model’s predictive performance, with a narrower spread indicating higher accuracy. This figure highlights variations in model stability and generalization ability by visualizing the differences between the actual and predicted volume changes.
The median residuals for each model remain close to zero, indicating minimal systematic bias in the predictions. However, the variation in residual spread suggests that certain models exhibit more stability in their predictions than others. The range of residuals highlights differences in the models' predictive reliability, with some models demonstrating greater variation in errors across the data points, indicating less consistency in their performance. The MLP and tree-based models showed the highest $R^2$ values, with training scores of 0.757 and 0.795, respectively, and testing scores of 0.655 and 0.645, respectively. These results suggest that both models achieve a good fit to the training data and good generalization to unseen test data. However, outliers in the data still affect model performance, as their influence can lead to deviations in both the training and testing results. Despite this, the range of predicted values (minimum and maximum) remained almost consistent across all models, indicating that the models were similarly constrained in the spread of their predictions. In contrast, the linear and lasso models yielded lower $R^2$ values and thus lower predictive accuracy. These models appear to underfit the data, as their relatively simple structure fails to capture the underlying complexity of the relationships in the dataset given the available describing features and number of samples. This results in less accurate predictions on both the training and testing sets. Notably, none of the models achieved sufficiently high training $R^2$ values, implying that the selected molecular descriptors, alongside the limited number of data samples, do not fully capture the underlying patterns in the data. This limitation in feature representation leads to insufficient model training, which in turn adversely affects performance on the test set. Consequently, the linear model's better performance during testing may be attributed to chance rather than a robust generalization ability.
An increase in predictive accuracy is expected with a larger dataset. Therefore, this study explores alternative approaches to experimental testing for generating new data samples. In the following, the database is expanded by incorporating samples whose elastomer volume change has been predicted using the HSP approach and a regression model.

3.3. Expanded Database Results

This section examines the impact of a larger dataset by comparing the prediction accuracy of the expanded and baseline cases. First, the models were evaluated using the same hyperparameters as in the baseline case. In the second step, new hyperparameters optimized for the expanded dataset were applied to assess potential improvements in predictive performance. The total number of samples, including the newly generated data, amounted to 108. However, some outliers were removed due to constraints related to technical plausibility, resulting in a final dataset of 95 samples. As before, 5-fold cross-validation was performed, yielding 76 training samples and 19 testing samples per fold. Figure 7 compares the residual values and $R^2$ scores of all investigated models in the baseline case with those obtained using the expanded database. At this stage, the models in both cases share the same hyperparameters from Table 8.
Expanding the dataset shifts the median residuals closer to zero for most models (linear, lasso, and tree-based) compared to the baseline case. However, across all the models, the $R^2$ scores decrease, except for the tree-based model in training, which improves ($R^2 = 0.795$ for the baseline, $R^2 = 0.954$ for the expanded database). This indicates a reduction in predictive accuracy for the other models. The tree-based model demonstrates greater robustness, maintaining stable or even improved performance. On the other hand, the MLP model exhibits significant instability, characterized by wider residual spreads and even a negative testing $R^2$ score.
The results suggest that while increased data volume can improve model training, it may also introduce complexities that negatively impact certain model architectures, particularly those more sensitive to data distribution changes. Therefore, it is essential to adjust the model architecture, where possible, to one that is optimally suited to the characteristics of the given dataset. Thus, in the following analysis, the models are re-optimized for the dataset by individually tuning their hyperparameters to improve performance and adaptability.
The following section presents the model performance results after re-optimizing the hyperparameters for the extended database case. Since the dataset size nearly doubled, each model’s architecture was individually adjusted to accommodate the increased data complexity. The resulting optimal hyperparameters are summarized in Table 9. Furthermore, the number of input features was increased from five to seven to accommodate the larger dataset better and capture additional patterns in the data.
Figure 8 shows the residual distributions and $R^2$ scores for training and testing across the different models, comparing their performance before and after hyperparameter optimization. While some models showed significant improvements, others experienced a decline in predictive accuracy, indicating that the effects of dataset expansion and parameter tuning varied depending on the model architecture.
For example, the linear model slightly improved, with a lower median residual and reduced spread in the updated configuration. This led to a modest increase in the $R^2$ score, rising from 0.438 to 0.530 in training and from 0.415 to 0.560 in testing. However, the overall performance remained moderate, with $R^2$ values of around 0.5. Since the linear model lacks adjustable hyperparameters, these improvements were primarily attributed to the increased dataset size and additional input features. The lasso regression model showed a clear enhancement in predictive performance. The residual spread was significantly reduced, though some outliers remained present. The $R^2$ score improved considerably, increasing from 0.412 to 0.872 in training and from 0.323 to 0.712 in testing. Additionally, the median residual deviation in testing was close to zero, suggesting a better model fit and improved generalization. The MLP regression model demonstrated the most substantial improvement. The training $R^2$ score rose from 0.119 to 0.891, while the testing $R^2$ improved to 0.408. The residual distribution became more centered, with the median closer to zero, and the overall spread was reduced, particularly in training. However, despite these gains, the testing performance still exhibited a relatively wide residual distribution, indicating persistent instability in generalization. Conversely, the tree-based regression model experienced a decline in performance. While the spread of residuals increased in training, the interquartile range remained nearly unchanged. The training $R^2$ decreased from 0.954 to 0.800, showing fewer signs of overfitting than before. However, testing performance deteriorated, with the residual distribution shifting toward positive values and prediction errors increasing. The testing $R^2$ dropped from 0.556 to 0.343, indicating reduced generalization capability.
The results highlight that while hyperparameter tuning leads to substantial improvements for the MLP and lasso models, the tree-based model showed a decline in predictive performance. The linear model benefited slightly from the increased dataset, whereas the MLP model, despite achieving substantial improvements in training, still faced challenges in stability during testing. These findings emphasize the need for model-specific optimizations and careful consideration of dataset expansion effects when adjusting model architectures. No uniform statement can be made by comparing the prediction accuracy based on the R 2 score and the residual distribution of the old and new architecture.

4. Discussion

In this study, two approaches were investigated for modeling intermolecular interactions between fuel and sealing materials. However, only one approach demonstrated applicability across a wide range of fuel candidates. The decision to focus solely on the HSP approach was based on the Pearson correlation coefficient. As shown in Table 4, the HSP variables exhibited a stronger overall correlation with volume change compared to the DFT variables. Interestingly, when considering only alcohol-based fuels, the correlation score for the DFT variables increased significantly. This finding aligns with the results reported by Yamada et al. [32], which initially motivated the evaluation of the DFT approach. However, a key limitation of the DFT method is its inability to account for functional groups beyond alcohols, restricting its applicability in this study.
A possible explanation for the poor performance of the DFT approach is that it may not adequately capture the relevant binding interactions between the fuel candidate and NBR. For example, in systems containing alcohols, a distinct hydrogen bond forms between the hydrogen atom on the fuel and the nitrogen atom on the NBR. For compounds lacking an OH group, the formation of such bonds is less evident, suggesting that hydrogen bonding is not the predominant interaction driving elastomer volume change in these cases. It is plausible that, for these other molecules, alternative binding sites with more favorable interactions exist but were not identified by the algorithm. This limitation may be partly due to the initial conditions used for geometry optimization. Adjusting these conditions could lead to different binding configurations and potentially improve the accuracy of the DFT approach.
By choosing the HSP approach, the volume change database was extended via a regression model. However, the dataset available for this process was limited. Despite this constraint, the selected model architecture yielded near-optimal prediction accuracy during training. Nonetheless, the slightly lower accuracy observed during validation suggests a tendency toward overfitting, thereby reducing model robustness. When the trained model was applied to previously unseen data, this overfitting introduced additional uncertainty in the predictions. Furthermore, these predictions could not be directly validated against experimental values due to the nature of the process. An analysis of the validation residuals in Figure 4 reveals that, in regions with sufficient training data, most predictions deviated by less than 10 % from the actual values.
Although this study prioritized rapid and reproducible experimental execution, previous investigations in the literature (e.g., ref. [50]) have shown that the extent of elastomer property change is highly sensitive to the experimental setup and manual execution. In this context, the relevant DIN standard [34] recommends fast execution of manual measurements without specifying a precise time frame. Moreover, the high volatility of most substances examined further complicates the acquisition of precise and reproducible measurements, particularly when multiple operators are involved. The resulting specimen-to-specimen measurement error is comparable in magnitude to the model’s predictive uncertainty. For example, in three previous immersion tests, the mean relative deviation was found to be 4.61 % for NBR in ethanol and 12.49 % for NBR in methanol. These considerations therefore underscore the validity of incorporating simulated data into the prediction framework central to this study.
This study investigates the impact of simulated data on the predictive accuracy of volume change. By comparing the prediction accuracy of models trained on the extended database to those trained on the baseline dataset, several key observations can be made. As discussed in the results section, simply increasing the number of data samples does not necessarily improve model performance. In fact, most models—except for the tree-based regression model—exhibit a decline in predictive accuracy when additional data are introduced. This highlights the need to adjust both the model architecture and the number of molecular input descriptors to effectively accommodate the larger dataset. Only through such modifications does prediction accuracy return to baseline levels or, in some cases, surpass previous performance. However, it is important to note that no model, except for the Lasso regression model, demonstrates significant improvement, and testing accuracy is almost always lower than training accuracy.
Two possible explanations for this trend likely contribute simultaneously. The first is the aforementioned increase in uncertainty introduced by the simulated data. The second is the small dataset, despite the addition of new samples. Clear signs of underfitting were evident, which could be attributed to the limited dataset size. While underfitting is expected to decrease as the number of data samples increases, the nearly equal ratio of experimental to simulated data may amplify uncertainty propagation from the simulated data into the model predictions. To mitigate this effect, a lower ratio of simulated to experimental data could be implemented, as explored by Makansi et al. [19].
At this stage, the limited dataset size remains a key factor preventing definitive conclusions about model performance. While trends in predictive accuracy could be observed, the dataset is not yet large enough to allow for a clear and statistically significant comparison between models. Some models show promising behavior, but the variability in results suggests that further data collection through experiments is necessary to fully assess their reliability.

5. Conclusions

Within the context of the fuel design process at the Cluster of Excellence “The Fuel Science Center”, the material compatibility of elastomers and bio-hybrid fuels was investigated. By analyzing changes in elastomer properties after being immersed in bio-hybrid fuels, underlying patterns could be identified. These results enable the development of recommendations for future applications, such as optimizing fuel blend compositions to improve compatibility with existing combustion engine sealing systems.
To reduce the reliance on further time-consuming and costly experimental investigations, a data-driven approach was developed to predict elastomer property changes after fuel immersion. This approach was implemented through a prediction framework that utilizes supervised learning regression models to identify patterns within the available experimental data. The models were trained on molecular fuel parameters alongside corresponding property changes and were subsequently applied to previously untested substances to generate new predictions. However, an initial evaluation of the framework revealed that the limited dataset size led to non-robust predictions and a lack of statistically significant accuracy across the investigated models [9].
To address these limitations, this work explored methods to generate synthetic data for integration into the existing experimental dataset. The data generation strategies focused on modeling intermolecular interactions between fuel candidates or pure fuel constituents and elastomer materials. Two approaches were considered: a detailed density functional theory method and a more abstract approach based on Hansen solubility parameters combined with a regression model. A key objective of this work was to assess the impact of the extended database on the performance of the prediction framework.
Among the two methods, the HSP approach yielded more promising results and was subsequently used to double the number of data samples. Although the data generation process achieved high prediction accuracy, it inherently introduced uncertainty, since the newly generated data could not be directly validated against experimental measurements. The HSP approach was favored due to the widespread availability of HSPs in the literature, whereas DFT calculations are computationally intensive and proved applicable only to substances of the functional group of alcohols. The effects of the extended database on prediction accuracy were evaluated by comparing the baseline case, which contained only experimental data, with the extended dataset. Both the coefficient of determination and the residual distributions during training and testing were analyzed. The findings indicated that simply adding simulated data without adjusting model hyperparameters did not enhance model accuracy or robustness. Only after re-optimizing the hyperparameters to accommodate the larger dataset did the performance reach, or even exceed, the baseline levels. Nevertheless, a significant gap between training and testing accuracy suggests that the models continue to underfit the data. This is likely due to the uncertainty introduced by integrating simulated volume change data in an equal proportion to the experimental data. In combination with the still small dataset, this results in an unfavorable generalization ability. Another, though at this stage less prominent, reason might be that the chosen molecular descriptors do not fully capture the key factors influencing elastomer volume change.
Overall, the developed framework provides a scalable solution, enabling the efficient incorporation of additional data. As the database expands, the robustness of the models is expected to improve, allowing for more precise evaluations and potentially stronger predictive performance in future studies. It is therefore necessary to conduct further experiments to expand the database without introducing uncertainty from simulated data. Also, future research will focus on fine-tuning the balance between experimental and simulated data. Additionally, the potential benefits of directly incorporating HSP values as input parameters into the prediction framework warrant further investigation.

Author Contributions

Conceptualization, L.B. and F.B.-P.; data curation, L.B.; formal analysis, L.B.; funding acquisition, L.B. and K.S.; investigation, L.B., F.B.-P. and L.P.; methodology, L.B. and L.P.; project administration, L.B.; resources, L.B.; software, L.B., F.B.-P. and L.P.; supervision, L.B. and K.S.; validation, L.B.; visualization, L.B.; writing—original draft, L.B., F.B.-P. and L.P.; writing—review and editing, L.B., F.B.-P. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy–Exzellenzcluster 2186 “The Fuel Science Center” ID: 390919832.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Schweidtmann, A.M.; Rittig, J.G.; König, A.; Grohe, M.; Mitsos, A.; Dahmen, M. Graph Neural Networks for Prediction of Fuel Ignition Quality. Energy Fuels 2020, 34, 11395–11407. [Google Scholar] [CrossRef]
  2. Ackermann, P.; Braun, K.E.; Burkardt, P.; Heger, S.; König, A.; Morsch, P.; Lehrheuer, B.; Surger, M.; Völker, S.; Blank, L.M.; et al. Designed to Be Green, Economic, and Efficient: A Ketone-Ester-Alcohol-Alkane Blend for Future Spark-Ignition Engines. ChemSusChem 2021, 14, 5254–5264. [Google Scholar] [CrossRef] [PubMed]
  3. Rittig, J.G.; Ritzert, M.; Schweidtmann, A.M.; Winkler, S.; Weber, J.M.; Morsch, P.; Heufer, K.A.; Grohe, M.; Mitsos, A.; Dahmen, M. Graph machine learning for design of high–octane fuels. AIChE J. 2023, 69, e17971. [Google Scholar] [CrossRef]
  4. Morsch, P.; Döntgen, M.; Heufer, K.A. High- and low-temperature ignition delay time study and modeling efforts on vinyl acetate. Proc. Combust. Inst. 2023, 39, 115–123. [Google Scholar] [CrossRef]
  5. Hofmeister, M.; Fischer, F.J.; Boden, L.; Schmitz, K. Challenges in the Use of Bio-Hybrid Fuels As Drop-in Fuels. In Proceedings of the ASME 2024 18th International Conference on Energy Sustainability collocated with the ASME 2024 Heat Transfer Summer Conference and the ASME 2024 Fluids Engineering Division Summer Meeting, Anaheim, CA, USA, 15–17 July 2024. [Google Scholar] [CrossRef]
  6. Weinebeck, A.; Reinertz, O.; Murrenhoff, H. Boundary Lubrication of Biofuels and Similar Molecules. SAE Int. J. Fuels Lubr. 2017, 10, 645–651. [Google Scholar] [CrossRef]
  7. Heitzig, S.; Murrenhoff, H.; Weinebeck, A. Investigation of fluid-seal interaction and their prediction based on the Hansen Parameters. O + P Fluidtechnik Maschinen-und Anlagenbau 2015, 59, 26–34. [Google Scholar]
  8. Hofmeister, M.; Schmitz, K.; Laker, J.; Pischinger, S.; Fischer, M. Neue Herausforderungen an Dichtungswerkstoffe im Hinblick auf bio-hybride Kraftstoffe. Mobility 2022, 7, 44–48. [Google Scholar]
  9. Boden, L.; Hofmeister, M.; Brumand-Poor, F.; Pleninger, L.; Schmitz, K. Predicting Compatibility of Sealing Material with Bio-Hybrid Fuels: Development and Comparison of Machine Learning Methods. In Proceedings of the 22nd International Sealing Conference, Stuttgart, Germany, 1–2 October 2024. [Google Scholar] [CrossRef]
  10. Hansen, C.M. Hansen Solubility Parameters: A User’s Handbook, 2nd ed.; Taylor & Francis: Boca Raton, FL, USA, 2007. [Google Scholar]
  11. Brumand-Poor, F.; Bauer, N.; Plückhahn, N.; Schmitz, K. Fast Computation of Lubricated Contacts: A Physics-Informed Deep Learning Approach. Int. J. Fluid Power 2024, 19, 1–12. [Google Scholar]
  12. Brumand-Poor, F.; Bauer, N.; Plückhahn, N.; Thebelt, M.; Woyda, S.; Schmitz, K. Extrapolation of Hydrodynamic Pressure in Lubricated Contacts: A Novel Multi-Case Physics-Informed Neural Network Framework. Lubricants 2024, 12, 122. [Google Scholar] [CrossRef]
  13. Brumand-Poor, F.; Rom, M.; Plückhahn, N.; Schmitz, K. Physics-Informed Deep Learning for Lubricated Contacts with Surface Roughness as Parameter. Tribol. Und. Schmier. 2024, 71, 26–33. [Google Scholar] [CrossRef]
  14. Brumand-Poor, F.; Barlog, F.; Plückhahn, N.; Thebelt, M.; Schmitz, K. Advancing Lubrication Calculation: A Physics-Informed Neural Network Framework for Transient Effects and Cavitation Phenomena in Reciprocating Seals. In Proceedings of the 22nd International Sealing Conference, Stuttgart, Germany, 1–2 October 2024. [Google Scholar] [CrossRef]
  15. Brumand-Poor, F.; Barlog, F.; Plückhahn, N.; Thebelt, M.; Bauer, N.; Schmitz, K. Physics-Informed Neural Networks for the Reynolds Equation with Transient Cavitation Modeling. Lubricants 2024, 12, 365. [Google Scholar] [CrossRef]
  16. Brumand-Poor, F.; Azanledji, F.K.; Plückhahn, N.; Barlog, F.; Boden, L.; Schmitz, K. Extrapolation of cavitation and hydrodynamic pressure in lubricated contacts: A physics-informed neural network approach. Adv. Model. Simul. Eng. Sci. 2025, 12, 2. [Google Scholar] [CrossRef]
  17. Duensing, Y.; Rodas Rivas, A.; Schmitz, K. Machine Learning for failure mode detection in mobile machinery. In 11. Kolloquium Mobilhydraulik, Karlsruhe, Germany, 10 September 2020; Geimer, M., Synek, P.M., Eds.; KIT Scientific Publishing: Karlsruhe, Germany, 2020. [Google Scholar]
  18. Makansi, F.; Schmitz, K. Simulation-Based Data Sampling for Condition Monitoring of Fluid Power Drives. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1097, 012018. [Google Scholar] [CrossRef]
  19. Makansi, F.; Schmitz, K. Fault Detection and Diagnosis for a Hydraulic Press by Use of a Mixed Domain Database. In Proceedings of the BATH/ASME 2022 Symposium on Fluid Power and Motion Control, Bath, UK, 14–16 September 2022. [Google Scholar] [CrossRef]
  20. Beerbower, A.; Pattison, D.A.; Staffin, G.D. Predicting Elastomer-Fluid Compatibility for Hydraulic Systems. Rubber Chem. Technol. 1964, 37, 246–260. [Google Scholar] [CrossRef]
  21. Panofen, M.; Ackermann, P.; Viell, J.; Mitsos, A.; Dahmen, M. Uncertainty Quantification in Integrated Fuel and Process Design. Energy Fuels 2024, 38, 14743–14756. [Google Scholar] [CrossRef]
  22. Kosir, S.; Heyne, J.; Graham, J. A machine learning framework for drop-in volume swell characteristics of sustainable aviation fuel. Fuel 2020, 274, 117832. [Google Scholar] [CrossRef]
  23. Hildebrand, J.H.; Scott, R.L. The Solubility of Nonelectrolytes, 3rd ed.; Dover Books on Chemistry and Physical Chemistry; Dover Publications: New York, NY, USA, 1964. [Google Scholar]
  24. Reinhold, J. Quantentheorie der Moleküle: Eine Einführung, 5th ed.; Studienbücher Chemie; Springer Spektrum: Wiesbaden, Germany, 2015. [Google Scholar] [CrossRef]
  25. Hohenberg, P.; Kohn, W. Inhomogeneous Electron Gas. Phys. Rev. 1964, 136, B864–B871. [Google Scholar] [CrossRef]
  26. Kohn, W.; Sham, L.J. Self-Consistent Equations Including Exchange and Correlation Effects. Phys. Rev. 1965, 140, A1133–A1138. [Google Scholar] [CrossRef]
  27. Perdew, J.P. Jacob’s ladder of density functional approximations for the exchange-correlation energy. In Proceedings of the AIP Conference Proceedings, AIP, Antwerp, Belgium, 8–10 June 2000; pp. 1–20. [Google Scholar] [CrossRef]
  28. Miranda-Quintana, R.A.; Heidar-Zadeh, F.; Fias, S.; Chapman, A.E.A.; Liu, S.; Morell, C.; Gómez, T.; Cárdenas, C.; Ayers, P.W. Molecular Interactions From the Density Functional Theory for Chemical Reactivity: The Interaction Energy Between Two-Reagents. Front. Chem. 2022, 10, 906674. [Google Scholar] [CrossRef]
  29. Geerlings, P.; de Proft, F.; Langenaeker, W. Conceptual density functional theory. Chem. Rev. 2003, 103, 1793–1873. [Google Scholar] [CrossRef]
  30. Morell, C.; Grand, A.; Toro-Labbé, A. New dual descriptor for chemical reactivity. J. Phys. Chem. A 2005, 109, 205–212. [Google Scholar] [CrossRef] [PubMed]
  31. Podeszwa, R.; Szalewicz, K. Communication: Density functional theory overcomes the failure of predicting intermolecular interaction energies. J. Chem. Phys. 2012, 136, 161102. [Google Scholar] [CrossRef] [PubMed]
  32. Yamada, T.; Graham, J.L.; Minus, D.K. Density Functional Theory Investigation of the Interaction between Nitrile Rubber and Fuel Species. Energy Fuels 2009, 23, 443–450. [Google Scholar] [CrossRef]
  33. Wu, Q.Y.; Chen, X.N.; Wan, L.S.; Xu, Z.K. Interactions between polyacrylonitrile and solvents: Density functional theory study and two-dimensional infrared correlation analysis. J. Phys. Chem. B 2012, 116, 8321–8330. [Google Scholar] [CrossRef]
  34. DIN ISO 1817:2016-11; Rubber Vulcanized or Thermoplastic-Determination of the Effect of Liquids. Deutsches Institut für Normung e.V.: Berlin, Germany, 2016.
  35. DIN ISO 13226:2021-06; Elastomere-Standard-Referenz-Elastomere (SREs) zur Charakterisierung des Verhaltens von Flüssigkeiten auf Elastomere. Deutsches Institut für Normung e.V.: Berlin, Germany, 2021.
  36. DIN 53521:1987-11; Prüfung von Kautschuk und Elastomeren-Bestimmung des Verhaltens gegen Flüssigkeiten, Dämpfe und Gase. Deutsches Institut für Normung e.V.: Berlin, Germany, 1987.
  37. Landrum, G.; Tosco, P.; Kelley, B.; Rodriguez, R.; Cosgrove, D.; Vianello, R.; Sriniker; Gedeck, P.; Jones, G.; NadineSchneider; et al. rdkit/rdkit: 2024_09_4 (Q3 2024) Release. 2024. Available online: https://zenodo.org/records/14535873 (accessed on 20 September 2024).
  38. Moriwaki, H.; Tian, Y.S.; Kawashita, N.; Takagi, T. Mordred: A molecular descriptor calculator. J. Cheminform. 2018, 10, 4. [Google Scholar] [CrossRef]
  39. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  40. The Pandas Development Team. Pandas-Dev/Pandas: Pandas. Zenodo. 2024. Available online: https://zenodo.org/records/13819579 (accessed on 20 September 2024).
  41. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
  42. Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95. [Google Scholar] [CrossRef]
  43. Kim, S.; Chen, J.; Cheng, T.; Gindulyte, A.; He, J.; He, S.; Li, Q.; Shoemaker, B.A.; Thiessen, P.A.; Yu, B.; et al. PubChem 2023 update. Nucleic Acids Res. 2023, 51, D1373–D1380. [Google Scholar] [CrossRef]
  44. Hanwell, M.D.; Curtis, D.E.; Lonie, D.C.; Vandermeersch, T.; Zurek, E.; Hutchison, G.R. Avogadro: An advanced semantic chemical editor, visualization, and analysis platform. J. Cheminform. 2012, 4, 17. [Google Scholar] [CrossRef]
  45. Neese, F.; Wennmohs, F.; Becker, U.; Riplinger, C. The ORCA quantum chemistry program package. J. Chem. Phys. 2020, 152, 224108. [Google Scholar] [CrossRef] [PubMed]
  46. Weigend, F.; Ahlrichs, R. Balanced basis sets of split valence, triple zeta valence and quadruple zeta valence quality for H to Rn: Design and assessment of accuracy. Phys. Chem. Chem. Phys. PCCP 2005, 7, 3297–3305. [Google Scholar] [CrossRef]
  47. Furness, J.W.; Kaplan, A.D.; Ning, J.; Perdew, J.P.; Sun, J. Accurate and Numerically Efficient r2SCAN Meta-Generalized Gradient Approximation. J. Phys. Chem. Lett. 2020, 11, 8208–8215. [Google Scholar] [CrossRef]
  48. Grimme, S.; Ehrlich, S.; Goerigk, L. Effect of the damping function in dispersion corrected density functional theory. J. Comput. Chem. 2011, 32, 1456–1465. [Google Scholar] [CrossRef]
  49. Weigend, F. Accurate Coulomb-fitting basis sets for H to Rn. Phys. Chem. Chem. Phys. PCCP 2006, 8, 1057–1065. [Google Scholar] [CrossRef]
  50. Flórez, A.; Burghardt, G.; Jacobs, G. Influencing factors for static immersion tests of compatibility between elastomeric materials and lubricants. Polym. Test. 2016, 49, 8–14. [Google Scholar] [CrossRef]
Figure 1. Structure of this work.
Figure 2. Change in IRHD hardness over change in volume for investigated substances after immersion.
Figure 3. Flowchart of the computational process and utilized software to calculate intermolecular distance d and binding enthalpy ΔE via quantum mechanical geometry optimization using DFT.
Figure 4. Comparison of predicted and actual volume change (left). Residual values against predicted volume change (right).
Figure 5. Box plot diagrams of residual values of training and validation processes.
Figure 6. Box plot diagrams of all models.
Figure 7. Box plot residual comparison of baseline (BL) with extended database (EDB) case across all models.
Figure 8. Box plot residual comparison of extended database (EDB) case with old and new hyperparameters (NEW).
Table 1. List of utilized libraries.
Library | Use Case
Pandas [40]
  • Data manipulation and analysis
  • Handling structured data (e.g., tables, spreadsheets)
Numpy [41]
  • Numerical computing and array processing
  • Support for multi-dimensional arrays and matrices
Matplotlib  [42]
  • Data visualization and plotting
Scikit-learn [39]
  • Machine learning and statistical modeling
  • Tools for regression and dimensionality reduction
  • Preprocessing and evaluation of models
PubChemPy [43]
  • Interfacing with the PubChem database for chemical information
  • Fetching compound details, names, and SMILES
RDKit [37]
  • Cheminformatics and molecular modeling
  • Generating molecular descriptors and 3D conformers
  • Substructure searching and visualization of molecular structures
Mordred [38]
  • Calculating molecular descriptors for cheminformatics
  • Supports a wide range of descriptor types (e.g., physical, chemical, topological)
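To illustrate how the libraries in Table 1 interact, the following sketch resolves a substance name to a SMILES string via PubChemPy [43], builds an RDKit molecule [37], and computes Mordred descriptors [38]. The exact call sequence is an assumption for illustration and not necessarily the authors' production pipeline.

```python
import pubchempy as pcp
from rdkit import Chem
from mordred import Calculator, descriptors

# Resolve a substance name to a SMILES string via the PubChem database [43].
compound = pcp.get_compounds("ethanol", "name")[0]

# Build an RDKit molecule [37] and compute 2D Mordred descriptors [38].
mol = Chem.MolFromSmiles(compound.canonical_smiles)
calc = Calculator(descriptors, ignore_3D=True)
features = calc.pandas([mol])  # one-row pandas DataFrame of descriptor values

print(features.shape)
```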
Table 2. Level of theory used for the DFT geometry optimization.
Functional | Basis Set | Coulomb Approx. | Dispersion Corr.
r2SCAN | def2-TZVPP | def2/J | D3BJ
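A minimal, illustrative way to encode the level of theory from Table 2 is to generate the corresponding ORCA [45] input from Python. The keyword line, file names, and charge/multiplicity below are assumptions based on Table 2 and common ORCA usage, not an excerpt from the study's input files.

```python
from pathlib import Path

# Assemble a minimal ORCA [45] input reflecting Table 2: r2SCAN functional,
# def2-TZVPP basis, def2/J Coulomb-fitting basis, and D3BJ dispersion correction.
orca_input = """! r2SCAN D3BJ def2-TZVPP def2/J TightSCF Opt
* xyzfile 0 1 fuel_elastomer_pair.xyz
"""
Path("geo_opt.inp").write_text(orca_input)
```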
Table 3. Substance data with DFT and HSP features and corresponding target values.
Name | CAS | d [Å] (DFT) | ΔE [Eh] (DFT) | δD [MPa^1/2] (HSP) | δP [MPa^1/2] (HSP) | δH [MPa^1/2] (HSP) | VM [cm³/mol] (HSP) | ΔV [%] (target)
1,3-Dioxolane646-06-02.593−0.009918.16.69.369.9199.20
Di-n-butyl ether142-96-13.194−0.008615.23.44.2170.3 28.30
Isopropanol67-63-02.046−0.012015.86.116.476.815.20
Dimethoxyethane109-87-52.684−0.0105151.88.6169.4102.10
Ethanol64-17-52.094−0.010315.88.819.458.515.45
Methanol67-56-12.067−0.009815.112.322.340.79.68
Cyclopentane287-92-32.923−0.002116.401.894.93.81
Cyclopentanone120-92-32.849−0.006517.911.95.289.1197.14
Butyl alcohol71-36-31.989−0.013415.85.714.5 92148.73
Acetophenone98-86-22.841−0.011119.68.63.7117.4182.43
Propylene carbonate108-32-73.03−0.011920184.185.240.60
E-caprolactone502-44-32.644−0.008719.7157.4110.8145.15
Bromobenzene108-86-12.568−0.009820.55.54.1105.3182.40
2-Methylfuran534-22-52.803−0.007617.32.87.489.7102.47
Heptane142-82-55.344−0.006315.300147.410.52
n-Butanal123-72-83.592−0.006615.610.16.290.561.61
n-Pentanal110-62-32.74−0.007715.79.45.8106.440.90
Acetone67-64-12.893−0.005915.510.477489.60
2-Butanone78-93-34.171−0.01171695.190.2117.99
Methyl isobutyl ketone108-10-13.819−0.007115.36.14.1125.8108.30
Isopropyl methyl ketone563-80-42.607−0.00977.234107105.90
Diisopropyl ketone565-80-03.273−0.0111----136.40
1-Decanol112-30-11.985−0.0124164.710191.813.64
Benzyl alcohol100-51-61.9770.1218 18.46.313.7103.6134.70
Tert-butanol75-65-02.059−0.012315.25.114.795.824.74
2-Methoxyethanol109-86-42.024−0.0136168.21579.340.79
2-Chlorophenol95-57-81.965−0.018620.35.513.9102.3243.04
Cyclohexanol108-93-02.359−0.011117.44.113.510634.34
1-Hexanol111-27-3--15.95.812.5124.920.03
1-Octanol111-87-5--16511.9157.717.70
1-Octene111-66-0--15.312.415813.26
2,2,4-Trimethylpentane540-84-1--14.100166.110.53
2-Methyl tetrahydrofuran96-47-9--16.954.3100.2128.02
2-Octanol123-96-6--16.14.911159.114.96
Benzene71-43-2--18.4 0289.4124.33
Cyclohexanone108-94-1--17.86.35.1104178.20
Decane124-18-5--15.700195.97.92
Dodecane112-40-3--1600228.68.30
Dodecanol112-53-8--1649.3224.512.11
Ethyl acetate141-78-6--15.85.37.2 98.5122.60
Ethylbenzene100-41-4--17.80.61.4123.1113.29
Hexadecane544-76-3--16.300294.15.32
Hexanal66-25-1--15.88.55.4120.223.49
Octane111-65-9--15.500163.511.10
Toluene108-88-3--181.42106.8110.60
Cyclohexane110-82-7--16.800.2108.729.84
Diethyl ketone96-22-0--15.87.64.7 106.4116.56
2-Octanone111-13-7--7.40003.5000 1.9157.30.823697
Table 4. Pearson correlation of DFT and HSP values with change in volume.
Feature | d (DFT, n = 28) | ΔE (DFT, n = 28) | δD (HSP, n = 46) | δP (HSP, n = 46) | δH (HSP, n = 46) | VM (HSP, n = 46)
Pearson correlation r | 0.0732 | 0.0888 | 0.3690 | 0.2637 | 0.0030 | 0.4026
Alcohols only (n = 7) | 0.6883 | 0.5791 | - | - | - | -
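The correlations in Table 4 can be reproduced with pandas once the substance data of Table 3 are available as a data frame. The file and column names below are placeholders, not the study's actual files.

```python
import pandas as pd

# Recompute feature-target correlations in the style of Table 4.
df = pd.read_csv("substance_data.csv")
for feature in ["d", "dE", "deltaD", "deltaP", "deltaH", "V_M"]:
    r = df[feature].corr(df["dV"])  # Pearson correlation; missing values are ignored pairwise
    print(f"{feature:>6s}: r = {r:+.4f}")
```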
Table 5. Best hyperparameters for the MLP regression model.
Hyperparameter | Value
Activation function | tanh
Solver | lbfgs
Regularization α | 0.01
Number of hidden layers | 1
Size of hidden layers | 8
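In scikit-learn [39], the configuration of Table 5 corresponds to the following estimator; max_iter and random_state are assumed convenience settings not listed in the table.

```python
from sklearn.neural_network import MLPRegressor

# MLP regression model configured with the best hyperparameters of Table 5:
# one hidden layer with eight neurons, tanh activation, L-BFGS solver, alpha = 0.01.
mlp = MLPRegressor(hidden_layer_sizes=(8,),
                   activation="tanh",
                   solver="lbfgs",
                   alpha=0.01,
                   max_iter=5000,
                   random_state=0)
# mlp.fit(X_train, y_train) would train on the scaled descriptor/HSP features.
```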
Table 6. Collection of substances with predicted volume change.
Name | CAS | δD [MPa^1/2] | δP [MPa^1/2] | δH [MPa^1/2] | VM [cm³/mol] | ΔV_pred [%]
1,4-Dioxane123-91-117.5 1.8985.785.33
1-Nitropropane108-03-216.6 12.35.589.5108.14
1-Pentanol71-41-015.95.9 13.9108.616.45
1-Propanol71-23-8166.817.475.17.97
2-Butanol78-92-215.85.714.59225.85
2-Phenoxy ethanol122-99-617.85.714.3124.758.69
Acetonitrile75-05-815.3 186.152.9103.39
Amyl acetate628-63-715.83.36.114866.19
Benzyl benzoate120-51-4205.15.2190.3106.41
Butyl Benzoate136-60-718.35.65.5178.176.37
Butyl diglycol acetate124-17-4164.18.2 208.211.71
Butyl glycol acetate112-07-215.37.56.8171.22.12
Chloroform67-66-317.83.15.780.5139.44
Diacetone alcohol123-42-215.88.210.8124.311.55
Diethyl ether60-29-714.52.9 4.6104.780.49
Diethylene glycol monomethyl ether111-77-316.27.812.6118.210.61
Di-isobutyl ketone108-83-8163.74.1177.433.78
Dimethyl formamide (DMF)68-12-217.413.711.377.4111.89
Dimethyl sulfoxide (DMSO)67-68-518.416.410.271.3157.91
Dipropylene glycol methyl ether112-28-715.55.711.2156.118.10
D-Limonene5989-27-517.21.84.3162.965.92
Ethyl lactate97-64-3167.612.511512.22
Ethylene carbonate96-49-11821.75.166155.83
Ethylene glycol monobutyl ether111-76-2165.112.313223.19
Gamma-butyrolactone (GBL)96-48-01816.67.476.5153.86
Glycerol carbonate931-40-817.925.517.483.2 66.08
Hexane110-54-314.900131.42.76
Isoamyl acetate123-92-215.33.17150.277.93
Isoamyl alcohol (3-methyl-1-butanol)123-51-315.85.213.3109.328.62
Isobutyl alcohol78-83-115.15.715.992.914.61
Isobutyl isobutyrate97-85-815.1 2.85.8169.862.45
Isophorone78-59-11785150.356.78
Isopropyl acetate108-21-414.94.58.2117.190.51
Isopropyl ether108-20-315.13.23.2141.852.29
M-cresol108-39-418.5 6.513.7105135.73
Methyl acetate79-20-915.57.27.679.8113.65
Table 7. Continuation of Table 6.
Name | CAS | δD [MPa^1/2] | δP [MPa^1/2] | δH [MPa^1/2] | VM [cm³/mol] | ΔV_pred [%]
Methyl cyclohexane108-87-21601128.27.76
Methyl ethyl ketone (MEK)78-93-31695.190.2105.02
Methyl isoamyl ketone110-12-3165.74.1141.365.96
Methylisobutyl carbinol108-11-215.43.312.3127.255.44
Methyln-propyl ketone107-87-9167.64.7107.393.98
Methylene dichloride (dichloromethane)75-09-2177.37.164.4182.18
N,N-dimethyl acetamide127-19-516.811.510.29380.03
N-butyl acetate123-86-415.83.76.3132.677.78
N-butyl propionate590-01-215.75.55.9149.347.53
N-methyl-2-pyrrolidone (NMP)872-50-41812.37.296.6152.42
N-propyl acetate109-60-415.34.37.6115.8 92.02
N-propyl propanoate106-36-515.55.65.7132.571.56
Propylene glycol monobutyl ether5131-66-815.34.59.213269.61
Propylene glycol monoethyl ether acetate54839-24-615.66.37.7 155.120.91
Propylene glycol monomethyl ether107-98-215.66.311.698.248.11
Propylene glycol monomethyl ether acetate108-65-615.65.69.8137.139.25
Propylene glycol monophenyl ether770-35-417.45.311.5143.248.62
P-xylene106-42-317.613.1 123.996.26
Sec-Butyl Acenineate105-46-4153.77.613487.88
Sulfolane (tetramethylene sulfone)126-33-018189.995.3100.59
T-butyl acetate540-88-5153.76134.882.23
Tetrahydrofuran (THF)109-99-916.85.7881.9 141.98
Tetrahydrofurfuryl alcohol97-99-417.88.212.997.4110.36
Table 8. Summary of optimal hyperparameters for the baseline investigation.
Model | Optimal hyperparameters (baseline)
Lasso | alpha: 0.1; tolerance: 0.01
MLP | alpha: 0.0001; solver: adam; activation: tanh; hidden layer sizes: (2,); learning rate init: 0.1; max iterations: 3000
Tree-based | max depth: 16; max features: log2; min samples leaf: 2; min samples split: 2
Table 9. Summary of optimal hyperparameters for the extended database investigation.
Model | Optimal hyperparameters (extended database)
Lasso | alpha: 1; tolerance: 0.0001
MLP | alpha: 0.001; solver: adam; activation: tanh; hidden layer sizes: (2,); learning rate init: 0.1; max iterations: 10,000
Tree-based | max depth: 64; max features: sqrt; min samples leaf: 8; min samples split: 2
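The hyperparameter re-optimization behind Tables 8 and 9 can be sketched with a scikit-learn grid search over the extended dataset. The grid below merely brackets the reported optima, and X_ext/y_ext are placeholders for the combined experimental and HSP-generated data.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Sketch of re-optimizing the MLP hyperparameters on the extended database.
param_grid = {
    "alpha": [1e-4, 1e-3, 1e-2],
    "activation": ["tanh", "relu"],
    "solver": ["adam", "lbfgs"],
    "hidden_layer_sizes": [(2,), (8,), (16,)],
}
search = GridSearchCV(MLPRegressor(max_iter=10_000, random_state=0),
                      param_grid, cv=5, scoring="r2")
# search.fit(X_ext, y_ext); search.best_params_ would replace the baseline settings of Table 8.
```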
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
