Article

Hybrid Multimodal Surrogate Modeling and Uncertainty-Aware Co-Design for L-PBF Ti-6Al-4V with Nanomaterials-Informed Morphology Proxies

School of Mechanical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Nanomaterials 2026, 16(8), 447; https://doi.org/10.3390/nano16080447
Submission received: 25 February 2026 / Revised: 30 March 2026 / Accepted: 1 April 2026 / Published: 8 April 2026
(This article belongs to the Section Nanofabrication and Nanomanufacturing)

Abstract

Reliable property prediction and process selection in laser powder bed fusion are hindered by small, set-level datasets in which key morphology descriptors are intermittently missing, limiting both generalization and actionable co-design. A hybrid multimodal surrogate strategy is introduced that couples engineered process physics features with morphology proxies through a deployable two-stage embedding module and gradient-boosted tree regressors. Set-resolved inputs are assembled from L-PBF parameters, linear energy density and related energy-density variants, pore and prior-β grain summary statistics, and stress–strain-derived descriptors, followed by missingness-aware feature filtering, median imputation, and 5-fold GroupKFold evaluation grouped by set_id, with morphology embeddings learned on training folds and predicted when absent. Across six targets, the final deployable models achieve an RMSE/R2 of 11.07 MPa/0.895 (yield), 13.88 MPa/0.873 (UTS), 0.677%/0.861 (elongation), and 2.38 GPa/0.663 (modulus), while roughness and hardness remain challenging (RMSE 2.31 μm and 16.54 HV; R2 about 0.12 and 0.11). These surrogates enable constraint-aware candidate generation that identifies a concise set of manufacturing recipes balancing strength and surface objectives under uncertainty-aware screening. The resulting framework provides a practical blueprint for multimodal, small-data additive manufacturing studies and can be extended to richer microstructure measurements and prospective validation to accelerate functional and biomedical alloy development.

1. Introduction

The laser powder bed fusion of Ti-6Al-4V enables complex geometries and functional surfaces, but reliable qualification and rapid recipe development are still constrained by the difficulty of predicting performance from process settings alone [1,2,3]. The underlying scientific challenge is the coupled process–structure–property pathway, where process variables and engineered energy-input descriptors act as proxies for thermal histories that shape defect populations and microstructural evolution, and these latent states mediate mechanical and surface responses [4,5,6,7,8,9]. A practical solution is to build recipe-level surrogate models that learn this mapping using a consistent experimental unit and statistically defensible evaluation while integrating morphology and proxy information when available and quantifying uncertainty for decision-making [10,11]. In this study, the learning problem is explicitly defined at the set level (set_id), and the dataset is treated as intrinsically multimodal by combining process and engineered physics features with pore, prior-β grain, and stress–strain-derived proxy blocks where available.
Research directly aligned with deployable, recipe-level, multimodal property prediction remains limited, and much of the literature addresses narrower subproblems. Early representative work focused on predicting porosity from process parameters using statistical models with Bayesian inference (for example, Tapia and Elwany, 2015) [12]. Subsequent work developed Gaussian process formulations for porosity prediction as a function of process parameters, including spatial Gaussian process regression variants [13]. A widely used modern pattern for efficient exploration is to couple Gaussian process regression with Bayesian optimization to guide sampling of the L-PBF process space, including demonstrations on Ti-6Al-4V that combine rapid characterization with model-guided discovery of processing domains [14]. In parallel, the community continues to debate the reliability of scalar energy-density compressions as general design variables, and recent reviews emphasize that volumetric energy density can correlate with outcomes in restricted settings while failing as a universal predictor when different parameter combinations yield different melt-pool regimes [15,16]. These directions are informative, but they do not by themselves resolve the deployment gap created by incomplete morphology availability, small recipe counts, and the need for uncertainty-aware co-design across multiple targets.
The present work addresses the deployment gap by structuring the entire pipeline around recipe-level learning, leakage control, modality heterogeneity, and decision-time constraints, rather than assuming retrospective access to all measurements. First, each model is trained and evaluated under grouped cross-validation by set_id so that no set contributes to both training and test partitions within a fold, and every data-dependent transformation is fit on the training portion only and applied unchanged to the held-out portion. Second, multimodality is introduced through explicit set-aligned feature blocks, assembled by merging process and engineered physics descriptors with pooled pore, grain, and stress–strain-derived descriptor blocks, with missingness tracked as a structural property of the dataset rather than treated as an error. Third, the key methodological distinction relative to many morphology-augmented surrogates is the separation between an oracle regime and a deployable regime: morphology proxies are treated as informative but incompletely observed, so modality embeddings are constructed within training folds and then used either as oracle embeddings when present or as predicted embeddings inferred from process features when morphology is absent. This design explicitly avoids reporting oracle-only improvements as deployable gains and introduces an embedding predictability diagnostic as a necessary condition for deployable multimodal inference. Finally, the surrogate stack is integrated into a decision layer that uses fold ensembles to produce prediction means and dispersion estimates and applies conservative lower-confidence bounds for candidate screening and ranking under constraints.
This study seeks to establish a deployable surrogate workflow that is stable under small sample sizes and partially observed modalities and that supports downstream design decisions without assuming unavailable measurements. Concretely, the objectives are to obtain target-wise, fold-aggregated performance under the fixed GroupKFold protocol, to quantify the oracle versus deployable gap for morphology information, and to determine which modality embeddings are predictable from process variables with non-trivial fidelity using out-of-fold diagnostics. In addition, the study aims to produce a reproducible co-design procedure that generates a finite candidate pool within an observed process envelope, computes engineered physics features consistently, propagates predicted embeddings when required, and evaluates feasibility and ranking under uncertainty-aware constraints using a fixed candidate budget.
The significance is that the work provides a deployment-aligned template for recipe-level learning and uncertainty-aware co-design in L-PBF Ti-6Al-4V, where conclusions depend on strict leakage control, explicit treatment of missing modalities, and separation of scientific upper bounds from decision-time feasibility [17,18]. The oracle versus deployable comparison yields an interpretable bound on the value of morphology information and clarifies when additional characterization is necessary versus when morphology can be treated as a latent state recoverable from process descriptors for prospective screening [19,20,21]. By incorporating uncertainty into constraint satisfaction and ranking, the framework reduces the risk of brittle recipe selection and supports multi-objective tuning for applications in which bulk mechanical requirements must coexist with surface-relevant constraints. Although developed for L-PBF Ti-6Al-4V, the present framework is transferable to other process–structure–property systems with sparse, partially observed multimodal data, provided that system-specific descriptors and target variables are reformulated and the models are retrained and prospectively validated in the new domain.

2. Materials, Data Sources, and Preprocessing

A recipe-level dataset for L-PBF Ti-6Al-4V is assembled such that each manufacturing condition is treated as a single independent set (set_id, 1–42), and all endpoints are represented at this set level using arithmetic means consistent with the database convention. Multimodal descriptors (process/engineered physics and morphology/microstructure proxies) are aligned to set_id, while non-uniform modality/label availability is treated as intrinsic and managed through explicit missing-data handling and grouped cross-validation with fold-local preprocessing to prevent leakage, as provided in Appendix A.
All analyses were conducted in a Conda-managed Python 3.11.15 environment on Windows, using NumPy 2.4.3, pandas 3.0.1, SciPy 1.17.1, scikit-learn 1.8.0, statsmodels 0.14.6, matplotlib 3.10.8, seaborn 0.13.2, openpyxl 3.1.5, and pyarrow 23.0.1.
Figure 1 summarizes the leakage-safe data construction (Figure 1a) and provides a schematic of the deployable surrogate and uncertainty-aware co-design loop evaluated in later sections (Figure 1b,c).

2.1. Study Design and Set-Level Unit of Analysis (set_id)

The analytical unit is the set, indexed by set_id (range 1–42), where each set corresponds to a unique L-PBF manufacturing condition (recipe). All observations associated with the same set_id are treated as non-independent; this design prevents pseudoreplication arising from within-condition dependence when multiple specimens or repeated measurements exist for a given recipe.
All endpoints are represented at the set level. Mechanical and surface responses are collapsed to a single set-level target value using the arithmetic mean consistent with the database convention (suffix “__mean”; target aggregation defined in Section 2.3). Target completeness differs across endpoints, with roughness exhibiting partial label availability relative to the mechanical properties, motivating an explicit missing-data policy and fold construction that preserve the set_id grouping (Section 2.5 and Section 2.6; Figure 2c).
Multimodal descriptors are aligned to set_id prior to modeling. The feature space comprises (i) process inputs and engineered physics features and (ii) morphology and microstructure proxies derived from object-level pore and prior-β grain descriptors and, when available, stress–strain-derived descriptors. Within-set heterogeneity is not discarded; object-level distributions are represented through pooled moments and quantiles to retain dispersion and tail behavior in the input space (Section 2.4; Figure 2e,f). Leakage control is implemented procedurally by grouped cross-validation, where train/test splits are performed by set_id so that no set contributes to both training and evaluation within a fold (Section 2.6).

2.2. L-PBF Process Parameter Space and Engineered Physics Features

The primary covariates include L-PBF process parameters defining the processing window (for example, laser power P, scan speed v, hatch spacing h, and layer thickness t, where available). To provide compact, physically motivated representations of the process space, energy-density variants are constructed in addition to the raw parameters. In particular, the line energy density (LED) is defined as
$\mathrm{LED} = \dfrac{P}{v}$,
where P denotes laser power, and v denotes scan speed. When geometric terms are available, a volumetric energy-density variant consistent with standard L-PBF practice is included,
$\mathrm{VED} = \dfrac{P}{v\,h\,t}$,
where h is hatch spacing, and t is layer thickness. These descriptors are treated as nominal energy-input proxies that support comparisons across parameterizations and complement the base process variables; their inclusion alongside constituent parameters is an expected redundancy within the locked feature specification rather than a post hoc feature-selection choice. The resulting operating-space coverage and the distributional support of key variables are reported in Figure 2b, and the overall block structure of inputs feeding the surrogate models is summarized in Figure 2a. The conditional availability of h and t is handled under the missingness and feature-filtering policy described in Section 2.5; feature definitions and units are given in Appendix A.3.
In Figure 2, we explicitly distinguish output targets from model inputs. Among the inputs, laser power and scan speed are the primary raw controllable process variables visualized in the present dataset, whereas quantities such as P × v and LED are deterministic engineered descriptors derived from those variables and are included as supplementary covariates rather than as independent process controls. Pore-related quantities shown in Figure 2 are likewise not treated as raw process parameters; they are morphology/proxy inputs derived from post hoc characterization and are analyzed separately at the modality level in Figure 2c–f.

2.3. Target Definitions: Yield, UTS, Elongation, Modulus, Roughness, Hardness

Six set-level response variables quantify mechanical performance and surface quality: 0.2% offset yield strength (yield, MPa), ultimate tensile strength (UTS, MPa), total elongation to fracture (elongation, %), Young’s modulus (modulus, GPa), surface roughness (roughness, µm), and Vickers microhardness (hardness, HV). Each target is represented at the set level, consistent with the unit of analysis defined in Section 2.1, such that each row corresponds to a unique build condition and its associated measurement set. When replicate measurements or multiple measurement locations are available for a given set, the reported target is the within-set arithmetic mean, aligned with the database convention (suffix “__mean”) as defined in Equation (3),
$\bar{y}_s = \dfrac{1}{n_s} \sum_{j=1}^{n_s} y_{s,j}$,
where $y_{s,j}$ denotes the jth measurement for set $s$, and $n_s$ is the number of measurements available for that set and target.
The empirical distributions of the six targets, including skew and outliers, are summarized in Figure 2c. Dataset-level completeness differs by target, with roughness exhibiting partial label availability relative to the mechanical properties; this motivates explicit missing-data handling and evaluation procedures described in Section 2.5 and Section 2.6. Target definitions and units, together with the number of labeled sets per target, are reported in Table 1.
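The aggregation of Equation (3) amounts to a grouped arithmetic mean over replicate measurements. A minimal sketch, using hypothetical specimen-level values and the "__mean" suffix convention described above:

```python
import pandas as pd

# Hypothetical specimen-level measurements for two recipes (set_id 1 and 2).
raw = pd.DataFrame({
    "set_id": [1, 1, 1, 2, 2],
    "yield_MPa": [1005.0, 1011.0, 1007.0, 980.0, 984.0],
})

# Collapse to one row per set using the within-set arithmetic mean (Eq. 3).
targets = (
    raw.groupby("set_id", as_index=False)
       .mean()
       .rename(columns={"yield_MPa": "yield_MPa__mean"})
)
```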

2.4. Morphology and Microstructure Proxies

Morphology and microstructure information is available as object-level measurements (for example, individual pore or grain objects) that vary in count across builds and imaging fields. To enable consistent learning at the set level, all morphology modalities are reduced to fixed-length set-level descriptors by statistical pooling over the object-level distributions within each set_id. This produces a tabular representation that can be merged with the L-PBF process variables and engineered physics features while preserving dominant distributional characteristics (central tendency, spread, tails, and extremes) relevant to defect populations and microstructural heterogeneity; modality-specific feature construction and aggregation rules are detailed in Appendix A.4 (Figure 2a,e).
Let $\{x_i\}_{i=1}^{n}$ denote an object-level scalar descriptor within one set, such as pore equivalent diameter, pore sphericity, pore volume, or grain aspect ratio, with $n$ objects detected for that set. The pooled set-level moments and quantiles are given in Equation (4).
$\mu = \dfrac{1}{n}\sum_{i=1}^{n} x_i, \qquad \sigma = \sqrt{\dfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2}, \qquad Q_p = \operatorname{quantile}_p(\{x_i\})$.
For each primitive descriptor, the pooled feature block includes $\{n, \mu, \sigma, \min, \max, Q_{50}, Q_{90}, Q_{95}, Q_{99}\}$, together with derived transforms for heavy-tailed distributions (for example, $\log_{10}(\cdot)$ variants of strictly positive measures). The resulting morphology features are therefore interpretable distribution summaries and can be directly analyzed for coverage and missingness (Figure 2e) and for redundancy against process and engineered features (Figure 2d). These pooled summaries also provide the input representation used by subsequent morphology-module experiments that construct low-dimensional morphology embeddings under fold-respecting procedures (Appendix C).
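The pooling of Equation (4) can be sketched as follows; the descriptor name and values are hypothetical, and `ddof=1` yields the sample standard deviation used above.

```python
import numpy as np

def pool_descriptor(x, prefix):
    """Pool one object-level descriptor (e.g. pore equivalent diameter)
    into a fixed-length set-level summary block (Eq. 4)."""
    x = np.asarray(x, dtype=float)
    return {
        f"{prefix}__n": int(x.size),
        f"{prefix}__mean": x.mean(),
        f"{prefix}__std": x.std(ddof=1),     # sample std, 1/(n-1)
        f"{prefix}__min": x.min(),
        f"{prefix}__max": x.max(),           # extreme-pore sensitivity
        f"{prefix}__q50": np.quantile(x, 0.50),
        f"{prefix}__q90": np.quantile(x, 0.90),
        f"{prefix}__q95": np.quantile(x, 0.95),
        f"{prefix}__q99": np.quantile(x, 0.99),
    }

# Hypothetical pore equivalent diameters [um] detected within one set.
feats = pool_descriptor([4.1, 5.0, 5.3, 7.9, 21.0], "pore_eq_diam_um")
```

The high quantiles deliberately retain tail behavior, so a single large pore (21 µm here) remains visible in the set-level representation.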

2.4.1. Pore Morphology Summary Features

Pore morphology proxies summarize the size, shape, and population structure of defect pores within each set. Object-level pore descriptors include geometric measures such as volume, equivalent diameter, surface area, and sphericity; these are pooled per set using the statistics above, yielding a fixed set-level pore feature vector. Tail-sensitive summaries (high quantiles) are retained to reflect the process relevance of extreme pores. Where pore counts are low or absent for a set, the resulting set-level pore block is treated as partially observed rather than forced to zero, and missingness is handled under the policy in Section 2.5.

2.4.2. Prior-β Grain Morphology Summary Features

Prior-β grain morphology proxies are constructed analogously from object-level grain measurements, capturing microstructural anisotropy and characteristic length scales. Representative descriptors include grain aspect ratio and equivalent diameter (or closely related size measures, depending on the raw measurement schema). These descriptors are pooled per set using the same moment and quantile statistics, forming a grain-summary block that is compatible with the unified set-level modeling table. Because grain measurements may be present for a subset of sets only, their availability is tracked explicitly and incorporated into missingness accounting (Figure 2e).

2.4.3. Stress–Strain-Derived Descriptors

When stress–strain curves are available, they provide an additional source of morphology-adjacent proxy descriptors that can be used when direct imaging modalities are incomplete. In this case, each curve is reduced to a compact feature representation using physically interpretable summaries derived from the curve shape, such as slope-based stiffness proxies, characteristic stress and strain landmarks, and scalar descriptors that capture hardening behavior. These descriptors are treated as a separate modality block and incorporated at the set level under the same unified feature table, with their missingness recorded and handled using the procedures in Section 2.5 (Figure 2e).
More modest or inconsistent gains from morphology augmentation for certain targets should not be interpreted as evidence that the prior-β grain descriptors are irrelevant. Rather, the prior-β grain block captures only one part of the latent microstructural state. Stress–strain-derived descriptors integrate the cumulative effects of prior-β morphology together with pore population, α/α′ lath morphology, α/β phase constitution, retained β, texture, residual stress, and lattice-defect/dislocation structure. Accordingly, only partial correlations between grain summaries and stress–strain descriptors are expected, and the physically relevant signal is better interpreted at the modality/latent-embedding level than at the level of simple pairwise raw-feature correlation.

2.5. Missingness, Imputation Policy, and Feature Filtering

The compiled set-level table is multimodal and partially observed because modality availability is non-uniform across sets, with certain sets lacking pore, grain, or stress–strain descriptors depending on upstream data availability. Missingness is treated as an intrinsic property of the study design rather than an error condition. Missingness is quantified at both the feature level and the modality-block level to separate sporadic missingness from structurally absent modalities (Figure 2e). Targets are also incompletely observed for specific properties; modeling for a given target is performed only on the labeled subset for that property (Figure 2c).
To control dimensionality and stabilize learning under small-N conditions, feature filtering is applied prior to model fitting using a coverage threshold defined as a minimum required non-null fraction across sets (default criterion: at least 0.70 non-null coverage). This coverage filtering is performed at the feature-definition stage, and the resulting retained feature list is used consistently across subsequent modeling blocks to prevent feature drift and preserve comparability (Figure 2d,e). Constant or near-constant columns are additionally removed to avoid degenerate predictors.
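The coverage filter can be expressed compactly in pandas; the 0.70 threshold follows the default criterion above, while the column names and values are hypothetical.

```python
import pandas as pd

def filter_by_coverage(X, min_coverage=0.70):
    """Keep features with >= min_coverage non-null fraction across sets,
    then drop constant columns (degenerate predictors)."""
    keep = X.columns[X.notna().mean() >= min_coverage]
    X = X[keep]
    nunique = X.nunique(dropna=True)
    return X.loc[:, nunique > 1]

X = pd.DataFrame({
    "LED": [0.20, 0.23, 0.25, 0.21],         # fully observed -> kept
    "pore_q99": [None, None, None, 12.0],    # 25% coverage -> dropped
    "const": [1.0, 1.0, 1.0, 1.0],           # constant -> dropped
})
X_kept = filter_by_coverage(X)
```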
For remaining missing values within retained features, imputation is performed using a fold-respecting rule in which imputation parameters are estimated on the training portion of each fold and applied to the corresponding validation or test portion. Median imputation is adopted as the default policy because it is robust under heavy-tailed distributions and small-sample settings and does not impose parametric assumptions that are rarely defensible in sparse multimodal materials datasets. Denoting the training-set median for feature $j$ in fold $k$ by $\tilde{x}_j^{(k)}$, missing values are imputed as
$x_{ij} \leftarrow \begin{cases} x_{ij}, & \text{if } x_{ij} \text{ is observed}, \\ \tilde{x}_j^{(k)}, & \text{if } x_{ij} \text{ is missing}. \end{cases}$
When a model requires standardized inputs, standardization is likewise fit on the training portion of each fold only. With training mean $\mu_j^{(k)}$ and standard deviation $\sigma_j^{(k)}$, the standardized value is
$z_{ij} = \dfrac{x_{ij} - \mu_j^{(k)}}{\sigma_j^{(k)}}$,
with $\mu_j^{(k)}$ and $\sigma_j^{(k)}$ computed exclusively from the training split. This fold-conditional preprocessing prevents information leakage by disallowing test-fold statistics from influencing training-fold preprocessing; details are provided in Appendix A.5.
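A fold-local preprocessing chain of this kind maps naturally onto a scikit-learn pipeline, where `fit_transform` on the training rows estimates the medians and scaling statistics and `transform` applies them unchanged to held-out rows. The toy arrays below are illustrative only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Median imputation followed by standardization, both fold-local.
prep = make_pipeline(
    SimpleImputer(strategy="median"),
    StandardScaler(),
)

X_train = np.array([[1.0, 10.0],
                    [2.0, np.nan],
                    [3.0, 30.0]])
X_test = np.array([[2.0, np.nan]])

Z_train = prep.fit_transform(X_train)  # statistics estimated here only
Z_test = prep.transform(X_test)        # applied unchanged to held-out rows
```

The test row's missing entry is filled with the *training* median (20.0), never a statistic of the test fold, which is exactly the leakage barrier described above.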

2.6. Train/Test Protocol: GroupKFold by set_id and Leakage Controls

Model evaluation follows a 5-fold grouped cross-validation protocol in which all rows sharing the same set_id are assigned to the same fold. This prevents optimistic bias caused by within-set correlations by ensuring that no set contributes to both training and test partitions within a fold. Let $\mathcal{G}$ denote the set of unique set_id groups. For fold $k$, the training and test group sets $\mathcal{G}_{\mathrm{train}}^{(k)}$ and $\mathcal{G}_{\mathrm{test}}^{(k)}$ satisfy Equation (7).
$\mathcal{G}_{\mathrm{train}}^{(k)} \cap \mathcal{G}_{\mathrm{test}}^{(k)} = \varnothing, \qquad \bigcup_{k=1}^{5} \mathcal{G}_{\mathrm{test}}^{(k)} = \mathcal{G}$.
The normalized split file spans all 42 sets, with fold sizes of train/test = 33/9 for folds 1–2 and 34/8 for folds 3–5 (Figure 2f).
Leakage controls are enforced by performing every data-dependent operation strictly within each training fold and applying the learned transformation to the corresponding test fold; this includes imputation parameters, standardization parameters for scale-sensitive models, and any learned transformations used within the modeling pipeline. Hyperparameters are tuned via discrete grids using training-fold internal validation, and the selected best setting is recorded per fold and target.
For targets with partial label availability, evaluation is performed on the labeled subset within each fold’s held-out partition. For roughness, 28 of 42 sets are labeled; folds contribute to the reported metrics through the labeled test instances only, avoiding artificial inflation of performance from label handling. Performance is reported as the mean and standard deviation across folds for each target using consistent metrics to enable direct comparison across models and modality variants (Figure 3a).

3. Surrogate Modeling Framework

Surrogate models are formulated to predict each set-level target using the leakage-resistant 5-fold GroupKFold protocol grouped by set_id (Section 2.6), with fold-local preprocessing for operations that depend on data statistics (Section 2.5); details are provided in Appendix B.1.
The modeling workflow proceeds from (i) tabular-only baselines using L-PBF process descriptors and engineered physics features to (ii) hybrid multimodal surrogates that append set-aligned morphology and proxy descriptor blocks (pores, prior-β grains, and stress–strain descriptors) under a consistent set-level unit of analysis. The locked modeling, preprocessing, and decision-layer settings adopted throughout the study are summarized in Table 2.

3.1. Baseline Tabular Models and CPU-Only Training Setup

Surrogate models are trained to predict each set-level target from the L-PBF process descriptors and engineered physics features defined in Section 2.2, evaluated under the GroupKFold protocol in Section 2.6. A suite of tabular regressors spanning linear, kernel, bagging, and boosted-tree families establishes a CPU-feasible reference for each property: ridge regression, k-nearest neighbors regression, RBF-kernel support vector regression, random forest regression, extremely randomized trees, histogram-based gradient boosting, and gradient-boosted decision trees; details are provided in Appendix B.2.
All experiments are executed under a CPU-only workflow on an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20 GHz (no GPU acceleration). Models are refit from scratch within each fold. Targets are modeled independently (one regressor per property) due to differences in label availability and noise structure and to allow for target-specific model selection under the same evaluation protocol.
Preprocessing steps that use fold-dependent statistics (imputation and any scaling) are fit on the training portion only and applied unchanged to the held-out portion. Coverage-based feature filtering is applied at the feature-definition stage using the full set-level table to remove sparse or degenerate columns, while fold-wise imputation and scaling remain strictly confined to training-only statistics (Section 2.5). Baseline performance is reported as mean and standard deviation across folds; the best-performing baseline per target is identified under this fixed protocol and carried forward as the reference for later modality deltas (Table 3).

3.2. Hybrid Multimodal Feature Construction

Hybrid multimodal surrogates extend the tabular baseline by concatenating set-level morphology and proxy descriptors derived from additional modalities (Section 2.4). The unified design matrix is assembled by aligning and merging four set-level blocks by set_id: (i) tabular process and engineered physics descriptors, (ii) pore morphology summary features, (iii) prior-β grain morphology summary features, and (iv) stress–strain-derived descriptors (when available) retained as a proxy block; details are provided in Appendix A.6. Modality coverage and missingness are tracked explicitly (Figure 2e).
Because modality blocks differ in dimensionality and sparsity, hybrid construction applies controlled feature screening and fold-safe imputation. Candidate columns are screened by non-null coverage, and degenerate columns are removed; the remaining missing entries are imputed using fold-specific training medians (Section 2.5). Hybrid variants are constructed as modality ablations to isolate incremental contributions, including tabular-only, tabular + pores, tabular + grains, tabular + pores + grains, tabular + stress, and tabular + all-morphology. These variants are trained and evaluated under the same GroupKFold protocol as the tabular baselines, enabling direct comparisons under identical leakage controls and CPU constraints (Table 3; Figure 3a,b).
Two integration settings are supported within the same framework. In the direct-fusion setting, pooled morphology statistics are appended as explicit numeric features to enable cross-modal interactions in downstream learners. In the deployable two-stage setting, modality embeddings are learned within each training fold and then used as oracle embeddings when present or predicted from the tabular block when absent, separating the representational value of morphology from deployment-time availability constraints.

3.3. Hyperparameter Selection and Evaluation Metrics

Hyperparameter selection followed a leakage-controlled workflow consistent with the 5-fold GroupKFold protocol by set_id, such that candidate configurations were trained using training data only within each fold and the held-out fold was accessed only for final scoring.
All data-dependent operations that can transmit distributional information (including imputation, standardization, and any coverage-based filtering rules) were confined to training-fold computations and then applied unchanged to the corresponding held-out fold to preserve the leakage barrier.
Hyperparameters were tuned via discrete grids using training-fold internal validation, with the selected setting recorded per fold and per target to maintain auditability under small-N conditions; full tuning grids and statistical reporting details are given in Appendix B.2 and Appendix B.4.
Table 2 formalizes the invariant methodological assumptions adopted throughout the study, including the recipe-level unit of analysis (set_id), the grouped cross-validation design, and strictly fold-local preprocessing operations that preclude information leakage. By locking these protocol elements across all model families, the empirical comparisons reported in subsequent sections isolate the effect of the surrogate and uncertainty modules from confounding variation in data handling. In addition, the morphology and co-design components are treated as version-controlled, archived configurations, ensuring that both predictive evaluations and downstream design recommendations are reproducible under an identical experimental specification.
For boosted-tree libraries, early stopping was used only when supported by the installed package API; otherwise, a fixed estimator budget was used and model selection proceeded through the declared discrete grid to preserve version-robust reproducibility.
Evaluation was reported using complementary error and goodness-of-fit metrics to capture both absolute deviation and explained variance across targets with different scales. Root mean squared error (RMSE) was used as the primary scale-dependent metric due to its sensitivity to large deviations, mean absolute error (MAE) was reported as a robust companion metric, and the coefficient of determination (R2) was reported relative to a constant-predictor baseline.
Using targets $y_i$ and predictions $\hat{y}_i$ over $n$ test instances, the metrics are
$\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$,
$\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert$,
$R^2 = 1 - \dfrac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}, \qquad \bar{y} = \dfrac{1}{n}\sum_{i=1}^{n} y_i$.
All metrics were computed on the held-out fold and aggregated across the five folds as mean ± standard deviation to capture both expected performance and partition sensitivity in the small-N regime.
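Equations (8)–(11) reduce to a few lines of NumPy; the toy targets and predictions below are illustrative only.

```python
import numpy as np

def regression_metrics(y, yhat):
    """RMSE, MAE, and R^2 against a constant-mean baseline (Eqs. 8-11)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    err = y - yhat
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)                 # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return rmse, mae, 1.0 - ss_res / ss_tot

rmse, mae, r2 = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

In the fold-aggregated reporting, these per-fold values would then be summarized as mean ± standard deviation across the five held-out partitions.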
For the roughness target, only 28 of 42 sets are labeled; evaluation therefore uses labeled instances in each fold’s held-out partition only, and folds contribute to reported metrics through those labeled test instances.
The main comparative results are summarized in Table 3, and the model settings and locked feature schema are documented in Table 2; together with the fold-wise dispersion reporting, these support diagnosis of systematic bias and heteroscedastic error patterns not visible in scalar metrics alone.

3.4. Uncertainty Estimation Strategy

Predictive uncertainty was estimated using fold ensembles induced by the GroupKFold partitions. For each target, one model per fold was trained using the selected architecture and target-specific feature variant; at inference, each input $x$ yields an ensemble of fold-trained predictions $\{\hat{y}_k(x)\}_{k=1}^{K}$ with $K = 5$.
The predictive mean was defined as the ensemble mean,
$$\mu(x) = \frac{1}{K}\sum_{k=1}^{K} \hat{y}_k(x),$$
and epistemic spread was summarized using the ensemble standard deviation,
$$\sigma(x) = \sqrt{\frac{1}{K-1}\sum_{k=1}^{K}\bigl(\hat{y}_k(x) - \mu(x)\bigr)^2}.$$
These summaries are deployable in the sense that they require only the trained fold models and do not assume access to additional calibration labels at inference time.
Uncertainty was used in two roles: (i) surrogate-level diagnostics via coverage-style calibration checks (Figure 3b), where predicted uncertainty is assessed against empirical error behavior across nominal quantiles, and (ii) decision-making in co-design to penalize overly optimistic candidates through a conservative lower-confidence bound (LCB) in Equation (14).
$$\mathrm{LCB}(x) = \mu(x) - z\,\sigma(x), \tag{14}$$
where $z$ controls conservativeness.
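The fold-ensemble summaries and the LCB above can be computed in a few lines; the array layout (folds × candidates) and the default $z$ are our choices for illustration:

```python
import numpy as np

def ensemble_summaries(fold_preds, z=1.0):
    """Fold-ensemble mean, sample standard deviation (ddof=1, matching the
    K-1 denominator in the text), and the lower-confidence bound mu - z*sigma.
    `fold_preds` has shape (K, n_candidates)."""
    preds = np.asarray(fold_preds, float)
    mu = preds.mean(axis=0)
    sigma = preds.std(axis=0, ddof=1)
    return mu, sigma, mu - z * sigma
```

Only the trained fold models are needed at inference, consistent with the deployability claim: no calibration labels enter these summaries.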
The co-design recommendation table (Table 4) reports mean predictions and uncertainty summaries to keep the ranking and screening logic transparent and auditable.

3.5. Optimizer Ablation Protocol

An optimizer ablation quantified sensitivity of training dynamics and generalization performance to optimizer choice under an identical model architecture, identical feature set, and the same 5-fold GroupKFold protocol by set_id, so that observed differences reflect optimization behavior rather than data partition effects. The study compared AdamW, Muon, and a hybrid strategy in which Muon was applied selectively to a subset of parameters (hidden layers) while AdamW was retained for remaining parameter groups; additional optimizer-ablation settings, learning-rate sweeps, and selection rules are detailed in Appendix B.3.
The optimizer study was framed as a robustness evaluation for a fixed neural tabular regressor, with identical splits and identical preprocessing across optimizers to isolate optimizer effects. For each target property, training was repeated across multiple random seeds for each optimizer to separate optimizer-induced variance from partition-induced variance. Within each fold, each optimizer used the same initialization scheme, stopping criterion, and maximum training budget; performance was recorded on the held-out fold using RMSE as the primary metric, with MAE and R2 tracked as secondary diagnostics.
Learning-rate operating regions were characterized via grid sweeps for each optimizer. A discrete set of learning rates was evaluated for each target, and the best-performing learning rate was selected based on fold-aggregated validation performance, subject to stability screening (excluding configurations that diverged or produced degenerate predictions).
The resulting score landscapes were mapped per optimizer and target, best achievable operating points were summarized as mean ± standard deviation across runs, and run-to-run dispersion at the selected learning rate was reported via boxplots (Figure 4a). To avoid confounding from feature-definition differences, the optimizer ablation used a locked feature schema and a fixed preprocessing pipeline, with median imputation and any scaling computed on training-only partitions and applied with frozen parameters to validation/test partitions, following the leakage controls in Section 2.5 and Section 2.6.
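The sweep-and-select rule with stability screening might look like the following sketch; the `scores_by_lr` structure, the divergence threshold, and the use of mean fold RMSE are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def select_learning_rate(scores_by_lr, max_score=1e6):
    """Pick the learning rate with the best fold-aggregated mean validation
    RMSE after screening out unstable configurations (non-finite or diverged
    runs). `scores_by_lr` maps learning rate -> list of per-fold RMSEs."""
    best_lr, best_mean = None, np.inf
    for lr, scores in scores_by_lr.items():
        arr = np.asarray(scores, float)
        if not np.all(np.isfinite(arr)) or arr.max() > max_score:
            continue  # stability screening: drop divergent configurations
        if arr.mean() < best_mean:
            best_lr, best_mean = lr, float(arr.mean())
    return best_lr, best_mean
```

The same selection routine is applied per optimizer and per target, so differences between optimizers reflect their best screened operating points rather than an arbitrary shared learning rate.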

4. Morphology Module: Two-Stage Embedding and Deployable Multimodal Inference

Morphology and microstructure proxies (pore statistics, prior-β grain descriptors, and stress–strain-derived signatures) carry mechanistically relevant information, yet their availability is non-uniform across sets, creating a fundamental train-time-versus-deploy-time information mismatch. A two-stage morphology module is therefore used to separate an oracle regime, in which proxies are observed at inference, from a deployable regime, in which low-dimensional proxy embeddings are learned within training folds and then predicted from process-derived inputs when proxies are absent.

4.1. Oracle Morphology vs. Predicted Morphology Concept

Morphology and microstructure proxies, including pore morphology, prior-β grain morphology, and stress–strain-derived descriptors, are informative but incompletely observed across sets. This creates a practical distinction between a train-time setting in which proxy descriptors are available for supervised learning and a deploy-time setting in which these descriptors may be absent for a new candidate process recipe. The morphology module is therefore formulated as a two-stage mechanism that separates (i) the informational value of morphology-conditioned inference when proxy descriptors are observed from (ii) the feasibility of producing morphology-conditioned predictions when proxy descriptors are missing.
Two inference regimes are considered. In the oracle regime, proxy descriptors are treated as directly observed inputs at inference time. This regime estimates an upper bound on achievable surrogate performance when multimodal measurements are available and is used to quantify the potential benefit of morphology-aware learning. In the deployable regime, proxy descriptors are not assumed to be observed for candidate designs. Instead, proxy embeddings are first predicted from the available tabular/process features, and the predicted embeddings are then used as substitutes for the missing proxy inputs in the final property surrogate. This yields a deployable multimodal inference path that can be applied to co-design candidate generation, where only process parameters and engineered physics features are available.
Formally, let x denote the tabular/process feature vector (including engineered physics features), and let m denote a modality-specific proxy feature block (pores, grains, or stress). The oracle surrogate is defined in Equation (15).
$$y = f([x, m]), \tag{15}$$
where $[\cdot\,,\cdot]$ denotes concatenation. The deployable two-stage surrogate is defined by Equations (16) and (17): an embedding predictor first maps the process features to a proxy embedding,
$$z = g(x), \tag{16}$$
and a property surrogate then predicts the target using the predicted embedding,
$$y = f([x, z]). \tag{17}$$
This separation makes explicit the train-time versus deploy-time information structure and avoids reporting oracle-only gains as deployable improvements. The comparison between oracle and deployable variants is summarized in the morphology-module results table and is used downstream to select target-specific deployable variants for co-design.
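A minimal functional sketch of the two regimes in Equations (15)–(17); here `f` and `g` are placeholders for the trained property surrogate and embedding predictor:

```python
import numpy as np

def oracle_predict(f, x, m):
    """Oracle regime (Eq. 15): morphology block m observed at inference."""
    return f(np.concatenate([x, m]))

def deployable_predict(f, g, x):
    """Deployable regime (Eqs. 16-17): the embedding is predicted from
    process features first, then substituted for the missing proxy block."""
    z_hat = g(x)
    return f(np.concatenate([x, z_hat]))
```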

4.2. PCA Embedding Construction on Training Folds

Raw proxy feature blocks are high-dimensional and heterogeneous across modalities (for example, tens of pore summary features, a smaller set of grain descriptors, and a larger stress–strain descriptor set). To obtain compact representations that are easier to predict from tabular inputs and less prone to overfitting under small-sample conditions, each proxy modality is mapped to a low-dimensional embedding using principal component analysis (PCA). PCA fits strictly within each training fold to preserve leakage control, and the learned transformation is then applied to the corresponding held-out fold.
For each modality, a training-fold morphology matrix is constructed by selecting the modality-specific feature columns, coercing to numeric types, and applying training-only imputation. Let $M_{\mathrm{train}}^{k} \in \mathbb{R}^{n_k \times d}$ denote the training-fold matrix for fold $k$. PCA is fit to $M_{\mathrm{train}}^{k}$, and the number of retained components $q$ is chosen using a cumulative explained-variance criterion with an upper cap to avoid excessive flexibility under small $n$. Specifically, the smallest $q$ is selected such that cumulative explained variance exceeds 95%, subject to $q \le q_{\max}$. The resulting training-fold embedding is
$$Z_{\mathrm{train}}^{k} = \mathrm{PCA}_{k}\!\bigl(M_{\mathrm{train}}^{k}\bigr),$$
and the held-out embedding is obtained by applying the same transformation to the test-fold matrix $M_{\mathrm{test}}^{k}$,
$$Z_{\mathrm{test}}^{k} = \mathrm{PCA}_{k}\!\bigl(M_{\mathrm{test}}^{k}\bigr).$$
This fold-specific fitting ensures that no distributional information from held-out sets influences the embedding basis. Because proxy availability differs by modality and by set, modality-specific usability checks are applied per fold. If a modality has insufficient observed values or degenerate variance after preprocessing in a given fold, its embedding is treated as unavailable for that fold to avoid unstable PCA solutions. The final embedding dimensionalities used in experiments are therefore reported explicitly (per modality) alongside the downstream property-surrogate results.

The PCA embeddings constitute the interface between proxy observation and deployable inference; they serve as the targets for morphology prediction in the deployable pipeline and as conditioning variables for the oracle pipeline. A necessary condition for deployable multimodal inference is that morphology embeddings can be estimated from the available process-derived feature set with non-trivial fidelity [22,23,24]. Embedding predictability is therefore quantified directly by training fold-respecting predictors from tabular/process features to each PCA embedding coordinate and reporting out-of-fold R2 values. This diagnostic is reported at the embedding level, not only at the downstream property level, because a two-stage pipeline can fail either through weak embedding predictability or through limited incremental utility of morphology even when predicted accurately.
Let $x_s$ denote the tabular/process feature vector for set $s$, and let $z_s^{m,k} \in \mathbb{R}^{q_m^{k}}$ denote the PCA embedding for morphology modality $m$ constructed within training fold $k$ (Section 4.2), with fold-specific PCA fit on the training partition only and then applied to the held-out fold. Because PCA fits within each fold, embedding coordinates are fold-local; coordinate-wise predictability is interpreted as a within-fold diagnostic rather than a globally fixed latent axis.
For each fold $k$, a predictor $g_m^{k}$ is trained on $\{(x_s, z_s^{m,k}) : s \in G_{\mathrm{train}}^{k}\}$ and evaluated on $G_{\mathrm{test}}^{k}$. Predictability is summarized coordinate-wise using the coefficient of determination,
$$R^2_{z,j}(m,k) = 1 - \frac{\sum_{s \in G_{\mathrm{test}}^{k}} \bigl(z_{s,j}^{m,k} - \hat{z}_{s,j}^{m,k}\bigr)^2}{\sum_{s \in G_{\mathrm{test}}^{k}} \bigl(z_{s,j}^{m,k} - \bar{z}_{j}^{m,k}\bigr)^2}.$$
These diagnostics enable modality-wise screening for deployable inference; modalities with consistently low embedding predictability indicate limited learnability of the corresponding embeddings from the available process covariates under the present dataset and coverage, implying that oracle-only improvements should not be interpreted as deployable gains. This diagnostic is referenced in the morphology-module results (Figure 5, morphology-module panel (a)) and in the selection of deployable variants used for co-design (Section 4.4).
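The fold-local PCA construction with the 95% cumulative-explained-variance rule and component cap might be implemented as follows (a scikit-learn sketch; the cap value and function name are ours):

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_fold_pca(M_train, evr_target=0.95, q_max=5):
    """Fit PCA strictly on the training-fold morphology matrix; retain the
    smallest q whose cumulative explained variance exceeds `evr_target`,
    capped at `q_max` (cap value here is illustrative)."""
    full = PCA().fit(M_train)                     # training fold only
    cum = np.cumsum(full.explained_variance_ratio_)
    q = int(np.searchsorted(cum, evr_target)) + 1  # smallest q with cum EVR > target
    q = min(q, q_max, len(cum))
    return PCA(n_components=q).fit(M_train)
```

The returned, fold-specific transform is then applied unchanged to the held-out fold (`pca.transform(M_test)`), so no held-out distributional information enters the embedding basis.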

4.3. Deployable Two-Stage Pipeline Used in Co-Design

The final co-design workflow requires property prediction for candidate process recipes before any physical build, which precludes direct observation of pores, grains, or stress–strain descriptors [25,26,27]. The deployable surrogate therefore uses a two-stage inference pathway in which morphology information is represented by predicted PCA embeddings derived from process variables.

4.3.1. First-Stage Embedding Prediction

For each morphology modality selected for a given target, a modality-specific predictor is trained to map tabular/process features to the modality embedding,
$$\hat{z}_s^{m,k} = g_m^{k}(x_s),$$
with $g_m^{k}$ trained under the same GroupKFold protocol as the property surrogates. Training-only preprocessing is applied within each fold, including median imputation and any scaling required by the chosen model class (Section 2.5). Importantly, the PCA used to define $z^{m,k}$ is fit on the training partition only (Section 4.2), so the embedding predictor does not access held-out sets beyond strict cross-validation allowances.

4.3.2. Second-Stage Property Prediction

The predicted embeddings are concatenated with the tabular/process features to form the deployable multimodal input, and a target-specific surrogate is trained to predict the mechanical or surface property of interest,
$$\hat{y}_s^{k} = f^{k}\!\bigl(\bigl[x_s,\, \hat{z}_s^{m_1,k}, \ldots, \hat{z}_s^{m_M,k}\bigr]\bigr).$$
Model selection at this stage is target-specific and constrained to deployable variants, meaning that oracle-only inputs are excluded when choosing the final surrogate used in design-space exploration. When multiple modality combinations are feasible, the best deployable variant is selected per target based on mean cross-validated RMSE (and associated $R^2$), with the corresponding configuration recorded for reproducibility.
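Putting the two stages together under grouped cross-validation, a schematic loop (not the paper's exact code) could look like this; a Ridge embedding predictor and a gradient-boosted property surrogate stand in for the selected model classes, and the raw morphology block `M` stands in for the PCA embedding targets:

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

def two_stage_cv(X, M, y, groups, n_splits=5):
    """Deployable two-stage CV sketch: per fold, (1) fit an embedding
    predictor g from process features X to the morphology block M, then
    (2) fit the property surrogate f on [X, g(X)] and score the held-out
    fold using predicted embeddings only."""
    preds = np.full(len(y), np.nan)
    for tr, te in GroupKFold(n_splits=n_splits).split(X, y, groups):
        g = Ridge().fit(X[tr], M[tr])                         # stage 1: x -> z
        Z_tr = g.predict(X[tr])
        f = GradientBoostingRegressor(random_state=0).fit(
            np.hstack([X[tr], Z_tr]), y[tr])                  # stage 2: [x, z] -> y
        preds[te] = f.predict(np.hstack([X[te], g.predict(X[te])]))
    return preds
```

Because grouping is by `set_id`, no set contributes to both the embedding predictor and the held-out evaluation of the same fold, mirroring the leakage controls described above.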

4.4. Uncertainty Summaries for Design-Time Ranking

Co-design requires not only point predictions but also a notion of predictive uncertainty to avoid brittle recommendations. For each candidate, the ensemble of fold-trained surrogates produces a distribution of predictions; the empirical mean serves as the primary estimate and the empirical standard deviation across the fold models serves as a conservative uncertainty proxy. These summaries are used to implement risk-aware selection and constraint handling in candidate screening, where feasibility is evaluated under uncertainty-aware criteria rather than single-point estimates. This deployable two-stage surrogate is used to generate the final recommendation table for co-design (Table 4) and to support the Pareto and risk-aware analyses in the co-design section (Figure 6).

5. Results

Performance evidence is organized to characterize the multimodal set-level learning problem, establish leakage-controlled baselines, and evaluate the deployable value of morphology augmentation through oracle versus predicted embeddings, culminating in uncertainty-aware co-design recommendations.

5.1. Dataset and Feature Characterization

Table 1 summarizes the final set-level dataset used for model development, including the unique set_id groups (1–42), GroupKFold fold membership, and target-specific label availability. The dataset is intrinsically multimodal at the set level, combining (i) L-PBF process parameters and engineered physics features and (ii) morphology and microstructure proxies derived from pore statistics, prior-β grain morphology, and stress–strain descriptors where available.
Figure 2 provides a compact characterization of the learning problem with explicit separation between outputs and inputs. Figure 2a shows the empirical distributions of the six set-level response variables (yield strength, ultimate tensile strength, elongation, elastic modulus, surface roughness, microhardness), highlighting differences in scale, dispersion, and tail behavior that motivate target-wise treatment and inform interpretation of error magnitudes. Figure 2b summarizes representative input-feature distributions; importantly, this panel combines the raw process variables retained in the present dataset with deterministic engineered descriptors derived from them and selected proxy features used by the multimodal surrogate. Figure 2c–f then characterize the input space further through modality-wise missingness, output–input correlation structure, modality coverage across set_id, and representative morphology-feature distributions. This organization distinguishes clearly between the predicted outputs and the heterogeneous input blocks used in the surrogate framework.
Target completeness differs by endpoint; in particular, roughness has partial label availability relative to mechanical properties, and labeled-subset evaluation is used for targets with incomplete labeling. This target-dependent effective sample size is a dataset property that must be carried forward when comparing results across targets.

5.2. Baseline Performance Comparison

Baseline regressors are evaluated under the fixed 5-fold GroupKFold protocol to establish a CPU-only reference for each target. Table 3 reports fold-aggregated performance (mean ± standard deviation) using RMSE as the primary metric, with MAE and R2 reported as complementary diagnostics, and records the best-performing baseline per target under this protocol. Across the six targets, baseline results indicate that strong nonparametric learners provide competitive accuracy under small-n conditions, while target difficulty differs substantially. In particular, the baselines exhibit robust performance for strength and ductility targets (yield strength, UTS, elongation) and elastic modulus, whereas surface roughness and microhardness show weaker baseline performance.
This contrast is consistent with the combination of target distributional characteristics and the limited observability of morphology-dependent information in tabular-only features, especially under partial modality coverage. For partially labeled targets, reported headline values are computed across the subset of folds that contribute labeled test instances, and this reporting convention is maintained to preserve auditability under target-dependent completeness.
Table 3 provides the target-wise, fold-aggregated error statistics computed under the fixed 5-fold GroupKFold protocol, thereby enabling direct comparison across endpoints under a common leakage-controlled evaluation design. Consistent with target-dependent completeness, partially labeled endpoints are evaluated on the labeled subset of held-out sets rather than by imputing labels, preserving the auditability of reported metrics. Under this protocol, strength and ductility targets exhibit substantially higher explained variance than surface roughness and hardness, motivating subsequent sections that test whether additional morphology/proxy information can produce systematic gains beyond process/physics descriptors alone.
The baseline outcomes motivate the subsequent multimodal modeling stages, which incorporate morphology and microstructure proxies to test for systematic improvements beyond tabular process descriptors alone (Section 5.3, Section 5.4 and Section 5.5; Figure 3 and Figure 5; Table 3 and Table 4).

5.3. Final Hybrid Surrogate Performance, Parity Plots, and Global Feature Importance

Table 2 documents the final surrogate family and locked training settings used for the reported models, including the CPU-only implementation, the feature-construction variant adopted per target, and the final hyperparameter choices used in cross-validation.
Table 3 reports the corresponding predictive performance (RMSE, MAE, and R2) aggregated across the five GroupKFold splits, enabling direct comparison to the tabular-only baselines under an identical evaluation protocol.
Figure 3 consolidates the primary performance and interpretability evidence for the final hybrid surrogate. Figure 3a documents the GroupKFold evaluation protocol and the placement of model selection within each training fold, and summarizes target-wise performance and fold-to-fold spread using the same metric definitions reported in Table 3. This organization supports a direct assessment of whether incorporating morphology-derived information yields target-specific gains beyond process/physics features alone, rather than attributing improvements to procedural differences. Parity plots in Figure 3 provide an essential diagnostic complement to scalar metrics in the small-n regime: predicted values are plotted against measured values for held-out folds, with the identity line used to assess systematic bias. The fold-aggregated error statistics shown in the panel insets correspond to the values tabulated in Table 3.
This parity-based view supports inspection of regime-dependent error structure (including outliers and tail behavior evident in the target distributions) that can be obscured by aggregate RMSE alone, and it provides a fold-respecting check against overfitting through consistency of held-out behavior; additional residual-based, modality-sensitivity, and robustness checks are provided in Appendix B.5. Global feature attribution for the final surrogate is summarized in Figure 3c. The ranked importances are drawn from the final gradient-boosted model family and each feature is associated with its modality block (process/physics versus morphology-derived descriptors).
This analysis is used as a mechanistic plausibility audit verifying that physically motivated energy-input descriptors and morphology summaries appear among dominant drivers, while remaining explicitly non-causal in interpretation. The feature-importance view is therefore treated as a global diagnostic to contextualize hybrid performance and to motivate the morphology-module analyses that follow.

5.4. Optimizer Stability and Learning Dynamics

Figure 4 evaluates the effect of optimizer choice on training stability and convergence behavior in the low-data regime, using the fixed experimental protocol defined in Section 3.5.
The primary objective of this analysis is robustness: determining whether reported surrogate accuracy is attributable to modeling choices rather than idiosyncratic training dynamics. Figure 4 reports fold-aggregated validation trajectories at the selected operating point for each optimizer and target, shown as the median validation loss across GroupKFold splits with interquartile bands. To avoid overinterpreting late-epoch variance after early stopping, only epochs with full 5-fold support are displayed. Under this aggregation, the dominant reduction in validation loss occurs during the early training phase for all targets, while the remaining late-stage variability is modest and is more pronounced for UTS and elongation than for yield. This pattern should not be interpreted as evidence of a distinct second convergence phase. Rather, it is consistent with the greater sensitivity of UTS and elongation to partially observed post-yield microstructural state variables, including α/β phase balance, α/α′ lath morphology, retained β, texture, grain-boundary α, residual stress, and lattice-defect/dislocation state. Accordingly, optimizer selection is based primarily on the fold-aggregated learning-rate operating regions and best-operating-point summaries in Figure 4b,c, rather than on the terminal value of any single trajectory.
Figure 4a reports stability distributions across repeated runs as boxplots of the validation metric for each optimizer variant, directly diagnosing sensitivity to initialization and run-to-run variability. Learning-rate operating regions are mapped as target-by-learning-rate heatmaps for each optimizer, identifying stable windows and revealing whether tuning tolerance is unusually narrow.
Best operating points are summarized for each optimizer–target pair by reporting the top mean performance with its variability, annotated with the corresponding selected learning rate. The optimizer ablation is used as a sensitivity analysis supporting reproducibility of the reported surrogate results and as justification for the final default optimizer and learning-rate setting adopted in Table 2.
Consistent with the broader leakage controls, the optimizer study is conducted on a locked feature schema and fixed preprocessing pipeline, with median imputation and any required scaling computed on training-only partitions and applied to held-out partitions using frozen parameters.

5.5. Morphology Module Outcomes: Oracle vs. Predicted Embeddings and Their Impact

Figure 5 reports the outcomes of the two-stage morphology module designed to separate training-time information from deploy-time feasibility. The central comparison is between oracle morphology embeddings, where morphology descriptors are computed directly from measured pore and grain summaries and then embedded within each training fold, and predicted morphology embeddings, where the same embedding coordinates are inferred from process and engineered physics features alone. This distinction is critical for deployment, because morphology measurements are not guaranteed to be available for prospective designs, while process parameters and engineered physics features are always available.
The morphology module and the two-stage training logic are introduced, clearly distinguishing the oracle pathway from the deployable predicted pathway. Figure 5a reports the embedding predictability diagnostics, quantified by fold-wise R2 between oracle embedding coordinates and their process-predicted counterparts; this panel establishes which morphology modalities can be reconstructed with meaningful fidelity from process variables and which remain weakly predictable. Figure 5b evaluates the downstream impact on property prediction by comparing tabular-only surrogates to hybrid surrogates augmented with oracle embeddings and with predicted embeddings, using the same GroupKFold protocol as in Figure 3. Improvements attributable to oracle embeddings bound the maximum achievable gain from morphology information under perfect availability, while improvements from predicted embeddings quantify what is realizable under deployable conditions. Figure 5c summarizes the resulting target-dependent pattern, highlighting cases where morphology information adds measurable value and cases where performance is dominated by tabular process and engineered features.

5.6. Co-Design Recommendations Under Constraints and Uncertainty

Table 4 reports the constrained co-design recommendations produced by the deployable surrogate stack, including the selected process parameter sets, predicted property means, and uncertainty summaries used for risk-aware ranking. Recommendations are generated from a candidate pool sampled within the observed process envelope and evaluated using fold ensembles to obtain both central tendency and dispersion estimates for each target. Constraint handling is performed by screening candidates against minimum performance thresholds and by ranking feasible solutions with uncertainty-aware criteria, ensuring that reported designs reflect both expected performance and predictive confidence.
Predicted targets are reported as fold-ensemble mean ± standard deviation (GroupKFold-safe fold ensembles). Yield LCB and Roughness UCB correspond to the conservative confidence bounds used in the uncertainty-aware selection analysis. Designs are ordered by the composite score used for ranking feasible candidates.
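The screening-then-ranking logic can be sketched as below; the strength floor, roughness ceiling, $z$, and composite weight are hypothetical values for illustration, not the thresholds used in Table 4:

```python
import numpy as np

def screen_and_rank(mu_y, sig_y, mu_r, sig_r,
                    yield_min=900.0, rough_max=12.0, z=1.0, w=0.1):
    """Hypothetical uncertainty-aware screening: keep candidates whose
    conservative yield LCB clears a strength floor and whose roughness UCB
    stays under a surface ceiling, then rank by a simple composite score.
    Inputs are fold-ensemble means/stds per candidate."""
    lcb = mu_y - z * sig_y          # conservative strength estimate
    ucb = mu_r + z * sig_r          # conservative roughness estimate
    feasible = (lcb >= yield_min) & (ucb <= rough_max)
    score = lcb - w * ucb           # illustrative composite for ranking
    order = np.argsort(-score)      # best score first
    return [i for i in order if feasible[i]]
```

Ranking on confidence bounds rather than means penalizes candidates whose apparent optimality rests on high ensemble variance, which is the mechanism by which overly optimistic recipes are screened out.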
Figure 6 visualizes the co-design outcomes and the logic of recommendation selection. Figure 6a presents the Pareto front for the primary tradeoff axis, showing the candidate cloud, Pareto-optimal subset, and highlighted top recommendations. Figure 6b reports constraint satisfaction rates, comparing all candidates to Pareto-optimal and top-ranked subsets to demonstrate the tightening effect of constraints and ranking. Figure 6c provides the validation view where measured outcomes are available, plotting predicted versus measured performance for the evaluated designs with identity reference and summary metrics. Figure 6d summarizes the recommended recipe set in an interpretable parameter-space view, enabling rapid comparison of power, scan speed, hatch spacing, layer thickness, and derived energy descriptors across top candidates. Figure 6e reports the risk-aware selection analysis, contrasting mean performance with uncertainty or lower-confidence bounds to justify the final recommended subset. Figure 6f presents a local robustness analysis around the selected recipe, quantifying sensitivity to small perturbations in key process variables and indicating whether recommendations lie in locally stable regions rather than fragile optima.

6. Discussion, Conclusions and Data Availability

Interpretation is cast in a hierarchical process–structure–property paradigm, in which process settings and engineered energy-input descriptors index thermal history, with defect and microstructure proxies mediating the observed mechanical and surface responses [28,29,30].
The oracle–deployable comparison is used to delineate morphology information that is recoverable from process variables from irreducible signal that warrants targeted characterization, while uncertainty-aware selection is emphasized to avoid brittle optima under sparse and heterogeneous measurement coverage.

6.1. Materials-Science Interpretation of Dominant Drivers

The results are consistent with a hierarchical process–structure–property framing in which L-PBF process settings and engineered energy-input descriptors act as first-order proxies for the thermal histories that shape defect formation and microstructural evolution, which then mediate macroscopic properties. Within this interpretation, controllable inputs (laser power, scan speed, hatch spacing, layer thickness) together with linear energy density and volumetric energy-density variants provide a compact description of nominal energy input that is physically consistent with established links to melt-pool behavior and solidification conditions, without implying direct measurement of melt-pool stability or thermal gradients. These conditions are reflected downstream through pore population summaries (size, volume fraction, morphology) and prior-β grain morphology (aspect ratio and characteristic length scales), while stress–strain-derived descriptors serve as morphology-adjacent signatures of the combined imprint of microstructure, defects, and residual stress state.
This hierarchy is aligned with the target-dependent value of morphology-derived information observed in the modeling outcomes. Defect-sensitive properties, particularly yield strength and fatigue-relevant surface proxies, are expected to respond to pore volume statistics and sphericity distributions because pores act as stress concentrators and reduce the effective load-bearing area. Where the morphology module indicates usable embedding predictability, the predicted-embedding pathway suggests that parts of this defect-related signal are recoverable from process variables in the present dataset, consistent with the role of energy input and scan strategy in controlling lack-of-fusion and keyhole regimes. More modest or target-specific gains from morphology augmentation should not be interpreted as evidence that the prior-β grain descriptors are irrelevant; rather, the grain-summary block captures only one part of the latent microstructural state, whereas the stress–strain-derived block reflects the integrated response of prior-β morphology together with pores, phase constitution, texture, residual stress, and lattice-defect/dislocation substructure, so only partial correlations between the two are expected.

6.2. What the Multimodal Gains Mean for Nanomaterials-Enabled Functional Surfaces and AM Process Tuning

The multimodal outcomes carry two implications for functional surfaces and process tuning under the present feature and label regime [31,32,33,34]. First, the oracle-versus-deployable comparison provides an explicit bound on how much morphology information can improve property prediction when morphology is fully available versus when it must be inferred from deployable inputs. Oracle morphology embeddings represent the idealized upper bound in which pore and grain descriptors are available at inference time, whereas predicted morphology embeddings correspond to the deployable setting in which morphology is inferred from process settings. The gap between these conditions quantifies how much of the morphology signal is structurally learnable from process variables alone in the current dataset; a small gap is consistent with treating morphology as an implicit latent state recoverable from process descriptors for prospective screening, whereas a large gap indicates irreducible morphology information not encoded in process variables and therefore motivates targeted characterization of a minimal morphology panel that most improves decision-making.
Second, for surface-facing objectives as represented here by surface condition constraints and roughness-linked proxies, the co-design and risk-aware selection workflow supports joint tuning of bulk mechanical objectives and surface-relevant objectives under uncertainty. Even when direct surface or bio-proxy measurements are sparse, uncertainty-aware ranking reduces the likelihood of selecting candidates that appear optimal only due to model variance. The constraint-satisfaction analysis and Pareto visualization formalize tradeoffs central to functional implants and engineered interfaces, where high strength must coexist with acceptable roughness and surface condition constraints [35,36,37]. Practically, the workflow identifies process windows expected to deliver compliant bulk properties while maintaining surface-relevant bounds and highlights regimes where predicted uncertainty is high and additional experiments would be maximally informative.

6.3. Conclusions

This work presents a deployment-oriented surrogate modeling framework for L-PBF Ti-6Al-4V that explicitly reflects the constraints of small, set-level experimental datasets and partially observed morphology descriptors. By enforcing recipe-level grouping (GroupKFold by set_id) and fold-respecting preprocessing, the study provides a leakage-resistant basis for comparing process-only and multimodal predictors.
A key contribution is the principled separation of oracle multimodal inference (morphology available at decision time) from a deployable setting in which morphology must be inferred from process-accessible inputs via a two-stage embedding strategy. Across targets, the resulting models achieve strong performance for primary mechanical properties while underscoring that roughness and hardness remain difficult, likely reflecting label sparsity and missing or weakly captured surface-state determinants. Finally, the framework operationalizes uncertainty through fold-ensemble variability and integrates this signal into a constraint-aware co-design workflow, enabling conservative screening and recommendation of candidate recipes within the feasible process envelope. Collectively, the study advances a reproducible and practically actionable template for data-scarce AM optimization, bridging multimodal learning and decision-making under uncertainty. More broadly, the present framework is consistent with recent efforts in other safety-critical engineering domains to combine physically informed modeling, hybrid learning, and calibrated uncertainty estimation to improve deployment trustworthiness [37].
Although developed here for L-PBF Ti-6Al-4V, the framework is general to other process–structure–property systems with small, partially observed multimodal datasets. Extension to a new alloy or manufacturing route would require redefinition of the relevant process descriptors, morphology/microstructure proxies, and targets, followed by system-specific retraining and prospective validation of the surrogate and embedding models.

6.4. Data and Code Availability

The base dataset underpinning this study is publicly available and can be obtained from Zenodo (record 6587905): https://zenodo.org/records/6587905 (accessed on 31 March 2026) [38]. Derived datasets and research artifacts produced in the course of this work, including curated set-level master tables, engineered physics features (e.g., line energy density and energy-density variants), morphology and microstructure summary features (pore and prior-β grain descriptors), stress–strain-derived descriptors, finalized feature filters, GroupKFold split definitions, trained-model outputs (fold-level predictions), and aggregated performance summaries, have not been deposited in a public repository. These derived materials are available from the corresponding author upon reasonable request and subject to any applicable data-use, privacy, or third-party restrictions.
The code used to generate the results reported in this study is not publicly available at this time, owing to practical constraints related to project-specific dependencies, environment configuration, and associated research artifacts. However, the code can be made available by the corresponding author upon reasonable request, subject to any applicable institutional, licensing, and third-party restrictions.

Author Contributions

R.B.H. led the study, including methodology development, implementation of the computational framework, data processing and curation, model training and evaluation, uncertainty analysis, visualization, and preparation of the original manuscript. X.P. and G.C. provided overall conceptual guidance, shaped the research direction and experimental design, and contributed to methodological refinement, interpretation of results, and manuscript revision; X.P. served as the corresponding author and oversaw project administration. X.S. and Y.T. contributed to investigation and validation activities, supported data preparation and quality control, and participated in technical review of the analysis and results. X.H. contributed to investigation and validation, assisted with data organization and verification, and contributed to manuscript review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 12372334) and the Southern Xinjiang Key Industry Innovation and Development Support Project of Xinjiang (No. 2019DB011).

Data Availability Statement

No new experimental data were created in this study. All analyses are based on the publicly available dataset on Zenodo (record 6587905); derived set-level tables and model outputs are available from the corresponding author upon reasonable request, as described in Section 6.4.

Acknowledgments

The authors gratefully acknowledge the support and facilities provided by the School of Mechanical Engineering at the Nanjing University of Science and Technology. Special thanks are extended to the corresponding author, Pan Xuchao, for his essential supervision, expert guidance on physics and mathematical modeling, and leadership throughout the manuscript revision process.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Dataset, Feature Engineering, and Modality Metadata

Appendix A.1. Scope and Rationale

This appendix specifies the dataset schema, feature construction rules, modality definitions, and missing-data handling in a level of detail sufficient for strict computational reproducibility. The emphasis is placed on (i) unambiguous identifier logic at the set level, (ii) explicit feature semantics and units for both process-derived and morphology-derived descriptors, and (iii) consistent policies for filtering and imputation under partially observed multimodal measurements.

Dataset Schema and Identifiers

The analysis is conducted at the set level, indexed by a unique identifier denoted set_id, such that each set corresponds to a single experimental condition under which L-PBF process parameters, engineered physics descriptors, morphology or microstructure proxies, and property targets are coherently assigned. Raw inputs are heterogeneous and may be represented by multiple modality-specific files per set; consequently, a deterministic mapping from each raw file to set_id is enforced prior to aggregation. After parsing and mapping, all modalities are reduced to a single fixed-length set-level representation by modality-specific summarization operators and then joined into a unified master table keyed by set_id. This design ensures that each learning instance represents a complete experimental unit and prevents inadvertent replication of the same set across multiple rows.

Appendix A.2. Target Definitions and Unit Conventions

Six targets are modeled as set-level scalar responses: yield strength (MPa), ultimate tensile strength (MPa), elongation to failure (%), elastic modulus (GPa), surface roughness (µm), and Vickers microhardness (HV). Targets are defined by explicit column identifiers within the processed master table, and the associated unit conventions are treated as fixed. Any transformation applied to targets, including clipping, winsorization, log transformation, or outlier removal, must be declared explicitly to maintain interpretability of performance metrics and comparability across studies; when no such transformations are applied, this absence is stated to preclude ambiguity. Targets with incomplete labeling are retained under a labeled-subset protocol, in which folds are evaluated using only those sets with valid target values, while preserving the set_id grouping constraint described in Appendix A.6.

Appendix A.3. Tabular and Process-Derived Features with Engineered Physics Descriptors

Tabular predictors comprise the retained L-PBF process parameters and a set of engineered physics descriptors derived deterministically from those parameters. Process variables are retained subject to a coverage criterion, whereby a feature is preserved only if its non-null fraction meets a pre-specified threshold (reported below). Engineered descriptors include line-energy density and energy-density variants, which provide physically motivated compressions of the process space and facilitate cross-feature comparisons. For clarity and reproducibility, engineered features are defined by explicit formulae and units. A representative formulation for line-energy density is LED = P/v, where P denotes laser power and v denotes scan speed, with units consistent with the input parameterization. A representative volumetric energy density variant may be expressed as VED = P/(v·h·t), where h denotes hatch spacing and t denotes layer thickness. Because such descriptors can be highly collinear with their constituent process parameters, collinearity is treated as an expected property of the feature space rather than an anomaly; downstream learning therefore emphasizes robust cross-validated performance under a fixed feature specification rather than relying on ad hoc feature pruning.
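The deterministic construction of the engineered descriptors can be sketched as follows. This is a minimal illustration of the LED and VED formulae above; the column names (`power_W`, `speed_mm_s`, `hatch_mm`, `layer_mm`) are hypothetical placeholders, not the study's actual schema.

```python
import pandas as pd

def add_energy_density_features(df: pd.DataFrame) -> pd.DataFrame:
    """Append line and volumetric energy density columns.

    Assumes illustrative columns: power_W (P, W), speed_mm_s (v, mm/s),
    hatch_mm (h, mm), layer_mm (t, mm). Units follow the inputs.
    """
    out = df.copy()
    out["LED_J_mm"] = out["power_W"] / out["speed_mm_s"]  # LED = P / v
    out["VED_J_mm3"] = out["power_W"] / (
        out["speed_mm_s"] * out["hatch_mm"] * out["layer_mm"]  # VED = P / (v h t)
    )
    return out

df = pd.DataFrame({"power_W": [200.0], "speed_mm_s": [1000.0],
                   "hatch_mm": [0.10], "layer_mm": [0.03]})
feat = add_energy_density_features(df)
# e.g. LED = 200/1000 = 0.2 J/mm; VED = 200/(1000 x 0.10 x 0.03) J/mm^3
```

Because both derived columns are exact functions of the sampled parameters, the same helper can be reused verbatim at candidate-generation time, which preserves schema consistency between training and screening.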

Appendix A.4. Morphology and Microstructure Proxies: Pore Morphology, Prior-β Grain Morphology, and Stress–Strain Descriptors

Morphology and microstructure proxies are represented as modality-specific feature blocks extracted from raw modality files and summarized to the set level. Each modality is identified through a modality-aware inventory step that assigns files to one of the following categories: pore morphology, prior-β grain morphology, and stress–strain-derived descriptors. After modality identification and set_id mapping, each modality is summarized by a consistent family of distributional operators applied to per-object or per-curve measurements to produce a fixed-length descriptor vector per set. Specifically, for a raw measurement vector x within a set and modality, the summarization operator returns a standardized collection of statistics including sample count n, mean μ, standard deviation σ, extrema (min, max), and selected quantiles (for example, median and upper-tail quantiles), computed using a consistent quantile definition across all sets. Feature names encode the modality, the underlying physical quantity, and the aggregation operator, thereby enabling immediate traceability from any derived feature to its modality origin and statistical meaning.
For pore morphology, the raw measurements typically include pore volume, equivalent diameter, surface area, and sphericity, among others; the set-level representation is obtained by aggregating these distributions and, where appropriate, incorporating stabilized transforms (such as log-scale variants for heavy-tailed size distributions). For prior-β grain morphology, the raw measurements commonly include grain aspect ratio and equivalent diameter; these are summarized using the same aggregation family to preserve cross-modality consistency. For stress–strain-derived descriptors, curve-derived proxies are treated as a morphology-adjacent modality when they are used as input predictors; the resulting descriptors are computed deterministically from curve traces and summarized per set, yielding a stable set-level block that can be concatenated with process and morphology features under the locked specification. When a modality is absent for a subset of sets, the absence is treated explicitly as missingness in the corresponding feature block rather than as an implicit exclusion of those sets from the study.
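The summarization operator described above can be sketched as a single function applied per set and modality. The statistics match those listed (count, mean, standard deviation, extrema, median, and upper-tail quantiles); the feature-name prefix convention is illustrative of the traceable naming scheme, not the exact identifiers used in the study.

```python
import numpy as np

def summarize_modality(values, prefix):
    """Fixed-length distributional summary for one modality within one set.

    Feature names encode modality + quantity (via `prefix`) and the
    aggregation operator, so each derived feature is traceable.
    """
    x = np.asarray(values, dtype=float)
    x = x[~np.isnan(x)]  # per-object measurements may be partially missing
    q = np.quantile(x, [0.50, 0.90, 0.95])  # one quantile definition for all sets
    return {
        f"{prefix}_n": int(x.size),
        f"{prefix}_mean": x.mean(),
        f"{prefix}_std": x.std(ddof=1) if x.size > 1 else 0.0,
        f"{prefix}_min": x.min(),
        f"{prefix}_max": x.max(),
        f"{prefix}_p50": q[0],
        f"{prefix}_p90": q[1],
        f"{prefix}_p95": q[2],
    }

# Illustrative per-object sphericity values for one set
row = summarize_modality([1.0, 2.0, 3.0, 4.0], "pore_sphericity")
```

Applying the same function to pore, grain, and stress–strain measurements keeps the aggregation family identical across modalities, which is what makes the resulting blocks directly concatenable.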

Appendix A.5. Missingness Characterization, Imputation Policy, and Feature Filtering

The multimodal structure implies non-uniform modality availability across sets, with certain sets lacking pore, grain, or stress–strain descriptors depending on upstream data availability. Missingness is quantified at both the feature level and the modality-block level to separate sporadic missingness from structurally absent modalities. Feature filtering is applied prior to model fitting using a coverage threshold defined as a minimum required non-null fraction across sets (for example, a criterion of at least 0.70 non-null coverage), ensuring that the retained feature set does not contain systematically unobserved predictors. For the remaining missing values within retained features, imputation is performed using a fold-respecting rule in which imputation parameters are estimated on the training portion of each fold and applied to the corresponding validation or test portion. Median imputation is adopted as the default policy because it is robust under heavy-tailed distributions and small-sample settings and does not impose parametric assumptions that are rarely defensible in sparse multimodal materials datasets. This fold-conditional approach prevents information leakage by disallowing test-fold statistics from influencing training-fold preprocessing.
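The coverage filter and fold-respecting median imputation can be sketched as below. The 0.70 threshold matches the criterion stated above; the toy data are purely illustrative.

```python
import numpy as np
import pandas as pd

def coverage_filter(df: pd.DataFrame, threshold: float = 0.70) -> pd.DataFrame:
    """Keep only features whose non-null fraction meets the coverage threshold."""
    keep = df.columns[df.notna().mean() >= threshold]
    return df[keep]

def fold_median_impute(train_df: pd.DataFrame, test_df: pd.DataFrame):
    """Estimate medians on the training fold only, then apply to both partitions,
    so test-fold statistics never influence preprocessing."""
    medians = train_df.median()
    return train_df.fillna(medians), test_df.fillna(medians)

# Illustrative: feature 'b' has 0.50 coverage and is dropped at the 0.70 threshold.
df = pd.DataFrame({"a": [1.0, np.nan, 3.0, 4.0],
                   "b": [np.nan, np.nan, 1.0, 2.0]})
filtered = coverage_filter(df, 0.70)               # retains only 'a'
tr, te = fold_median_impute(filtered.iloc[:3], filtered.iloc[3:])
```

Note that the median used to fill the training fold (here 2.0, from the non-null training values) is the same one applied to the held-out rows, which is exactly the leakage control the text describes.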

Appendix A.6. Final Locked Feature Sets and Reproducible Evaluation Protocol

All reported models are evaluated under locked feature specifications that define the exact set of retained predictors and the modality composition of each variant. Variants are defined by the concatenation of tabular/process features with one or more morphology blocks, including tab-only, tab plus pores, tab plus grains, tab plus pores and grains, and a combined multimodal variant containing all eligible morphology blocks. The locked specification includes both the prefiltering coverage criterion and the resulting postfilter feature list, thereby eliminating feature drift and ensuring that results can be reproduced exactly from the same processed master table. Model assessment follows a group-aware cross-validation protocol in which splits are constructed by GroupKFold with five folds, using set_id as the grouping variable, and where any hyperparameter selection, preprocessing parameter estimation, and imputation are conducted strictly within the training partition of each fold. This protocol enforces leakage control by construction, since no set_id appears simultaneously in training and test partitions within a fold, and it preserves the correct experimental unit of analysis throughout the pipeline.

Appendix B. Modeling Protocol, Ablations, and Additional Diagnostics

This appendix specifies the modeling workflow in sufficient detail to enable independent reproduction while preserving concision in the main text. Emphasis is placed on leakage-safe evaluation for small-n, transparent model selection, and reporting practices that remain stable under partially missing modalities and targets.

Appendix B.1. Cross-Validation Protocol and Leakage Controls

All predictive models are evaluated under a 5-fold GroupKFold protocol in which the grouping variable is the set-level identifier, set_id, such that no set contributes samples to both training and test partitions within a fold. The normalized split file contains five folds spanning all 42 sets, with fold sizes of train/test = 33/9 for folds 1–2 and 34/8 for folds 3–5. Leakage controls are enforced by performing every data-dependent operation strictly within each training fold and then applying the learned transformation to the corresponding test fold; this includes median imputation parameters, standardization parameters for scale-sensitive models, and any feature filtering rules based on non-null coverage. For the roughness target, only 28 of 42 sets are labeled; evaluation is therefore performed on the labeled subset within each fold’s held-out partition, and folds contribute to the reported metrics through the labeled test instances only (thereby preventing any imputed label leakage and avoiding artificial inflation of performance).
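The grouping and labeled-subset logic can be sketched with synthetic stand-in data. With 42 singleton groups and five folds, GroupKFold reproduces the stated 9/9/8/8/8 test-fold sizes; the masked labels mimic the roughness target (28 of 42 labeled), and the model is a simple Ridge stand-in for the actual regressors.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Synthetic stand-in: 42 sets, one row per set; labels present for only 28 sets.
rng = np.random.default_rng(0)
X = rng.normal(size=(42, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=42)
y_masked = y.copy()
y_masked[rng.choice(42, size=14, replace=False)] = np.nan
groups = np.arange(42)  # set_id as the grouping variable

sizes, scores = [], []
for tr, te in GroupKFold(n_splits=5).split(X, groups=groups):
    sizes.append(len(te))
    # fit on labeled training sets only; score on labeled held-out sets only
    tr_lab = tr[~np.isnan(y_masked[tr])]
    te_lab = te[~np.isnan(y_masked[te])]
    model = Ridge(alpha=1.0).fit(X[tr_lab], y_masked[tr_lab])
    if te_lab.size >= 2:
        scores.append(r2_score(y_masked[te_lab], model.predict(X[te_lab])))
```

Because missing labels are simply excluded from both fitting and scoring, no imputed label can leak into the metric, matching the labeled-subset protocol above.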

Appendix B.2. Models and Hyperparameter Grids

Baseline comparisons include linear, kernel, neighborhood, and tree-based regressors, selected to represent widely used tabular learning families under CPU constraints. Hyperparameters are tuned via discrete grids using training-fold internal validation, and the selected best setting is recorded per fold and target. The Ridge regressor is tuned over alpha ∈ {0.1, 1.0, 10.0}. The SVR with RBF kernel is tuned over C ∈ {1, 10} and gamma ∈ {scale, 0.01}. KNN regression is tuned over n_neighbors ∈ {3, 7, 15}. Random Forest regression uses a large forest (n_estimators = 2000) tuned over max_depth ∈ {None, 5} with max_features = sqrt. ExtraTrees regression uses a large forest (n_estimators = 4000) tuned over max_depth ∈ {None, 5} with max_features = sqrt. Histogram-based Gradient Boosting is tuned with max_iter = 2000 and learning_rate ∈ {0.01, 0.05, 0.1}. XGBoost regression is tuned over learning_rate ∈ {0.02, 0.05, 0.1}, max_depth ∈ {3, 5}, and min_child_weight ∈ {1, 5}. LightGBM regression is tuned using n_estimators = 6000, learning_rate ∈ {0.02, 0.05}, max_depth ∈ {3, 5}, and min_child_samples ∈ {5, 20}. CatBoost regression is tuned using iterations = 6000, learning_rate ∈ {0.02, 0.05}, depth ∈ {4, 6, 8}, and l2_leaf_reg ∈ {10, 30}. Early stopping for boosting libraries is applied only when supported by the installed package API; otherwise, a fixed estimator budget is used, and model selection proceeds through the discrete grid described above to ensure version-robust reproducibility.
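The per-fold grid-selection pattern can be sketched as follows, here with the Ridge grid from above as a stand-in for the larger boosting grids (which would require the corresponding third-party libraries). The synthetic data and grouping mirror the set-level protocol; only the selection mechanics are the point.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, GroupKFold
from sklearn.linear_model import Ridge

# Synthetic stand-in for the set-level master table
rng = np.random.default_rng(1)
X = rng.normal(size=(42, 6))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=42)
groups = np.arange(42)

best_alphas = []
for tr, te in GroupKFold(n_splits=5).split(X, groups=groups):
    # Discrete grid tuned via training-fold internal validation only;
    # the held-out fold `te` is never seen during selection.
    search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=3)
    search.fit(X[tr], y[tr])
    best_alphas.append(search.best_params_["alpha"])  # recorded per fold/target
```

Swapping `Ridge` and its grid for any of the listed regressors and their grids leaves the protocol unchanged, which is what makes the comparison across model families fair.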

Appendix B.3. Optimizer Ablation Details

The optimizer study evaluates training robustness for a fixed neural tabular regressor under multiple optimizers, with identical data splits and identical preprocessing across optimizers to isolate optimizer effects. The compared optimizers are AdamW, Muon (all-2D variant), and a hybrid strategy that applies Muon to selected submodules while retaining AdamW for the remaining parameters, thereby testing whether curvature-aware updates improve stability on small-n tabular regression. A log-spaced learning-rate sweep is conducted over a compact range spanning approximately 1 × 10⁻⁴ to 4 × 10⁻², with weight decay included as a controlled regularizer; the selection rule is “fair” in the sense that each optimizer is permitted to operate at its fold-wise best learning rate, and fold-wise performance distributions are then summarized to quantify both central tendency and dispersion. Stability is reported using per-target boxplots of the validation metric across runs, and operating regions are summarized as target-by-learning-rate response maps to identify plateaus of stable training. The recommended final optimizer setting from the fairness study is AdamW with learning rate 1 × 10⁻³ and weight decay 1 × 10⁻⁴, which is used as the default for subsequent neural training runs when neural baselines are included.

Appendix B.4. Metrics and Statistical Reporting

Predictive accuracy is quantified using RMSE, MAE, and the coefficient of determination R², computed on held-out test partitions within each fold. For a target vector y and predictions ŷ over n held-out samples, RMSE = √((1/n) Σᵢ₌₁ⁿ (yᵢ − ŷᵢ)²), MAE = (1/n) Σᵢ₌₁ⁿ |yᵢ − ŷᵢ|, and R² = 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − ȳ)². Reported headline values are the mean and standard deviation of fold-wise metrics computed across the five folds (or across the subset of folds contributing labeled test instances for partially labeled targets). Comparative statements between the final hybrid surrogate and the strongest baseline are based on fold-level deltas (final minus baseline) for RMSE and R², and confidence intervals are computed via bootstrap resampling of folds with replacement using a percentile-based interval; the replicate count and random seed used for bootstrap resampling are recorded in the exported results bundle for exact reproducibility.
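The fold-level percentile bootstrap can be sketched as follows. The five delta values are illustrative placeholders for actual fold-wise RMSE differences (final minus baseline); the replicate count and seed are arbitrary here, whereas the study records its own in the results bundle.

```python
import numpy as np

def bootstrap_ci(fold_deltas, n_boot=10000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean fold-level delta (final - baseline)."""
    rng = np.random.default_rng(seed)
    deltas = np.asarray(fold_deltas, dtype=float)
    # Resample folds with replacement and take the mean of each replicate
    idx = rng.integers(0, deltas.size, size=(n_boot, deltas.size))
    means = deltas[idx].mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return deltas.mean(), (lo, hi)

# Illustrative RMSE deltas across five folds (negative = final model improves)
mean_delta, (lo, hi) = bootstrap_ci([-1.2, -0.8, -1.5, -0.4, -1.0])
```

An interval that excludes zero (here the upper bound is negative) is what licenses a comparative claim that the improvement is not an artifact of a single favorable fold.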

Appendix B.5. Additional Plots and Checks

Supplementary diagnostics are used to assess failure modes beyond aggregate scalar metrics. First, parity plots are examined per target to identify systematic bias, slope compression, and heteroscedastic regions, with particular attention to targets exhibiting weak R2 (for example, hardness under purely tabular baselines). Second, residual-versus-predicted plots are generated to diagnose heteroscedasticity and leverage points that can dominate RMSE in small-n settings. Third, modality sensitivity checks are conducted by rerunning model selection under alternative feature-set definitions (tabular-only versus tabular plus individual morphology modalities) to quantify whether performance gains arise from genuine information content rather than feature-count inflation. Finally, robustness to feature filtering is evaluated qualitatively by confirming that the selected surrogate remains stable under reasonable non-null coverage thresholds (for example, 0.70 as the default), and any deviations are documented to prevent overinterpretation of single-threshold artifacts.

Appendix C. Morphology Module and Co-Design Methodology

This appendix specifies the two-stage morphology strategy and the uncertainty-aware co-design pipeline in a deployment-feasible form. The objective is to formalize a procedure that remains statistically defensible under small sample size, multimodal heterogeneity, and partially missing modality coverage, while preserving reproducibility through explicit training, leakage control, and selection rules.

Appendix C.1. Morphology Embedding Construction

Morphology information is represented through modality-specific low-dimensional embeddings derived from set-level summarized features for pores, prior-β grain morphology, and stress–strain-derived descriptors. Embeddings are constructed using principal component analysis (PCA) fit exclusively on training folds within the GroupKFold protocol, and then applied to the corresponding held-out fold; PCA is never fit on any data that includes the test fold, preventing leakage through global covariance structure. Dimensionality is controlled by a cumulative explained-variance threshold of 0.95 together with a hard cap of eight components, which constrains model flexibility and mitigates overfitting in the low-n regime. Under this procedure, the effective dimensionality collapses to one component per modality in the reported run (pores: 1, grains: 1, stress: 1), consistent with strong redundancy after aggregation and with the need for stable downstream regression. This embedding choice prioritizes (i) dimensionality discipline when the number of sets is small, (ii) fold-stable representations, and (iii) interpretability of modality contribution through a compact latent axis.
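The variance-thresholded, capped PCA fit can be sketched as below. The synthetic block is built to be highly redundant (noisy copies of one latent axis), so the effective dimensionality collapses to one component, mirroring the behavior reported for the aggregated morphology blocks; thresholds match the stated 0.95 and cap of eight.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_capped_pca(train_block, var_threshold=0.95, max_components=8):
    """Fit PCA on the training fold only; keep the smallest number of
    components reaching the cumulative explained-variance threshold,
    capped at max_components."""
    scaler = StandardScaler().fit(train_block)
    pca_full = PCA().fit(scaler.transform(train_block))
    cum = np.cumsum(pca_full.explained_variance_ratio_)
    k = min(int(np.searchsorted(cum, var_threshold) + 1), max_components)
    pca_k = PCA(n_components=k).fit(scaler.transform(train_block))
    return scaler, pca_k  # apply both to the held-out fold, never refit

# Highly redundant block: six noisy copies of one latent axis
rng = np.random.default_rng(0)
latent = rng.normal(size=(30, 1))
block = latent + 0.01 * rng.normal(size=(30, 6))
scaler, pca = fit_capped_pca(block)
```

Held-out sets are transformed with the returned `scaler` and `pca` objects, never refit, which is the leakage control through global covariance structure that the text requires.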

Appendix C.2. Embedding Predictability Model

A supervised predictor is trained to infer morphology embeddings from tabular and process inputs, enabling morphology-aware inference at decision time when morphology measurements are unavailable. The embedding predictability model is trained and evaluated under the same GroupKFold partitions and preprocessing discipline as the primary surrogates: preprocessing statistics are estimated on training folds and applied to test folds, and any feature filtering or imputation is performed without access to the held-out fold. Predictability is quantified by out-of-fold R2 computed separately for each modality embedding and then summarized across folds; these metrics characterize the extent to which process conditions encode morphology variation and provide a deployment-relevant diagnostic of whether predicted morphology embeddings are informative or should be down-weighted in decision making.
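The out-of-fold predictability diagnostic can be sketched as follows. The data are a synthetic stand-in in which process features only partially encode a one-dimensional morphology embedding, and a Random Forest is used as an illustrative predictor; the out-of-fold R² then plays the role of the deployment-relevant diagnostic described above.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_predict
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in: process features partially encode a 1-D morphology embedding
rng = np.random.default_rng(3)
X_proc = rng.normal(size=(42, 5))
emb = 0.8 * X_proc[:, 0] + 0.6 * rng.normal(size=42)  # partially recoverable
groups = np.arange(42)  # set_id grouping, as for the primary surrogates

# Out-of-fold embedding predictions under the same GroupKFold discipline
pred = cross_val_predict(
    RandomForestRegressor(n_estimators=200, random_state=0),
    X_proc, emb, cv=GroupKFold(n_splits=5), groups=groups,
)
oof_r2 = r2_score(emb, pred)  # predictability diagnostic for this modality
```

A low or negative `oof_r2` for a modality is the signal, per the text, that predicted embeddings for that modality should be down-weighted in decision making rather than trusted at face value.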

Appendix C.3. Oracle, Predicted, and Deployable Variants

Three model variants are defined to separate scientific upper bounds from deployment-feasible configurations. The oracle variant uses embeddings computed directly from measured morphology features and therefore assumes that full morphology characterization is available at inference time; this variant defines an optimistic ceiling but is not generally deployable. The predicted variant replaces measured embeddings with embeddings predicted from process parameters, maintaining a morphology-informed representation without requiring morphology measurements for new designs. The deployable variant restricts all inputs to those available at decision time and uses predicted embeddings only for targets whose surrogate specification requires morphology information; targets for which morphology does not improve performance or is not required remain purely tabular. Selection of the deployable variant per target follows an explicit exclusion rule that disallows oracle models and chooses the best non-oracle configuration under cross-validated performance, ensuring that reported co-design decisions are consistent with the deployment constraint rather than retrospective availability of measurements.

Appendix C.4. Candidate Generation

Candidate recipes are generated by sampling within a physically plausible design space anchored to the observed process-parameter ranges, with optional controlled expansion if exploration beyond the experimental envelope is intended and explicitly justified. A fixed candidate budget is used for tractability, with 5000 candidates generated in the reported run. The design vector is defined by the filtered tabular and process feature set retained after coverage-based filtering, and engineered physics features are computed deterministically from sampled process parameters using the same feature definitions as in model training to preserve schema consistency. When a target’s deployable surrogate requires morphology information, candidate morphology embeddings are obtained by applying the embedding predictability model to candidate process features, thereby producing morphology-informed inputs without relying on unmeasured modalities. Feasibility safeguards are applied by enforcing bounds and admissibility checks on sampled variables, preventing candidates that violate physically meaningful limits or exceed the modeled operating space by construction.
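The bounded sampling step can be sketched as below, with the 5000-candidate budget from the reported run. The variable names and bound values are illustrative, not the study's actual envelope; the derived LED column reuses the same formula as in training, which is the schema-consistency requirement stated above.

```python
import numpy as np

def sample_candidates(bounds, n=5000, seed=0):
    """Uniformly sample candidate recipes within per-variable (low, high) bounds.

    Bounds are anchored to the observed process-parameter ranges, so all
    candidates are admissible by construction.
    """
    rng = np.random.default_rng(seed)
    names = list(bounds)
    lows = np.array([bounds[k][0] for k in names])
    highs = np.array([bounds[k][1] for k in names])
    samples = rng.uniform(lows, highs, size=(n, len(names)))
    return names, samples

# Illustrative bounds (hypothetical values, not the study's actual envelope)
bounds = {"power_W": (150.0, 350.0), "speed_mm_s": (600.0, 1400.0),
          "hatch_mm": (0.08, 0.14), "layer_mm": (0.02, 0.06)}
names, cand = sample_candidates(bounds, n=5000)
led = cand[:, 0] / cand[:, 1]  # derived LED = P / v, same formula as training
```

Controlled expansion beyond the envelope, when justified, amounts to widening selected (low, high) pairs; the admissibility check is then the same bounds test applied to the widened ranges.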

Appendix C.5. Uncertainty-Aware Constraint Satisfaction

Co-design is performed under predictive uncertainty using an ensemble derived from the fold-trained models. For each candidate, the ensemble mean μ is computed as the average prediction across the fold models, and the ensemble uncertainty σ is computed as the standard deviation across fold predictions, representing sensitivity to data resampling and model estimation in the small-n regime. Constraints are enforced through conservative risk-aware rules: for “must exceed” constraints, feasibility is assessed by μ − zσ ≥ τ, and for “must be below” constraints, feasibility is assessed by μ + zσ ≤ τ, where τ is the threshold and z sets the conservativeness. Because stringent conservativeness can eliminate feasible designs in partially observed and noisy regimes, an adaptive relaxation strategy is employed in which z and, where applicable, quantile requirements are iteratively reduced until a non-empty feasible set appears, while preserving a monotone relaxation schedule and logging each iteration. This procedure yields a transparent trade-off between reliability and feasibility, rather than an opaque single-shot thresholding that may fail silently.
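The conservative feasibility rules and the monotone relaxation schedule can be sketched as below. The μ and σ values are illustrative ensemble statistics for a hypothetical "yield strength must exceed 900 MPa" constraint; the starting z and step size are assumptions, not the study's settings.

```python
import numpy as np

def feasible_mask(mu, sigma, z, lower=None, upper=None):
    """Conservative feasibility: mu - z*sigma >= lower and mu + z*sigma <= upper."""
    ok = np.ones_like(mu, dtype=bool)
    if lower is not None:
        ok &= mu - z * sigma >= lower   # "must exceed" rule
    if upper is not None:
        ok &= mu + z * sigma <= upper   # "must be below" rule
    return ok

def adaptive_feasible(mu, sigma, lower=None, upper=None,
                      z_start=2.0, z_min=0.0, step=0.25):
    """Monotonically relax z until the feasible set is non-empty; log each step."""
    z, log = z_start, []
    while True:
        mask = feasible_mask(mu, sigma, z, lower, upper)
        log.append((z, int(mask.sum())))  # transparent relaxation trace
        if mask.any() or z <= z_min:
            return mask, z, log
        z = max(z - step, z_min)

# Illustrative ensemble means/std-devs (MPa) under a "must exceed 900" constraint
mu = np.array([920.0, 905.0, 880.0])
sigma = np.array([5.0, 12.0, 4.0])
mask, z_final, log = adaptive_feasible(mu, sigma, lower=900.0)
```

Here the first candidate passes at z = 2.0 (920 − 2·5 = 910 ≥ 900), so no relaxation occurs; had all candidates failed, the logged (z, count) trace would record each relaxation step, making the reliability–feasibility trade-off auditable rather than silent.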

Appendix C.6. Recommendation Selection and Table 4 Construction

Recommendations are selected from the feasible set using multi-objective ranking appropriate to the design task. When competing objectives are present, Pareto filtering is applied to identify non-dominated candidates, followed by secondary ranking criteria that emphasize risk-adjusted performance under the same uncertainty-aware constraint rules. When objectives can be scalarized, a composite score is used with explicit weights and with penalties for uncertainty or constraint proximity if risk aversion is required. Table 4 reports the final recommendation set with process recipe variables defining each candidate, predicted means for the relevant targets, corresponding uncertainty estimates, explicit constraint pass/fail indicators under the adopted risk rule, and a rank or identifier supporting traceability. Physical validation requires fabrication of the recommended recipes within the specified bounds, measurement of the same mechanical and surface endpoints under consistent protocols, and quantitative comparison of measured outcomes against predictive means and conservative bounds to evaluate both accuracy and uncertainty calibration in a deployment-aligned setting.
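The Pareto filtering step can be sketched with a minimal non-dominated filter over minimized objectives; maximized objectives (such as strength) are negated first. The candidate tuples are illustrative, not recommendations from Table 4.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows, assuming all objectives are minimized.

    A row is dominated if some other row is <= on every objective and
    strictly < on at least one.
    """
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i in range(obj.shape[0]):
        dominated = any(
            j != i and np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(obj.shape[0])
        )
        if not dominated:
            keep.append(i)
    return keep

# Illustrative two-objective screen: (-yield strength, roughness), both minimized
cands = [(-1000.0, 8.0), (-950.0, 5.0), (-940.0, 9.0)]
front = pareto_front(cands)
```

The third candidate is dominated by the second (weaker and rougher), so only the first two survive; secondary risk-adjusted ranking and the uncertainty-aware constraint rules are then applied within this non-dominated set.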

References

  1. Cao, S.; Zou, Y.; Lim, C.V.S.; Wu, X. Review of laser powder bed fusion (LPBF) fabricated Ti-6Al-4V: Process, post-process treatment, microstructure, and property. Light Adv. Manuf. 2021, 2, 1. [Google Scholar] [CrossRef]
  2. Ruckh, E.; Lambert-Garcia, R.; Hocine, S.; Getley, A.C.; Iantaffi, C.; Bhatt, A.; Fitzpatrick, M.; Marussi, S.; Farndell, A.; Schubert, T.; et al. Process mapping of Ti-6Al-4V laser powder bed fusion using in situ high-speed synchrotron x-ray imaging. Addit. Manuf. 2025, 113, 105007. [Google Scholar] [CrossRef]
  3. Hassanin, H.; El-Sayed, M.A.; Ahmadein, M.; Alsaleh, N.A.; Ataya, S.; Ahmed, M.M.Z.; Essa, K. Optimising surface roughness and density in titanium fabrication via laser powder bed fusion. Micromachines 2023, 14, 1642. [Google Scholar] [CrossRef]
4. Wang, J.; Zhu, R.; Liu, Y.; Zhang, L. Online detection of balling phenomena for laser powder bed fusion using molten pool morphology. Adv. Powder Mater. 2023, 2, 100137.
5. Xu, X.; Xie, Z.; Wu, M.; Ma, C. Effects of laser process parameters on melt pool thermodynamics, surface morphology and residual stress of laser powder bed-fused TiAl-based composites. Metals 2025, 15, 1234.
6. Chowdhury, S.; Yadaiah, N.; Prakash, C.; Ramakrishna, S.; Dixit, S.; Gupta, L.R.; Buddhi, D. Laser powder bed fusion: A state-of-the-art review of the technology, materials, properties & defects, and numerical modelling. J. Mater. Res. Technol. 2022, 20, 2109–2172.
7. Dilip, J.J.S.; Zhang, S.; Teng, C.; Zeng, K.; Robinson, C.; Pal, D.; Stucker, B. Influence of processing parameters on the evolution of melt pool, porosity, and microstructures in Ti-6Al-4V alloy parts fabricated by selective laser melting. Prog. Addit. Manuf. 2017, 2, 157–167.
8. Qian, C.; Zhang, K.; Zhu, J.; Liu, Y.; Liu, Y.; Liu, J.; Liu, J.; Yang, Y.; Wang, H. Effect of processing parameters on the defects distribution of Ti–6Al–4V alloy by selective laser melting. AIP Adv. 2024, 14, 025246.
9. Correa-Gómez, E.; Castro-Espinosa, H.; Caballero-Ruiz, A.; García-López, E.; Garcia Garcia, G. Effect of process parameters on the roughness and tensile behavior of parts manufactured by the metals LPBF process. Eng. Rep. 2024, 6, e12904.
10. Hertlein, N.; Deshpande, S.; Venugopal, V.; Kumar, M.; Anand, S. Prediction of selective laser melting part quality using hybrid Bayesian network. Addit. Manuf. 2020, 32, 101089.
11. Tapia, G.; Khairallah, S.A.; Matthews, M.J.; King, W.E.; Elwany, A. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel. Int. J. Adv. Manuf. Technol. 2018, 94, 3591–3603.
12. Tapia, G.; Elwany, A.H. Prediction of porosity in SLM parts using a MARS statistical model and Bayesian inference. In Proceedings of the Solid Freeform Fabrication Symposium (SFF), Austin, TX, USA, 10–12 August 2015; pp. 1205–1219. Available online: https://utw10945.utweb.utexas.edu/sites/default/files/2015/2015-98-Tapia.pdf (accessed on 1 April 2025).
13. Tapia, G.; Elwany, A.H.; Sang, H. Prediction of porosity in metal-based additive manufacturing using spatial Gaussian process models. Addit. Manuf. 2016, 12, 282–290.
14. Montalbano, T.; Nimer, S.; Daffron, M.; Croom, B.; Ghosh, S.; Storck, S. Machine learning enabled discovery of new L-PBF processing domains for Ti-6Al-4V. Addit. Manuf. 2025, 98, 104632.
15. Venkatachalam, E.; Sundararajan, D. A review on the impact of volumetric energy density on morphological and mechanical behavior in laser powder bed fusion steel alloys. Weld. World 2025, 69, 929–956.
16. de Leon Nope, G.V.; Perez-Andrade, L.I.; Corona-Castuera, J.; Espinosa-Arbelaez, D.G.; Muñoz-Saldaña, J.; Alvarado-Orozco, J.M. Study of volumetric energy density limitations on the IN718 mesostructure and microstructure in laser powder bed fusion process. J. Manuf. Process. 2021, 64, 1261–1272.
17. Wu, R.; Wang, H.; Chen, H.-T.; Carneiro, G. Deep multimodal learning with missing modality: A survey. arXiv 2024, arXiv:2409.07825.
18. Qing, J.; Couckuyt, I.; Dhaene, T. A robust multi-objective Bayesian optimization framework considering input uncertainty. J. Glob. Optim. 2022, in press.
19. Rheude, T.; Eils, R.; Wild, B. Cohort-Based Active Modality Acquisition. Under review as a conference paper at ICLR 2026 (submitted 19 September 2025; modified 11 February 2026). Available online: https://openreview.net/pdf?id=QK8Mbgvtat (accessed on 1 April 2025).
20. Valancius, M.; Lennon, M.; Oliva, J.B. Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition. In Proceedings of the 41st International Conference on Machine Learning (ICML 2024), Vienna, Austria, 21–27 July 2024; Volume 235, pp. 48957–48975. Available online: https://par.nsf.gov/servlets/purl/10543467 (accessed on 1 April 2025).
21. Senthilnathan, A.; Nath, P.; Mahadevan, S.; Witherell, P. Surrogate modeling of microstructure prediction in additive manufacturing. Comput. Mater. Sci. 2025, 247, 113536.
22. Noguchi, S.; Inoue, J. Stochastic characterization and reconstruction of material microstructures for establishment of process-structure-property linkage using the deep generative model. Phys. Rev. E 2021, 104, 025302.
23. Wang, Y.; Cui, Z.; Li, Y. Distribution-consistent modal recovering for incomplete multimodal learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 22025–22034. Available online: https://openaccess.thecvf.com/content/ICCV2023/papers/Wang_Distribution-Consistent_Modal_Recovering_for_Incomplete_Multimodal_Learning_ICCV_2023_paper.pdf (accessed on 1 April 2025).
24. Zeng, Z.; Peng, Z.; Yang, X.; Shen, W. Missing as masking: Arbitrary cross-modal feature reconstruction for incomplete multimodal brain tumor segmentation. In Proceedings of the MICCAI 2024, Marrakesh, Morocco, 6–10 October 2024. Available online: https://papers.miccai.org/miccai-2024/paper/0067_paper.pdf (accessed on 1 April 2025).
25. Slotwinski, J.; Cooke, A.; Moylan, S. Mechanical Properties Testing for Metal Parts Made via Additive Manufacturing: A Review of the State of the Art of Mechanical Property Testing; NISTIR 7847; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 2012. Available online: https://nvlpubs.nist.gov/nistpubs/ir/2012/NIST.IR.7847.pdf (accessed on 1 April 2025).
26. Cepeda-Jiménez, C.M.; Potenza, F.; Magalini, E.; Luchin, V.; Molinari, A.; Pérez-Prado, M.T. Effect of energy density on the microstructure and texture evolution of Ti-6Al-4V processed by laser powder bed fusion. Mater. Charact. 2019, 163, 110238.
27. Feng, S.; Chen, Z.; Bircher, B.; Ji, Z.; Nyborg, L.; Bigot, S. Predicting laser powder bed fusion defects through in-process monitoring data and machine learning. Mater. Des. 2022, 222, 111115.
28. DebRoy, T.; Wei, H.L.; Zuback, J.S.; Mukherjee, T.; Elmer, J.W.; Milewski, J.O.; Beese, A.M.; Wilson-Heid, A.; De, A.; Zhang, W. Additive manufacturing of metallic components—Process, structure and properties. Prog. Mater. Sci. 2018, 92, 112–224.
29. King, W.E.; Anderson, A.T.; Ferencz, R.M.; Hodge, N.E.; Kamath, C.; Khairallah, S.A.; Rubenchik, A.M. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges. Appl. Phys. Rev. 2015, 2, 041304.
30. Le Roux, S.; Salem, M.; Shabbar, M.; Vicharapu, B.; Muthu, N.; Moussaoui, K.; Hor, A.; Velay, V. Effect of process parameters on surface integrity in laser powder bed fusion of Ti-6Al-4V alloy. Sci. Rep. 2025, 15, 43786.
31. Gaikwad, A.; Giera, B.; Guss, G.M.; Forien, J.-B.; Matthews, M.J.; Rao, P. Heterogeneous sensing and scientific machine learning for quality assurance in laser powder bed fusion—A single-track study. Addit. Manuf. 2020, 36, 101659.
32. Bhatt, A.; Huang, Y.; Leung, C.L.A.; Soundarapandiyan, G.; Marussi, S.; Shah, S.M.; Atwood, R.C.; Fitzpatrick, M.E.; Tiwari, M.K.; Lee, P.D. In situ characterisation of surface roughness and its amplification during multilayer single-track laser powder bed fusion additive manufacturing. Addit. Manuf. 2024, 77, 103809.
33. Dareh Baghi, A.; Nafisi, S.; Hashemi, R.; Ebendorff-Heidepriem, H.; Ghomashchi, R. A new approach to empirical optimization of laser powder bed fusion process for Ti6Al4V parts. J. Mater. Eng. Perform. 2023, 32, 9472–9488.
34. Fullington, D.; Yangue, E.; Bappy, M.M.; Liu, C.; Tian, W. Leveraging small-scale datasets for additive manufacturing process modeling and part certification: Current practice and remaining gaps. J. Manuf. Syst. 2024, 75, 306–321.
35. Wennerberg, A.; Albrektsson, T.; Chrcanovic, B. Long-term clinical outcome of implants with different surface modifications. Eur. J. Oral Implantol. 2018, 11, S123–S136.
36. Pegues, J.; Roach, M.; Williamson, R.S.; Shamsaei, N. Surface roughness effects on the fatigue strength of additively manufactured Ti-6Al-4V. Int. J. Fatigue 2018, 116, 543–552.
37. Hossain, R.B.; Al Hasan, M.M.; Khan, M.I.; Ahmed, M.; Lin, Y.; Pan, X. A physics-informed hybrid ensemble for robust and high-fidelity temperature forecasting in PMSMs. World Electr. Veh. J. 2026, 17, 133.
38. Luo, Q.; Lu, Y.; Simpson, T.W.; Beese, A.M. Effect of processing parameters on pore structures, grain features, and mechanical properties in Ti-6Al-4V by laser powder bed fusion. Addit. Manuf. 2022, 56, 102915.
Figure 1. Deployment-aligned workflow for hybrid multimodal surrogate modeling and uncertainty-aware co-design in L-PBF Ti-6Al-4V. (a) Data construction and split-integrity pipeline linking recipe-level process inputs, bulk mechanical targets, and nanomaterials/surface proxy readouts in a multimodal dataset with missing modalities allowed under leakage-safe evaluation. (b) Hybrid surrogate architecture combining a robust tabular expert and an optional morphology/surface expert through a confidence-gated fusion module to produce calibrated multi-target predictions with uncertainty. (c) Co-design and validation loop for candidate generation, Pareto and feasibility screening, uncertainty-aware ranking, top-design selection, and experimental validation when measurements are available.
Figure 2. Dataset characterization for multimodal L-PBF Ti-6Al-4V surrogate modeling, with explicit separation between outputs and inputs. (a) Empirical distributions of the six output targets: yield strength, ultimate tensile strength, elongation, elastic modulus, surface roughness, and Vickers microhardness. (b) Representative input-feature distributions, including the primary raw process variables retained in the present dataset (laser power and scan speed), deterministic engineered descriptors derived from them (e.g., P × v, LED), and representative proxy features used as model inputs. (c) Feature-block missingness by modality (tabular/process, pores, grains, stress–strain). (d) Spearman rank-correlation matrix spanning outputs and selected input features. (e) Modality availability across set_id, highlighting non-uniform proxy coverage. (f) Representative pore- and grain-morphology feature distributions (shown on appropriate scales) to illustrate within-dataset variability and heavy-tailed behavior.
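The deterministic engineered descriptors referenced in Figure 2b can be recomputed directly from the two raw process variables. A minimal sketch, assuming the common linear-energy-density definition LED = P/v; the function name `engineered_features` and the exact unit conventions are illustrative, not taken from the paper's code, and the paper's other energy-density variants add normalizations not reproduced here:

```python
# Engineered descriptors derived from laser power P and scan speed v.
def engineered_features(power_w: float, speed_mm_s: float) -> dict:
    speed_m_s = speed_mm_s / 1000.0  # convert mm/s to m/s
    return {
        "P_times_v": power_w * speed_mm_s,   # interaction term P × v
        "LED_J_per_m": power_w / speed_m_s,  # W / (m/s) = J/m
    }

feats = engineered_features(power_w=250.0, speed_mm_s=1000.0)
# 250 W at 1000 mm/s gives LED = 250 / 1.0 = 250 J/m
```

Because these descriptors are deterministic functions of power and speed, they can be recomputed for any generated candidate without additional measurements.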
Figure 3. Model evaluation, uncertainty quantification, and interpretability under GroupKFold. (a) GroupKFold protocol showing test-fold assignment by set_id. (b) Out-of-fold parity plots for the six targets, with predictive uncertainty overlaid. (c) Split-conformal uncertainty calibration (GroupKFold-safe), reporting empirical versus nominal coverage with summary calibration metrics. (d) Global feature importance aggregated across folds, stratified by modality (process/tabular, pores, grains, stress–strain).
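The split-conformal calibration summarized in Figure 3c can be sketched in a few lines. This is a minimal illustration on synthetic data, assuming absolute residuals and symmetric intervals; the helper `conformal_halfwidth` is a hypothetical name, not the paper's implementation:

```python
import numpy as np

# Split-conformal intervals: the (1 - alpha) quantile of absolute residuals
# on a calibration split fixes a symmetric half-width whose marginal
# coverage on exchangeable new data is at least 1 - alpha.
def conformal_halfwidth(y_cal, pred_cal, alpha=0.1):
    resid = np.abs(np.asarray(y_cal) - np.asarray(pred_cal))
    n = len(resid)
    q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    return np.quantile(resid, q)

rng = np.random.default_rng(1)
y_cal = rng.normal(size=200)
pred_cal = y_cal + rng.normal(scale=0.5, size=200)    # imperfect surrogate
hw = conformal_halfwidth(y_cal, pred_cal, alpha=0.1)

# New predictions receive symmetric intervals pred ± hw.
y_new = rng.normal(size=1000)
pred_new = y_new + rng.normal(scale=0.5, size=1000)
coverage = float(np.mean(np.abs(y_new - pred_new) <= hw))  # empirical coverage
```

In the GroupKFold-safe variant described in the caption, the calibration residuals come only from held-out sets, so the quantile is never fit on data the model has seen.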
Figure 4. Optimizer stability and learning-rate sensitivity for the neural tabular regressor under the fixed 5-fold GroupKFold protocol grouped by set_id. (a) Fold-aggregated validation trajectories at the selected learning-rate operating point for each optimizer and target. Solid lines show the median validation loss across folds, and shaded bands show the interquartile range. Only epochs with full 5-fold support are shown to avoid overinterpreting late-stage variance after early stopping. (b) Learning-rate operating regions in score space for each optimizer across the six targets, showing the range of stable and competitive learning-rate choices. (c) Best learning-rate operating point per optimizer and target, reported as fold-aggregated mean performance with variability, with labels indicating the selected learning rate.
Figure 5. Surface-relevant proxy measurements and surrogate performance. (a) Wettability (contact angle) and a surface-chemistry proxy reported by row, illustrating monotonic ordering across experimental runs. (b) Parity plots for the surface surrogate across the six targets, summarizing predictive fidelity under the same evaluation protocol. (c) Distributions of bio- and surface-proxy readouts alongside key process/physics descriptors (e.g., speed, power, LED), highlighting the dynamic range and coverage used for downstream screening.
Figure 6. Co-design under constraints and uncertainty. (a) Pareto front for yield versus roughness, highlighting Pareto-optimal and selected designs. (b) Constraint-satisfaction rates for all candidates, Pareto-optimal designs, and the top-ranked subset. (c) Validation (predicted versus measured) for designs with measurements, with summary error metrics. (d) Normalized multi-target profiles for the top designs. (e) Risk-aware trade-off using confidence bounds (LCB/UCB) for yield and roughness. (f) Local process-parameter sensitivity around the candidate set, indicating robustness and dominant drivers.
Table 1. Dataset and GroupKFold split summary.
No. | Metric | Value
1 | Total rows (recipes) | 42
2 | Independent sets (set_id) | 42
3 | set_id range | (1, 42)
4 | Labeled sets: Yield strength (MPa) | 42/42
5 | Labeled sets: Ultimate tensile strength (MPa) | 42/42
6 | Labeled sets: Elongation (%) | 42/42
7 | Labeled sets: Elastic modulus (GPa) | 42/42
8 | Labeled sets: Roughness (µm) | 28/42
9 | Labeled sets: Hardness (HV) | 42/42
10 | GroupKFold fold 1: train sets | 33
11 | GroupKFold fold 1: test sets | 9
12 | GroupKFold fold 2: train sets | 33
13 | GroupKFold fold 2: test sets | 9
14 | GroupKFold fold 3: train sets | 34
15 | GroupKFold fold 3: test sets | 8
16 | GroupKFold fold 4: train sets | 34
17 | GroupKFold fold 4: test sets | 8
18 | GroupKFold fold 5: train sets | 34
19 | GroupKFold fold 5: test sets | 8
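The fold sizes in Table 1 follow from assigning the 42 set_id groups to 5 folds. A minimal sketch with scikit-learn's GroupKFold, using placeholder features in place of the engineered process and morphology inputs:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_sets = 42
groups = np.arange(1, n_sets + 1)   # one set_id per recipe row
X = rng.normal(size=(n_sets, 4))    # placeholder feature matrix
y = rng.normal(size=n_sets)         # placeholder target

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=groups), start=1):
    # Split integrity: a set_id never appears in both train and test of a fold.
    assert not set(groups[train_idx]) & set(groups[test_idx])
    print(f"fold {fold}: train sets = {len(train_idx)}, test sets = {len(test_idx)}")
```

With one row per set, partitioning 42 singleton groups into 5 folds gives test sizes of 9, 9, 8, 8, 8, reproducing the 33/9 and 34/8 pattern of Table 1.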
Table 2. Locked modeling, preprocessing, and decision-layer settings.
No. | Component | Setting | Specification
1 | Global preprocessing | Unit of independence (grouping) | set_id
2 | Global preprocessing | Split protocol | Grouped K-fold cross-validation (set-level; manual_group_kfold_like)
3 | Global preprocessing | Missing-data handling | Median imputation for numeric features (fit on training data only)
4 | Global preprocessing | Feature scaling | Standardization applied only where required, fit on training data only
5 | Morphology module | Reproducibility control | Locked configuration
6 | Co-design | Reproducibility control | Locked configuration
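The leakage-safe settings in Table 2 (median imputation and standardization fit on training data only) are exactly what a scikit-learn Pipeline enforces when it is fit on the training split alone. A minimal sketch with toy values; Ridge is a stand-in for the actual regressors:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy rows: [laser power (W), scan speed (mm/s), pore descriptor (may be NaN)]
X = np.array([[250.0, 1000.0, np.nan],
              [300.0, 1200.0, 0.12],
              [200.0,  800.0, 0.30],
              [350.0, 1400.0, np.nan]])
y = np.array([1000.0, 990.0, 1010.0, 985.0])  # e.g., yield strength (MPa)

# Fitting on the training split alone guarantees that the imputation
# medians and scaling statistics never see held-out rows.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("reg", Ridge(alpha=1.0)),
]).fit(X[:3], y[:3])

preds = model.predict(X[3:])  # held-out row imputed with training-fold medians
```

Here the pore-descriptor median (0.21) is computed from the three training rows only and reused unchanged on the held-out row.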
Table 3. Fold-aggregated predictive performance under 5-fold GroupKFold evaluation.
Target | Model | RMSE (Mean ± Std) | MAE (Mean ± Std) | R2 (Mean)
Yield strength (MPa) | XGBoost | 11.069 ± 8.677 | 7.358 ± 3.820 | 0.895
Ultimate tensile strength (MPa) | XGBoost | 13.883 ± 12.060 | 8.473 ± 3.937 | 0.873
Elongation (%) | XGBoost | 0.677 ± 0.499 | 0.489 ± 0.287 | 0.861
Elastic modulus (GPa) | XGBoost | 2.379 ± 0.796 | 1.681 ± 0.338 | 0.663
Surface roughness (µm) | XGBoost | 2.313 ± 0.816 | 1.951 ± 0.581 | 0.121
Vickers hardness (HV) | XGBoost | 16.537 ± 4.658 | 13.755 ± 3.825 | 0.114
Table 4. Uncertainty-aware co-design recommendations under constraints.
Rank | P (W) | V (mm/s) | LED (J/m) | Yield (MPa) | Yield LCB (MPa) | UTS (MPa) | Elong. (%) | Modulus (GPa) | Roughness (µm) | Roughness UCB (µm) | Hardness (HV) | Score
1 | 131.8 | 781.8 | 297.2 | 1011.3 ± 7.2 | 1005.8 | 1082.0 ± 5.7 | 9.28 ± 0.23 | 111.35 ± 1.82 | 18.02 ± 1.97 | 19.50 | 365.9 ± 3.8 | 0.902
2 | 238.1 | 1167.7 | 237.4 | 1018.4 ± 6.8 | 1013.3 | 1080.3 ± 9.2 | 9.33 ± 0.27 | 107.21 ± 1.82 | 18.52 ± 1.03 | 19.29 | 365.6 ± 4.0 | 0.876
3 | 289.2 | 999.0 | 190.4 | 1018.6 ± 11.7 | 1009.8 | 1079.8 ± 10.8 | 9.51 ± 0.26 | 107.77 ± 1.48 | 18.52 ± 1.51 | 19.66 | 376.5 ± 7.9 | 0.866
4 | 200.4 | 842.6 | 228.8 | 1010.6 ± 7.5 | 1005.0 | 1083.3 ± 5.6 | 9.22 ± 0.23 | 114.18 ± 0.43 | 17.09 ± 1.41 | 18.15 | 379.1 ± 8.6 | 0.856
5 | 207.4 | 850.0 | 194.8 | 1011.0 ± 8.0 | 1005.0 | 1090.7 ± 4.5 | 9.27 ± 0.40 | 113.08 ± 0.96 | 18.52 ± 1.12 | 19.36 | 393.5 ± 3.1 | 0.856
6 | 143.3 | 1530.9 | 215.6 | 1023.1 ± 12.7 | 1013.6 | 1080.0 ± 10.1 | 9.36 ± 0.21 | 106.32 ± 2.26 | 16.93 ± 1.48 | 18.05 | 389.8 ± 7.3 | 0.853
7 | 376.8 | 1294.3 | 261.5 | 1013.9 ± 4.5 | 1010.5 | 1081.4 ± 1.6 | 8.43 ± 0.28 | 111.22 ± 0.84 | 16.86 ± 1.14 | 17.72 | 369.6 ± 2.2 | 0.835
8 | 332.1 | 446.1 | 251.4 | 1005.9 ± 7.4 | 1000.3 | 1074.9 ± 3.3 | 9.12 ± 0.11 | 107.32 ± 1.96 | 19.10 ± 1.05 | 19.89 | 370.5 ± 3.4 | 0.782
9 | 340.1 | 868.1 | 445.4 | 1013.1 ± 12.1 | 1004.0 | 1081.7 ± 9.6 | 8.18 ± 0.12 | 109.34 ± 0.67 | 17.50 ± 1.40 | 18.55 | 371.5 ± 5.2 | 0.765
10 | 434.2 | 941.0 | 296.6 | 1006.1 ± 2.2 | 1004.4 | 1073.2 ± 4.5 | 8.99 ± 0.23 | 107.43 ± 1.78 | 17.16 ± 1.22 | 18.08 | 383.7 ± 5.8 | 0.764
11 | 196.3 | 900.1 | 327.8 | 1004.3 ± 11.7 | 995.6 | 1075.5 ± 5.9 | 9.24 ± 0.23 | 108.43 ± 1.61 | 16.46 ± 1.83 | 17.83 | 385.3 ± 4.6 | 0.742
12 | 327.5 | 1456.3 | 362.2 | 1006.2 ± 9.0 | 999.5 | 1078.5 ± 4.7 | 8.80 ± 0.49 | 107.47 ± 1.81 | 18.08 ± 1.58 | 19.27 | 381.2 ± 5.7 | 0.740
13 | 376.7 | 1330.1 | 412.5 | 1003.8 ± 10.8 | 995.8 | 1072.2 ± 6.1 | 9.18 ± 0.26 | 108.89 ± 1.01 | 17.42 ± 0.70 | 17.94 | 387.4 ± 5.6 | 0.702
14 | 256.0 | 679.0 | 322.6 | 1005.8 ± 12.4 | 996.5 | 1076.3 ± 8.3 | 8.92 ± 0.28 | 107.11 ± 1.86 | 18.12 ± 1.83 | 19.49 | 369.6 ± 6.6 | 0.682
15 | 369.0 | 1288.4 | 343.4 | 994.9 ± 8.0 | 988.9 | 1073.0 ± 3.6 | 9.23 ± 0.27 | 113.12 ± 1.44 | 19.48 ± 0.86 | 20.13 | 370.2 ± 5.1 | 0.640
16 | 493.2 | 863.6 | 274.1 | 1002.6 ± 8.3 | 996.4 | 1069.4 ± 4.9 | 8.77 ± 0.33 | 107.71 ± 1.84 | 17.60 ± 1.65 | 18.84 | 370.9 ± 4.6 | 0.620
17 | 213.9 | 1138.3 | 288.3 | 991.6 ± 5.0 | 987.9 | 1070.2 ± 2.2 | 8.95 ± 0.12 | 112.86 ± 1.61 | 18.46 ± 1.73 | 19.75 | 385.8 ± 7.1 | 0.597
18 | 481.5 | 1346.8 | 232.0 | 998.8 ± 4.1 | 995.7 | 1058.3 ± 3.6 | 9.29 ± 0.23 | 110.85 ± 0.98 | 17.70 ± 2.23 | 19.37 | 371.9 ± 1.3 | 0.573
19 | 449.6 | 496.5 | 194.3 | 996.3 ± 11.1 | 987.9 | 1074.8 ± 5.7 | 8.80 ± 0.54 | 106.83 ± 1.74 | 16.13 ± 1.58 | 17.31 | 365.7 ± 4.6 | 0.573
20 | 430.8 | 1469.2 | 391.6 | 994.7 ± 5.8 | 990.4 | 1070.2 ± 4.1 | 8.77 ± 0.23 | 113.82 ± 0.65 | 16.23 ± 1.11 | 17.06 | 381.9 ± 2.5 | 0.562
21 | 478.5 | 1378.9 | 464.6 | 985.6 ± 8.1 | 979.6 | 1070.4 ± 4.3 | 9.18 ± 0.25 | 109.08 ± 1.38 | 18.70 ± 1.61 | 19.91 | 369.7 ± 4.3 | 0.555
22 | 212.0 | 559.5 | 265.7 | 1002.4 ± 8.2 | 996.3 | 1059.1 ± 6.2 | 9.29 ± 0.20 | 107.19 ± 1.84 | 17.93 ± 1.05 | 18.71 | 378.3 ± 2.2 | 0.546
23 | 367.5 | 1042.8 | 372.1 | 984.2 ± 9.7 | 976.9 | 1069.0 ± 2.7 | 9.16 ± 0.18 | 113.30 ± 0.80 | 17.98 ± 1.28 | 18.95 | 365.1 ± 3.8 | 0.492
24 | 444.9 | 760.0 | 458.1 | 987.4 ± 7.2 | 982.1 | 1065.3 ± 6.2 | 9.17 ± 0.34 | 110.75 ± 0.76 | 18.07 ± 1.82 | 19.44 | 381.0 ± 5.7 | 0.489
25 | 445.7 | 1414.2 | 324.1 | 991.7 ± 8.3 | 985.5 | 1068.4 ± 3.7 | 8.13 ± 0.18 | 110.20 ± 0.49 | 18.64 ± 1.29 | 19.60 | 386.6 ± 13.6 | 0.486
26 | 268.9 | 1437.9 | 317.8 | 991.4 ± 7.2 | 986.0 | 1058.5 ± 4.8 | 9.34 ± 0.43 | 107.27 ± 1.63 | 17.63 ± 2.10 | 19.21 | 378.2 ± 7.4 | 0.448
27 | 365.2 | 997.4 | 234.2 | 1002.0 ± 8.4 | 995.6 | 1056.5 ± 6.6 | 8.72 ± 0.38 | 107.45 ± 1.84 | 18.37 ± 1.49 | 19.48 | 379.6 ± 6.8 | 0.436
28 | 342.0 | 1452.0 | 265.3 | 989.6 ± 5.1 | 985.8 | 1056.7 ± 4.9 | 9.32 ± 0.23 | 112.47 ± 0.74 | 18.45 ± 1.93 | 19.90 | 381.2 ± 5.3 | 0.435
29 | 294.1 | 918.6 | 461.1 | 984.9 ± 6.7 | 979.8 | 1067.1 ± 2.0 | 8.25 ± 0.12 | 107.73 ± 1.69 | 16.66 ± 0.96 | 17.38 | 364.8 ± 3.9 | 0.431
30 | 271.1 | 1088.6 | 223.5 | 996.6 ± 12.8 | 987.0 | 1067.4 ± 6.9 | 8.23 ± 0.18 | 113.47 ± 0.23 | 18.90 ± 1.56 | 20.07 | 369.1 ± 6.8 | 0.426
31 | 493.2 | 1420.2 | 447.5 | 987.3 ± 5.8 | 982.9 | 1065.8 ± 2.6 | 7.81 ± 0.12 | 110.14 ± 0.53 | 17.47 ± 1.78 | 18.81 | 370.8 ± 6.0 | 0.417
32 | 267.8 | 606.1 | 278.9 | 988.6 ± 3.1 | 986.2 | 1054.5 ± 4.0 | 9.06 ± 0.37 | 110.51 ± 0.85 | 17.05 ± 1.38 | 18.08 | 365.9 ± 1.8 | 0.410
33 | 254.7 | 1484.7 | 351.3 | 981.9 ± 10.1 | 974.3 | 1071.3 ± 3.1 | 8.15 ± 0.02 | 113.31 ± 0.43 | 17.75 ± 2.02 | 19.26 | 382.1 ± 9.7 | 0.397
34 | 201.9 | 825.4 | 242.9 | 992.0 ± 3.7 | 989.2 | 1052.0 ± 3.9 | 8.97 ± 0.12 | 107.15 ± 1.96 | 17.13 ± 1.63 | 18.35 | 383.7 ± 9.3 | 0.393
35 | 486.2 | 819.4 | 273.5 | 986.6 ± 12.1 | 977.5 | 1069.7 ± 4.8 | 8.13 ± 0.24 | 112.68 ± 1.08 | 16.21 ± 1.32 | 17.20 | 380.5 ± 6.7 | 0.381
36 | 319.3 | 1233.0 | 296.3 | 983.2 ± 3.3 | 980.8 | 1054.0 ± 2.8 | 9.32 ± 0.38 | 112.56 ± 0.47 | 16.33 ± 0.53 | 16.74 | 372.6 ± 3.0 | 0.365
37 | 223.3 | 1289.8 | 202.8 | 982.0 ± 8.9 | 975.3 | 1068.5 ± 4.1 | 7.95 ± 0.19 | 111.79 ± 1.00 | 15.49 ± 0.90 | 16.16 | 375.1 ± 7.5 | 0.352
38 | 321.3 | 625.7 | 255.1 | 993.4 ± 5.9 | 989.0 | 1047.5 ± 2.3 | 9.07 ± 0.36 | 107.20 ± 1.81 | 18.68 ± 1.56 | 19.85 | 372.6 ± 3.9 | 0.346
39 | 378.6 | 666.1 | 309.3 | 985.6 ± 2.2 | 984.0 | 1050.6 ± 4.5 | 9.20 ± 0.17 | 106.93 ± 1.86 | 17.10 ± 0.77 | 17.68 | 366.5 ± 4.1 | 0.343
40 | 212.1 | 809.2 | 404.8 | 982.4 ± 8.1 | 976.3 | 1055.6 ± 5.1 | 9.35 ± 0.28 | 107.32 ± 1.87 | 18.15 ± 1.33 | 19.15 | 379.9 ± 4.0 | 0.339
41 | 458.1 | 715.4 | 415.5 | 991.8 ± 6.2 | 987.2 | 1059.1 ± 4.8 | 8.09 ± 0.22 | 113.53 ± 0.42 | 18.87 ± 1.61 | 20.07 | 371.3 ± 3.4 | 0.331
42 | 236.1 | 713.3 | 333.4 | 981.6 ± 4.6 | 978.2 | 1051.6 ± 2.0 | 9.20 ± 0.39 | 107.25 ± 1.69 | 18.26 ± 1.36 | 19.29 | 376.6 ± 7.6 | 0.311
43 | 278.1 | 1472.9 | 338.9 | 988.8 ± 7.1 | 983.6 | 1051.2 ± 6.4 | 9.10 ± 0.30 | 107.14 ± 1.95 | 18.26 ± 1.38 | 19.29 | 374.5 ± 5.7 | 0.311
44 | 356.8 | 944.6 | 347.8 | 976.7 ± 16.8 | 964.1 | 1069.6 ± 10.9 | 9.05 ± 0.19 | 112.60 ± 0.51 | 18.80 ± 2.18 | 20.44 | 370.4 ± 3.2 | 0.311
45 | 200.3 | 437.8 | 252.4 | 992.0 ± 7.8 | 986.2 | 1047.0 ± 2.5 | 9.15 ± 0.26 | 113.69 ± 0.40 | 17.93 ± 1.19 | 18.83 | 384.2 ± 9.9 | 0.311
46 | 344.9 | 1082.9 | 403.6 | 977.2 ± 8.2 | 971.0 | 1067.7 ± 2.2 | 7.96 ± 0.34 | 107.44 ± 1.84 | 17.42 ± 1.41 | 18.48 | 386.6 ± 10.6 | 0.309
47 | 411.2 | 1573.7 | 393.9 | 974.9 ± 10.9 | 966.8 | 1065.1 ± 10.2 | 9.03 ± 0.36 | 109.44 ± 0.53 | 18.16 ± 1.71 | 19.44 | 388.3 ± 6.1 | 0.309
48 | 343.1 | 1564.8 | 254.9 | 997.4 ± 5.3 | 993.4 | 1048.6 ± 4.1 | 7.88 ± 0.28 | 109.39 ± 0.93 | 18.86 ± 1.51 | 19.99 | 381.8 ± 7.0 | 0.299
49 | 179.5 | 498.2 | 268.9 | 976.0 ± 12.4 | 966.7 | 1064.3 ± 7.8 | 9.17 ± 0.31 | 113.74 ± 0.81 | 17.80 ± 1.65 | 19.04 | 372.1 ± 2.2 | 0.289
50 | 155.2 | 1131.5 | 437.1 | 973.5 ± 5.2 | 969.6 | 1056.0 ± 5.6 | 9.26 ± 0.26 | 113.09 ± 0.60 | 17.32 ± 1.93 | 18.77 | 382.4 ± 8.2 | 0.259
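The Yield LCB and Roughness UCB columns in Table 4 support a risk-aware screening step: candidates are compared on pessimistic confidence bounds rather than mean predictions. A minimal two-objective sketch, fed with the mean ± std values of four Table 4 candidates; the helper `risk_aware_rank`, the risk factor k = 0.8, and the normalized scalar score are illustrative stand-ins, not the paper's exact decision layer:

```python
import numpy as np

# Rank candidates on pessimistic bounds: LCB for the maximized objective
# (yield strength), UCB for the minimized one (roughness).
def risk_aware_rank(yield_mu, yield_sigma, rough_mu, rough_sigma, k=0.8):
    yield_lcb = yield_mu - k * yield_sigma   # pessimistic strength
    rough_ucb = rough_mu + k * rough_sigma   # pessimistic roughness
    n = len(yield_lcb)
    keep = np.ones(n, dtype=bool)            # Pareto filter on the bounds
    for i in range(n):
        for j in range(n):
            if j != i and yield_lcb[j] >= yield_lcb[i] and rough_ucb[j] <= rough_ucb[i] \
               and (yield_lcb[j] > yield_lcb[i] or rough_ucb[j] < rough_ucb[i]):
                keep[i] = False              # i is dominated by j
                break
    # Scalarize survivors with a normalized score (higher is better).
    score = (yield_lcb - yield_lcb.min()) / (np.ptp(yield_lcb) + 1e-9) \
          - (rough_ucb - rough_ucb.min()) / (np.ptp(rough_ucb) + 1e-9)
    order = np.argsort(-np.where(keep, score, -np.inf))
    return order, keep

# Mean ± std values of Table 4 candidates ranked 1, 2, 15, and 8.
mu_y = np.array([1011.3, 1018.4, 994.9, 1005.9])
sd_y = np.array([7.2, 6.8, 8.0, 7.4])
mu_r = np.array([18.02, 18.52, 19.48, 19.10])
sd_r = np.array([1.97, 1.03, 0.86, 1.05])
order, keep = risk_aware_rank(mu_y, sd_y, mu_r, sd_r)
```

With these four candidates and k = 0.8, the second one has both the highest yield LCB and the lowest roughness UCB, so it alone survives the Pareto filter on the pessimistic bounds; the paper's bound multiplier and final score differ slightly, as the tabulated LCB/UCB values show.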

Share and Cite

MDPI and ACS Style

Hossain, R.B.; Pan, X.; Chang, G.; Su, X.; Tao, Y.; Han, X. Hybrid Multimodal Surrogate Modeling and Uncertainty-Aware Co-Design for L-PBF Ti-6Al-4V with Nanomaterials-Informed Morphology Proxies. Nanomaterials 2026, 16, 447. https://doi.org/10.3390/nano16080447

