Search Results (929)

Search Parameters:
Keywords = calibration-independent

17 pages, 1707 KiB  
Article
A Structural Causal Model Ontology Approach for Knowledge Discovery in Educational Admission Databases
by Bern Igoche Igoche, Olumuyiwa Matthew and Daniel Olabanji
Knowledge 2025, 5(3), 15; https://doi.org/10.3390/knowledge5030015 - 4 Aug 2025
Viewed by 77
Abstract
Educational admission systems, particularly in developing countries, often suffer from opaque decision processes, unstructured data, and limited analytic insight. This study proposes a novel methodology that integrates structural causal models (SCMs), ontological modeling, and machine learning to uncover and apply interpretable knowledge from an admission database. Using a dataset of 12,043 records from Benue State Polytechnic, Nigeria, we demonstrate this approach as a proof of concept by constructing a domain-specific SCM ontology, validating it using conditional independence testing (CIT), and extracting features for predictive modeling. Five classifiers (Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors (KNN), and Support Vector Machine (SVM)) were evaluated using stratified 10-fold cross-validation. SVM and KNN achieved the highest classification accuracy (92%), with precision and recall scores reaching 95% and 100%, respectively. Feature importance analysis revealed ‘mode of entry’ and ‘current qualification’ as key causal factors influencing admission decisions. This framework provides a reproducible pipeline that combines semantic representation and empirical validation, offering actionable insights for institutional decision-makers. Comparative benchmarking, ethical considerations, and model calibration are integrated to enhance methodological transparency. Limitations, including reliance on single-institution data, are acknowledged, and directions for generalizability and explainable AI are proposed.
(This article belongs to the Special Issue Knowledge Management in Learning and Education)
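As an illustration of the evaluation protocol summarized above, the following minimal Python sketch runs stratified 10-fold cross-validation over the five named classifiers. It uses synthetic data in place of the admission records, so the data, feature set, and hyperparameters are assumptions, not the authors' implementation.

```python
# Hedged sketch of the evaluation protocol: stratified 10-fold cross-validation
# over the five classifiers named in the abstract. Synthetic data stands in for
# the admission records; feature extraction from the SCM ontology is not shown.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)  # placeholder data
models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features for KNN/SVM
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```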

27 pages, 7785 KiB  
Article
Estimation of Potato Growth Parameters Under Limited Field Data Availability by Integrating Few-Shot Learning and Multi-Task Learning
by Sen Yang, Quan Feng, Faxu Guo and Wenwei Zhou
Agriculture 2025, 15(15), 1638; https://doi.org/10.3390/agriculture15151638 - 29 Jul 2025
Viewed by 243
Abstract
Leaf chlorophyll content (LCC), leaf area index (LAI), and above-ground biomass (AGB) are important growth parameters for characterizing potato growth and predicting yield. While deep learning has demonstrated remarkable advancements in estimating crop growth parameters, the limited availability of field data often compromises model accuracy and generalizability, impeding large-scale regional applications. This study proposes a novel deep learning model that integrates multi-task learning and few-shot learning to address the challenge of limited data in growth parameter prediction. Two multi-task learning architectures, MTL-DCNN and MTL-MMOE, were designed based on deep convolutional neural networks (DCNNs) and multi-gate mixture-of-experts (MMOE) for the simultaneous estimation of LCC, LAI, and AGB from Sentinel-2 imagery. Building on this, a few-shot learning framework for growth prediction (FSLGP) was developed by integrating simulated spectral generation, model-agnostic meta-learning (MAML), and meta-transfer learning strategies, enabling accurate prediction of multiple growth parameters under limited data availability. The results demonstrated that the incorporation of calibrated simulated spectral data significantly improved the estimation accuracy of LCC, LAI, and AGB (R2 = 0.62~0.73). Under scenarios with limited field measurement data, the multi-task deep learning model based on few-shot learning outperformed traditional mixed inversion methods in predicting potato growth parameters (R2 = 0.69~0.73; rRMSE = 16.68%~28.13%). Among the two architectures, the MTL-MMOE model exhibited superior stability and robustness in multi-task learning. Independent spatiotemporal validation further confirmed the potential of MTL-MMOE in estimating LAI and AGB across different years and locations (R2 = 0.37~0.52). These results collectively demonstrated that the proposed FSLGP framework could achieve reliable estimation of crop growth parameters using only a very limited number of in-field samples (approximately 80 samples). This study can provide a valuable technical reference for monitoring and predicting growth parameters in other crops.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
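To make the multi-task idea concrete, here is a minimal hard-parameter-sharing sketch with one shared trunk and three regression heads for LCC, LAI, and AGB. It is a simplified stand-in, not the paper's MTL-MMOE or MAML pipeline; the input dimension, layer sizes, and equal task weights are assumptions.

```python
# Hedged sketch: hard-parameter-sharing multi-task regression with three heads
# (LCC, LAI, AGB). Simplified stand-in for the MTL architectures in the abstract;
# the input dimension (10 spectral bands) and layer sizes are assumptions.
import torch
import torch.nn as nn

class MultiTaskRegressor(nn.Module):
    def __init__(self, n_bands: int = 10, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(           # shared trunk over spectral features
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({           # one regression head per growth parameter
            name: nn.Linear(hidden, 1) for name in ("LCC", "LAI", "AGB")
        })

    def forward(self, x):
        z = self.shared(x)
        return {name: head(z).squeeze(-1) for name, head in self.heads.items()}

model = MultiTaskRegressor()
x = torch.randn(8, 10)                          # batch of 8 hypothetical pixel spectra
targets = {k: torch.randn(8) for k in ("LCC", "LAI", "AGB")}
preds = model(x)
loss = sum(nn.functional.mse_loss(preds[k], targets[k]) for k in preds)  # equal task weights
loss.backward()
```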

14 pages, 2191 KiB  
Article
AI-Based Ultrasound Nomogram for Differentiating Invasive from Non-Invasive Breast Cancer Masses
by Meng-Yuan Tsai, Zi-Han Yu and Chen-Pin Chou
Cancers 2025, 17(15), 2497; https://doi.org/10.3390/cancers17152497 - 29 Jul 2025
Viewed by 215
Abstract
Purpose: This study aimed to develop a predictive nomogram integrating AI-based BI-RADS lexicons and lesion-to-nipple distance (LND) ultrasound features to differentiate mass-type ductal carcinoma in situ (DCIS) from invasive ductal carcinoma (IDC) visible on ultrasound. Methods: The final study cohort consisted of 170 women with 175 pathologically confirmed malignant breast lesions, including 26 cases of DCIS and 149 cases of IDC. LND and AI-based features from the S-Detect system (BI-RADS lexicons) were analyzed. Rare features were consolidated into broader categories to enhance model stability. Data were split into training (70%) and validation (30%) sets. Logistic regression identified key predictors for an LND nomogram. Model performance was evaluated using receiver operating characteristic (ROC) curves, 1000 bootstrap resamples, and calibration curves to assess discrimination and calibration. Results: Multivariate logistic regression identified smaller lesion size, irregular shape, LND ≤ 3 cm, and non-hypoechoic echogenicity as independent predictors of DCIS. These variables were integrated into the LND nomogram, which demonstrated strong discriminative performance (AUC = 0.851 training; AUC = 0.842 validation). Calibration was excellent, with non-significant Hosmer-Lemeshow tests (p = 0.127 training, p = 0.972 validation) and low mean absolute errors (MAE = 0.016 and 0.034, respectively), supporting the model’s accuracy and reliability. Conclusions: The AI-based comprehensive nomogram demonstrates strong reliability in distinguishing mass-type DCIS from IDC, offering a practical tool to enhance non-invasive breast cancer diagnosis and inform preoperative planning.
(This article belongs to the Section Methods and Technologies Development)
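A minimal sketch of the modeling workflow described above: logistic regression on a 70/30 split, ROC AUC on the validation set, and a bootstrap confidence interval for the AUC. Synthetic data replaces the ultrasound features, so the class balance, feature count, and names are illustrative assumptions only.

```python
# Hedged sketch: logistic regression with a 70/30 split, validation ROC AUC,
# and a 1000-resample bootstrap interval, mirroring the evaluation described
# in the abstract. Data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=175, n_features=4, weights=[0.85, 0.15], random_state=1)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.30, stratify=y, random_state=1)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_va = clf.predict_proba(X_va)[:, 1]
print("validation AUC:", round(roc_auc_score(y_va, p_va), 3))

rng = np.random.default_rng(1)
aucs = []
for _ in range(1000):                      # 1000 bootstrap resamples, as in the abstract
    idx = rng.integers(0, len(y_va), len(y_va))
    if len(np.unique(y_va[idx])) < 2:      # skip degenerate resamples
        continue
    aucs.append(roc_auc_score(y_va[idx], p_va[idx]))
print("95% bootstrap CI:", np.percentile(aucs, [2.5, 97.5]).round(3))
```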

27 pages, 6584 KiB  
Article
Evaluating Geostatistical and Statistical Merging Methods for Radar–Gauge Rainfall Integration: A Multi-Method Comparative Study
by Xuan-Hien Le, Naoki Koyama, Kei Kikuchi, Yoshihisa Yamanouchi, Akiyoshi Fukaya and Tadashi Yamada
Remote Sens. 2025, 17(15), 2622; https://doi.org/10.3390/rs17152622 - 28 Jul 2025
Viewed by 340
Abstract
Accurate and spatially consistent rainfall estimation is essential for hydrological modeling and flood risk mitigation, especially in mountainous tropical regions with sparse observational networks and highly heterogeneous rainfall. This study presents a comparative analysis of six radar–gauge merging methods, including three statistical approaches—Quantile Adaptive Gaussian (QAG), Empirical Quantile Mapping (EQM), and radial basis function (RBF)—and three geostatistical approaches—external drift kriging (EDK), Bayesian Kriging (BAK), and Residual Kriging (REK). The evaluation was conducted over the Huong River Basin in Central Vietnam, a region characterized by steep terrain, monsoonal climate, and frequent hydrometeorological extremes. Two observational scenarios were established: Scenario S1 utilized 13 gauges for merging and 7 for independent validation, while Scenario S2 employed all 20 stations. Hourly radar and gauge data from peak rainy months were used for the evaluation. Each method was assessed using continuous metrics (RMSE, MAE, CC, NSE, and KGE), categorical metrics (POD and CSI), and spatial consistency indicators. Results indicate that all merging methods significantly improved the accuracy of rainfall estimates compared to raw radar data. Among them, RBF consistently achieved the highest accuracy, with the lowest RMSE (1.24 mm/h), highest NSE (0.954), and strongest spatial correlation (CC = 0.978) in Scenario S2. RBF also maintained high classification skills across all rainfall categories, including very heavy rain. EDK and BAK performed better with denser gauge input but required recalibration of variogram parameters. EQM and REK yielded moderate performance and had limitations near basin boundaries where gauge coverage was sparse. The results highlight trade-offs between method complexity, spatial accuracy, and robustness. While complex methods like EDK and BAK offer detailed spatial outputs, they require more calibration. Simpler methods are easier to apply across different conditions. RBF emerged as the most practical and transferable option, offering strong generalization, minimal calibration needs, and computational efficiency. These findings provide useful guidance for integrating radar and gauge data in flood-prone, data-scarce regions.
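The sketch below illustrates one common form of RBF-based radar–gauge merging: interpolate gauge-minus-radar residuals over the grid and add them back to the radar field. The exact RBF formulation used in the paper may differ; the gauge locations, grid, and smoothing value here are synthetic assumptions.

```python
# Hedged sketch of residual-based radar-gauge merging with a radial basis
# function interpolator (scipy's RBFInterpolator). All data are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
gauge_xy = rng.uniform(0, 50, size=(13, 2))          # 13 hypothetical gauge locations (km)
gauge_rain = rng.gamma(2.0, 1.5, size=13)            # observed hourly rainfall (mm/h)

nx = ny = 100
gx, gy = np.meshgrid(np.linspace(0, 50, nx), np.linspace(0, 50, ny))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
radar = rng.gamma(2.0, 1.0, size=nx * ny)            # raw radar estimate on the grid

# residual = gauge observation minus radar value at the nearest grid pixel
nearest = np.argmin(((grid_xy[None] - gauge_xy[:, None]) ** 2).sum(-1), axis=1)
residual = gauge_rain - radar[nearest]

rbf = RBFInterpolator(gauge_xy, residual, kernel="thin_plate_spline", smoothing=1.0)
merged = np.clip(radar + rbf(grid_xy), 0, None)      # keep rainfall non-negative
print("mean merged rainfall:", round(float(merged.mean()), 3), "mm/h")
```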

33 pages, 4670 KiB  
Article
Universal Prediction of CO2 Adsorption on Zeolites Using Machine Learning: A Comparative Analysis with Langmuir Isotherm Models
by Emrah Kirtil
ChemEngineering 2025, 9(4), 80; https://doi.org/10.3390/chemengineering9040080 - 28 Jul 2025
Viewed by 217
Abstract
The global atmospheric concentration of carbon dioxide (CO2) has exceeded 420 ppm. Adsorption-based carbon capture technologies offer energy-efficient, sustainable solutions. Relying on classical adsorption models like Langmuir to predict CO2 uptake presents limitations due to the need for case-specific parameter fitting. To address this, the present study introduces a universal machine learning (ML) framework using multiple algorithms—Generalized Linear Model (GLM), Feed-forward Multilayer Perceptron (DL), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Gradient Boosted Trees (GBT)—to reliably predict CO2 adsorption capacities across diverse zeolite structures and conditions. By compiling over 5700 experimentally measured adsorption data points from 71 independent studies, this approach systematically incorporates critical factors including pore size, Si/Al ratio, cation type, temperature, and pressure. Rigorous Cross-Validation confirmed superior performance of the GBT model (R2 = 0.936, RMSE = 0.806 mmol/g), outperforming other ML models and providing comparable performance with classical Langmuir model predictions without separate parameter calibration. Feature importance analysis identified pressure, Si/Al ratio, and cation type as dominant influences on adsorption performance. Overall, this ML-driven methodology demonstrates substantial promise for accelerating material discovery, optimization, and practical deployment of zeolite-based CO2 capture technologies.
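The contrast drawn in the abstract, a single ML model over all conditions versus a per-case Langmuir fit, can be sketched as below. The synthetic data, feature set, and parameter values are placeholders, not the compiled literature dataset or the tuned GBT model.

```python
# Hedged sketch: gradient-boosted-tree regression over pooled conditions versus
# a case-specific Langmuir isotherm fit. Data and parameters are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import GradientBoostingRegressor

def langmuir(P, q_max, K):
    return q_max * K * P / (1.0 + K * P)   # classical single-site Langmuir isotherm

rng = np.random.default_rng(0)
P = rng.uniform(0.01, 10.0, 500)                          # pressure (bar)
T = rng.uniform(273.0, 373.0, 500)                        # temperature (K)
si_al = rng.uniform(1.0, 30.0, 500)                       # Si/Al ratio
q = langmuir(P, 4.0, 0.8 * np.exp(-0.01 * (T - 298))) * (1 + 1.0 / si_al)
q += rng.normal(0, 0.1, 500)                              # synthetic uptake (mmol/g)

# Langmuir: q_max and K must be refit for each material/temperature case
popt, _ = curve_fit(langmuir, P, q, p0=[3.0, 1.0])

# GBT: one model over all conditions, no per-case parameter calibration
X = np.column_stack([P, T, si_al])
gbt = GradientBoostingRegressor(random_state=0).fit(X, q)
print("Langmuir params:", popt.round(2), " GBT R2:", round(gbt.score(X, q), 3))
```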

22 pages, 825 KiB  
Article
Conformal Segmentation in Industrial Surface Defect Detection with Statistical Guarantees
by Cheng Shen and Yuewei Liu
Mathematics 2025, 13(15), 2430; https://doi.org/10.3390/math13152430 - 28 Jul 2025
Viewed by 262
Abstract
Detection of surface defects can significantly extend mechanical service life and mitigate potential risks during safety management. Traditional defect detection methods predominantly rely on manual inspection, which suffers from low efficiency and high costs. Some machine learning algorithms and artificial intelligence models for defect detection, such as Convolutional Neural Networks (CNNs), present outstanding performance, but they are often data-dependent and cannot provide guarantees for new test samples. To this end, we construct a detection model by combining Mask R-CNN, selected for its strong baseline performance in pixel-level segmentation, with Conformal Risk Control. The former evaluates the distribution that discriminates defects from all samples based on probability. The detection model is improved by retraining with calibration data that is assumed to be independent and identically distributed (i.i.d.) with the test data. The latter constructs a prediction set on which a given guarantee for detection will be obtained. First, we define a loss function for each calibration sample to quantify detection error rates. Subsequently, we derive a statistically rigorous threshold by optimizing the error rates against a prescribed significance level that serves as the risk level. With this threshold, defective pixels with high probability in test images are extracted to construct prediction sets. This methodology ensures that the expected error rate on the test set remains strictly bounded by the predefined risk level. Furthermore, our model shows robust and efficient control over the expected test set error rate when calibration-to-test partitioning ratios vary.
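The threshold-selection step can be illustrated with a minimal conformal-risk-control sketch: choose the least conservative probability threshold whose empirical calibration risk, with the usual finite-sample correction, stays below the target risk level. The losses below are synthetic; in the paper they would be per-image segmentation error rates, and the exact correction used there may differ.

```python
# Hedged sketch of a conformal-risk-control threshold search on synthetic losses.
import numpy as np

def crc_threshold(loss_matrix, lambdas, alpha, B=1.0):
    """loss_matrix[i, j]: loss of calibration sample i at threshold lambdas[j].
    lambdas are assumed sorted so that risk is non-increasing along the axis
    (lower thresholds -> larger prediction sets -> smaller loss); B bounds the loss."""
    n = loss_matrix.shape[0]
    risk = loss_matrix.mean(axis=0)
    bound = (n / (n + 1)) * risk + B / (n + 1)      # finite-sample corrected risk
    valid = np.where(bound <= alpha)[0]
    if valid.size == 0:
        return lambdas[-1]                          # fall back to the most permissive threshold
    return lambdas[valid[0]]                        # smallest prediction set meeting the bound

rng = np.random.default_rng(0)
lambdas = np.linspace(0.9, 0.1, 17)                 # candidate probability thresholds
base = rng.uniform(0.0, 0.3, size=(200, 1))         # per-sample loss scale
loss_matrix = np.clip(base * (lambdas / lambdas.max()), 0, 1)  # monotone synthetic losses
print("chosen threshold:", crc_threshold(loss_matrix, lambdas, alpha=0.1))
```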

26 pages, 3625 KiB  
Article
Deep-CNN-Based Layout-to-SEM Image Reconstruction with Conformal Uncertainty Calibration for Nanoimprint Lithography in Semiconductor Manufacturing
by Jean Chien and Eric Lee
Electronics 2025, 14(15), 2973; https://doi.org/10.3390/electronics14152973 - 25 Jul 2025
Viewed by 279
Abstract
Nanoimprint lithography (NIL) has emerged as a promising route to sub-10 nm patterning at low cost; yet, robust process control remains difficult because of time-consuming physics-based simulators and the scarcity of labeled SEM data. We propose a data-efficient, two-stage deep-learning framework that directly reconstructs post-imprint SEM images from binary design layouts and simultaneously delivers calibrated pixel-by-pixel uncertainty. First, a shallow U-Net is trained with conformalized quantile regression (CQR) to output 90% prediction intervals with statistically guaranteed coverage. Per-level errors on a small calibration dataset then drive an outlier-weighted, encoder-frozen transfer fine-tuning phase that refines only the decoder, with its capacity explicitly focused on regions of spatial uncertainty. On independent test layouts, the proposed fine-tuned model significantly reduces the mean absolute error (MAE) from 0.0365 to 0.0255 and raises the coverage from 0.904 to 0.926, while cutting the labeled data and GPU time by 80% and 72%, respectively. The resulting uncertainty maps highlight spatial regions associated with error hotspots and support defect-aware optical proximity correction (OPC) with fewer guard-band iterations. Beyond OPC, the model-agnostic and modular design of the pipeline allows flexible integration into other critical stages of the semiconductor manufacturing workflow, such as imprinting, etching, and inspection. In these stages, such predictions are critical for achieving higher precision, efficiency, and overall process robustness, which is the ultimate motivation of this study.
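The CQR calibration step can be sketched as follows: given lower/upper quantile predictions on a held-out calibration set, compute conformity scores and the margin that yields roughly 90% marginal coverage. The arrays are synthetic stand-ins for per-pixel SEM intensity predictions, and the quantile-model details are assumptions.

```python
# Hedged sketch of the conformalized-quantile-regression (CQR) calibration step
# on synthetic data; the trained quantile U-Net itself is not reproduced here.
import numpy as np

def cqr_margin(lo_cal, hi_cal, y_cal, alpha=0.10):
    """Return the correction added to/subtracted from the quantile predictions."""
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)   # CQR conformity scores
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n                # finite-sample quantile level
    return np.quantile(scores, min(q, 1.0), method="higher")

rng = np.random.default_rng(0)
y_cal = rng.normal(0.5, 0.1, 500)                          # calibration ground truth
lo_cal = y_cal - rng.uniform(0.02, 0.08, 500)              # model's lower quantile (synthetic)
hi_cal = y_cal + rng.uniform(0.02, 0.08, 500)              # model's upper quantile (synthetic)

margin = cqr_margin(lo_cal, hi_cal, y_cal, alpha=0.10)
# at test time: [lo_test - margin, hi_test + margin] targets >= 90% marginal coverage
print("calibration margin:", round(float(margin), 4))
```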

23 pages, 3301 KiB  
Article
An Image-Based Water Turbidity Classification Scheme Using a Convolutional Neural Network
by Itzel Luviano Soto, Yajaira Concha-Sánchez and Alfredo Raya
Computation 2025, 13(8), 178; https://doi.org/10.3390/computation13080178 - 23 Jul 2025
Viewed by 278
Abstract
Given the importance of turbidity as a key indicator of water quality, this study investigates the use of a convolutional neural network (CNN) to classify water samples into five turbidity-based categories. These classes were defined using ranges inspired by Mexican environmental regulations, and the samples were generated from 33 laboratory-prepared mixtures with varying concentrations of suspended clay particles. Red, green, and blue (RGB) images of each sample were captured under controlled optical conditions, and turbidity was measured using a calibrated turbidimeter. A transfer learning (TL) approach was applied using EfficientNet-B0, a deep yet computationally efficient CNN architecture. The model achieved an average accuracy of 99% across ten independent training runs, with minimal misclassifications. The use of a lightweight deep learning model, combined with a standardized image acquisition protocol, represents a novel and scalable alternative for rapid, low-cost water quality assessment in future environmental monitoring systems.
(This article belongs to the Section Computational Engineering)
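A minimal sketch of the transfer-learning setup described above: an ImageNet-pretrained EfficientNet-B0 with its classifier head replaced for five turbidity classes and the feature extractor frozen. Only the class count comes from the abstract; the freezing strategy and omitted training loop are assumptions.

```python
# Hedged sketch: EfficientNet-B0 transfer learning with a new 5-class head.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5
weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
model = models.efficientnet_b0(weights=weights)

for p in model.features.parameters():        # freeze the pretrained backbone
    p.requires_grad = False

in_feats = model.classifier[1].in_features   # 1280 for EfficientNet-B0
model.classifier[1] = nn.Linear(in_feats, NUM_CLASSES)

# only the new head is optimized in this simplified setup
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```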

17 pages, 3321 KiB  
Article
Multi-Objective Automated Machine Learning for Inversion of Mesoscopic Parameters in Discrete Element Contact Models
by Xu Ao, Shengpeng Hao, Yuyu Zhang and Wenyu Xu
Appl. Sci. 2025, 15(15), 8181; https://doi.org/10.3390/app15158181 - 23 Jul 2025
Viewed by 164
Abstract
Accurate calibration of mesoscopic contact model parameters is essential for ensuring the reliability of Particle Flow Code in Three Dimensions (PFC3D) simulations in geotechnical engineering. Trial-and-error approaches are often used to determine the parameters of the contact model, but they are time-consuming, labor-intensive, and offer no guarantee of parameter validity or simulation credibility. Although conventional machine learning techniques have been applied to invert the contact model parameters, they are hampered by the difficulty of selecting the optimal hyperparameters and, in some cases, insufficient data, which limits both the predictive accuracy and robustness. In this study, a total of 361 PFC3D uniaxial compression simulations using a linear parallel bond model with varied mesoscopic parameters were generated to capture a wide range of rock and geotechnical material behaviors. From each stress–strain curve, eight characteristic points were extracted as inputs to a multi-objective Automated Machine Learning (AutoML) model designed to invert three key mesoscopic parameters, i.e., the elastic modulus (E), stiffness ratio (ks/kn), and degraded elastic modulus (Ed). The developed AutoML model, comprising two hidden layers of 256 and 32 neurons with ReLU activation function, achieved coefficients of determination (R2) of 0.992, 0.710, and 0.521 for E, ks/kn, and Ed, respectively, demonstrating acceptable predictive accuracy and generalizability. The multi-objective AutoML model was also applied to invert the parameters from three independent uniaxial compression tests on rock-like materials to validate its practical performance. The close match between the experimental and numerically simulated stress–strain curves confirmed the model’s reliability for mesoscopic parameter inversion in PFC3D.
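The final architecture reported above (two hidden layers of 256 and 32 ReLU units mapping eight curve features to three parameters) can be reproduced in a few lines; the AutoML search itself is not shown, and the synthetic data below merely stands in for the 361 PFC3D simulations.

```python
# Hedged sketch of the reported inverse model: a multi-output MLP with
# hidden layers (256, 32) mapping eight stress-strain features to (E, ks/kn, Ed).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(361, 8))                      # eight curve features per simulation
W = rng.normal(size=(8, 3))
Y = X @ W + 0.1 * rng.normal(size=(361, 3))        # placeholder (E, ks/kn, Ed) targets

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(256, 32), activation="relu",
                 max_iter=2000, random_state=0),
)
model.fit(X_tr, Y_tr)
print("R2 (uniform average over targets):", round(model.score(X_te, Y_te), 3))
```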

14 pages, 5730 KiB  
Article
Offline Magnetometer Calibration Using Enhanced Particle Swarm Optimization
by Lei Huang, Zhihui Chen, Jun Guan, Jian Huang and Wenjun Yi
Mathematics 2025, 13(15), 2349; https://doi.org/10.3390/math13152349 - 23 Jul 2025
Viewed by 166
Abstract
To address the decline in measurement accuracy of magnetometers due to process errors and environmental interference, as well as the insufficient robustness of traditional calibration algorithms under strong interference conditions, this paper proposes an ellipsoid fitting algorithm based on Dynamic Adaptive Elite Particle Swarm Optimization (DAEPSO). The proposed algorithm integrates three enhancement mechanisms: dynamic stratified elite guidance, adaptive inertia weight adjustment, and inferior particle relearning via Lévy flight, aiming to improve convergence speed, solution accuracy, and noise resistance. First, a magnetometer calibration model is established. Second, the DAEPSO algorithm is employed to fit the ellipsoid parameters. Finally, error calibration is performed based on the optimized ellipsoid parameters. Our simulation experiments demonstrate that, compared with the traditional Least Squares Method (LSM), the proposed method reduces the standard deviation of the total magnetic field intensity by 54.73%, effectively improving calibration precision in the presence of outliers. Furthermore, when compared to PSO, TSLPSO, MPSO, and AWPSO, the sum of the absolute distances from the simulation data to the fitted ellipsoidal surface decreases by 53.60%, 41.96%, 53.01%, and 27.40%, respectively. The results from 60 independent experiments show that DAEPSO achieves lower median errors and smaller interquartile ranges than comparative algorithms. In summary, the DAEPSO-based ellipsoid fitting algorithm exhibits high fitting accuracy and strong robustness in environments with intense interference noise, providing reliable theoretical support for practical engineering applications.
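For context, the least-squares baseline that DAEPSO is benchmarked against can be sketched as a plain quadric fit: fit the general ellipsoid to raw magnetometer samples and recover the hard-iron offset as its centre. The data are synthetic, and the DAEPSO optimization and soft-iron correction steps are not shown.

```python
# Hedged sketch of the LSM ellipsoid-fit baseline for magnetometer calibration:
# fit a*x^2 + b*y^2 + c*z^2 + 2f*yz + 2g*xz + 2h*xy + 2p*x + 2q*y + 2r*z = 1
# and recover the hard-iron offset as the ellipsoid centre. Synthetic data only.
import numpy as np

def fit_ellipsoid_center(m):
    x, y, z = m[:, 0], m[:, 1], m[:, 2]
    D = np.column_stack([x*x, y*y, z*z, 2*y*z, 2*x*z, 2*x*y, 2*x, 2*y, 2*z])
    v, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
    a, b, c, f, g, h, p, q, r = v
    A = np.array([[a, h, g], [h, b, f], [g, f, c]])
    return -np.linalg.solve(A, np.array([p, q, r]))   # hard-iron offset estimate

# synthetic measurements: a unit-field sphere distorted and offset
rng = np.random.default_rng(0)
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
true_offset = np.array([0.2, -0.1, 0.3])
soft_iron = np.diag([1.1, 0.9, 1.05])
meas = u @ soft_iron.T + true_offset + 0.005 * rng.normal(size=(500, 3))
print("estimated hard-iron offset:", fit_ellipsoid_center(meas).round(3))
```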

13 pages, 793 KiB  
Communication
Gamma-Ray Bursts Calibrated by Using Artificial Neural Networks from the Pantheon+ Sample
by Zhen Huang, Xin Luo, Bin Zhang, Jianchao Feng, Puxun Wu, Yu Liu and Nan Liang
Universe 2025, 11(8), 241; https://doi.org/10.3390/universe11080241 - 23 Jul 2025
Viewed by 137
Abstract
In this paper, we calibrate the luminosity relation of gamma-ray bursts (GRBs) by employing artificial neural networks (ANNs) to analyze the Pantheon+ sample of type Ia supernovae (SNe Ia) in a manner independent of cosmological assumptions. The A219 GRB dataset is used to calibrate the Amati relation (Ep-Eiso) at low redshift with the ANN framework, facilitating the construction of the Hubble diagram at higher redshifts. Cosmological models are constrained with GRBs at high redshift and the latest observational Hubble data (OHD) via the Markov chain Monte Carlo numerical approach. For the Chevallier-Polarski-Linder (CPL) model within a flat universe, we obtain Ωm = 0.321 (+0.078/−0.069), h = 0.654 (+0.053/−0.071), w0 = −1.02 (+0.67/−0.50), and wa = −0.98 (+0.58/−0.58) at the 1σ confidence level, which indicates a preference for dark energy with potential redshift evolution (wa ≠ 0). These findings using ANNs align closely with those derived from GRBs calibrated using Gaussian processes (GPs).

22 pages, 7569 KiB  
Article
Ancient Ship Structures: Ultimate Strength Analysis of Wooden Joints
by Albert Zamarin, Smiljko Rudan, Davor Bolf, Alice Lucchini and Irena Radić Rossi
J. Mar. Sci. Eng. 2025, 13(8), 1392; https://doi.org/10.3390/jmse13081392 - 22 Jul 2025
Viewed by 178
Abstract
This paper presents an analysis of the ultimate strength of wooden joints in the structures of ancient wooden ships. The aim is to contribute to the discussion about how joining technology and types of joints contributed to the transition from ‘shell-first’ to ‘frame-first’ construction, the latter of which is still the traditional Mediterranean wooden shipbuilding technology. Historically, ship construction has consisted of two main structural types of elements: planking and stiffening. Therefore, two characteristic carvel planking joints and two longitudinal keel joints were selected for analysis. For planking, the joint details of the ship Uluburun (14th c. BC) and the ship Kyrenia (4th c. BC) were chosen, while two different types of scarf joints belonging to the ship Jules-Verne 9 (6th c. BC) and the ship Toulon 2 (1st c. AD) were selected for the keel. The capacity, i.e., the ultimate strength of the joint, is compared to the strength of the structure as if there were no joint. The analysis simulates the independent loading of each of the six numerical models in bending, tension, and compression until collapse. The results are presented as load-end-shortening curves, and the calculation was performed as a nonlinear FE analysis on solid elements using the LS-DYNA explicit solver. Since wood is an anisotropic material, a large number of parameters are needed to describe the wood’s behaviour as realistically as possible. To determine all the necessary mechanical properties of the two types of structural wood, pine and oak, a physical experiment was performed and its results were compared with numerical calculations. In this way, the material models were calibrated and used in the ultimate strength analysis of the presented joints.
(This article belongs to the Section Ocean Engineering)

14 pages, 2822 KiB  
Article
Accuracy and Reliability of Smartphone Versus Mirrorless Camera Images-Assisted Digital Shade Guides: An In Vitro Study
by Soo Teng Chew, Suet Yeo Soo, Mohd Zulkifli Kassim, Khai Yin Lim and In Meei Tew
Appl. Sci. 2025, 15(14), 8070; https://doi.org/10.3390/app15148070 - 20 Jul 2025
Viewed by 346
Abstract
Image-assisted digital shade guides are increasingly popular for shade matching; however, research on their accuracy remains limited. This study aimed to compare the accuracy and reliability of color coordination in image-assisted digital shade guides constructed using calibrated images of their shade tabs captured by a mirrorless camera (Canon, Tokyo, Japan) (MC-DSG) and a smartphone camera (Samsung, Seoul, Korea) (SC-DSG), using a spectrophotometer as the reference standard. Twenty-nine VITA Linearguide 3D-Master shade tabs were photographed under controlled settings with both cameras equipped with cross-polarizing filters. Images were calibrated using Adobe Photoshop (Adobe Inc., San Jose, CA, USA). The L* (lightness), a* (red-green chromaticity), and b* (yellow-blue chromaticity) values, which represent the color attributes in the CIELAB color space, were computed at the middle third of each shade tab using Adobe Photoshop. Specifically, L* indicates the brightness of a color (ranging from black [0] to white [100]), a* denotes the position between red (+a*) and green (–a*), and b* represents the position between yellow (+b*) and blue (–b*). These values were used to quantify tooth shade and compare them to reference measurements obtained from a spectrophotometer (VITA Easyshade V, VITA Zahnfabrik, Bad Säckingen, Germany). Mean color differences (∆E00) between MC-DSG and SC-DSG, relative to the spectrophotometer, were compared using an independent t-test. The ∆E00 values were also evaluated against perceptibility (PT = 0.8) and acceptability (AT = 1.8) thresholds. Reliability was evaluated using intraclass correlation coefficients (ICC), and group differences were analyzed via one-way ANOVA and Bonferroni post hoc tests (α = 0.05). SC-DSG showed significantly lower ΔE00 deviations than MC-DSG (p < 0.001), falling within the clinically acceptable threshold (AT). The L* values from MC-DSG were significantly higher than SC-DSG (p = 0.024). All methods showed excellent reliability (ICC > 0.9). The findings support the potential of smartphone image-assisted digital shade guides for accurate and reliable tooth shade assessment.
(This article belongs to the Special Issue Advances in Dental Materials, Instruments, and Their New Applications)

12 pages, 607 KiB  
Article
A Modified Two-Temperature Calibration Method and Facility for Emissivity Measurement
by Shufang He, Shuai Li, Caihong Dai, Jinyuan Liu, Yanfei Wang, Ruoduan Sun, Guojin Feng and Jinghui Wang
Materials 2025, 18(14), 3392; https://doi.org/10.3390/ma18143392 - 19 Jul 2025
Viewed by 236
Abstract
Measuring the emissivity of an infrared radiant sample with high accuracy is important. Previous studies reported multi- or two-temperature calibration methods that used a reference blackbody (or blackbodies) to eliminate the background radiation and assumed that the background radiation was independent of temperature. However, in practical measurements, this assumption does not hold. To solve this problem, this study proposes a modified two-temperature calibration method and facility. The two temperature points are set within a small interval determined by the proposed calculation method; under the approximation that the sample emissivity and the background radiation remain the same at these two temperatures, the emissivity can be calculated from the measurement signals at these two temperatures, and a reference blackbody is not needed. An experimental facility was built, and three samples with emissivities around 0.100, 0.500, and 0.900 were measured in the (8~14) μm band. The relative expanded uncertainties were 9.6%, 4.0%, and 1.5% at 60 °C, respectively, and 8.8%, 5.8%, and 1.2% at 85 °C (k = 2), respectively. The experimental results showed consistency with results obtained using other methods, indicating the effectiveness of the developed method. The developed method might be suitable for samples whose emissivities are insensitive to temperature.
(This article belongs to the Section Advanced Materials Characterization)
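A minimal numerical illustration of the two-temperature differencing idea, under the abstract's approximation that the sample emissivity and background radiation are unchanged between two nearby temperatures, and under the additional assumption (not stated in the abstract) that the instrument reports band radiance directly: ε = (S(T2) − S(T1)) / (Lbb(T2) − Lbb(T1)). All numbers below are hypothetical.

```python
# Hedged illustration of two-temperature differencing with Planck band radiance
# integrated over the (8~14) um band. Signals and temperatures are hypothetical.
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def band_radiance(T, lam_lo=8e-6, lam_hi=14e-6, n=2000):
    """Blackbody radiance integrated over the band, W m^-2 sr^-1 (trapezoid rule)."""
    lam = np.linspace(lam_lo, lam_hi, n)
    spectral = 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))
    return float(np.sum(0.5 * (spectral[1:] + spectral[:-1]) * np.diff(lam)))

eps_true, L_bg = 0.500, 5.0                       # hypothetical sample and background
T1, T2 = 333.15, 338.15                           # two nearby temperatures (K)
S1 = eps_true * band_radiance(T1) + L_bg          # simulated measurement signals
S2 = eps_true * band_radiance(T2) + L_bg

eps_est = (S2 - S1) / (band_radiance(T2) - band_radiance(T1))
print("recovered emissivity:", round(eps_est, 3))  # the background term cancels
```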

16 pages, 2566 KiB  
Article
Parameter Sensitivity Study of the Johnson–Cook Model in FEM Turning of Ti6Al4V Alloy
by Piotr Löschner, Piotr Niesłony and Szymon Kołodziej
Materials 2025, 18(14), 3351; https://doi.org/10.3390/ma18143351 - 17 Jul 2025
Viewed by 362
Abstract
The aim of this study was to analyse in detail the effect of varying the parameters of the Johnson–Cook (JC) material model on the results of a numerical simulation of the orthogonal turning process of the Ti6Al4V titanium alloy. The first step involved an experimental study, including the recording of cutting force components and temperature, as well as the measurement of chip geometry, which was used to validate the FEM simulation. This was followed by a sensitivity analysis of the JC model with respect to five parameters, namely A, B, C, m, and n, each modified independently by ±20%. The effects of these changes on cutting forces, cutting zone temperature, stresses, and chip geometry were evaluated. The results showed that parameters A, B, and m had the greatest influence on the physical quantities analysed, while C and n were of secondary importance. The analysis highlighted the need for precise calibration of the JC model parameters, especially when modelling machining processes involving difficult-to-machine materials. The results provided practical guidance for optimising the selection of constitutive parameters in machining simulations.
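The Johnson–Cook flow-stress law at the centre of this sensitivity study, σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ) with T* the homologous temperature, can be perturbed parameter by parameter as sketched below. The constants and the cutting-zone state are illustrative placeholders, not the calibrated Ti6Al4V set from the paper.

```python
# Hedged sketch: Johnson-Cook flow stress with each parameter perturbed by +/-20%.
import numpy as np

def jc_stress(eps, eps_dot, T, A, B, C, n, m,
              eps_dot0=1.0, T_room=293.15, T_melt=1878.0):
    T_star = (T - T_room) / (T_melt - T_room)      # homologous temperature
    return (A + B * eps**n) * (1 + C * np.log(eps_dot / eps_dot0)) * (1 - T_star**m)

base = dict(A=900.0, B=700.0, C=0.03, n=0.35, m=1.0)   # placeholder JC constants (MPa)
eps, eps_dot, T = 0.3, 1e3, 800.0                      # representative cutting-zone state

ref = jc_stress(eps, eps_dot, T, **base)
for key in base:
    for factor in (0.8, 1.2):                          # the +/-20% perturbations
        p = dict(base, **{key: base[key] * factor})
        change = 100 * (jc_stress(eps, eps_dot, T, **p) / ref - 1)
        print(f"{key} x{factor:.1f}: flow stress change {change:+.1f}%")
```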
