Search Results (5,141)

Search Parameters:
Keywords = error sensitivity

17 pages, 1998 KB  
Article
Analysis of the Measurement Uncertainties in the Characterization Tests of Lithium-Ion Cells
by Thomas Hußenether, Carlos Antônio Rufino Júnior, Tomás Selaibe Pires, Tarani Mishra, Jinesh Nahar, Akash Vaghani, Richard Polzer, Sergej Diel and Hans-Georg Schweiger
Energies 2026, 19(3), 825; https://doi.org/10.3390/en19030825 - 4 Feb 2026
Abstract
The transition to renewable energy systems and electric mobility depends on the effectiveness, reliability, and durability of lithium-ion battery technology. Accurate modeling and control of battery systems are essential to ensure safety, efficiency, and cost-effectiveness in electric vehicles and grid storage. In engineering and materials science, battery models depend on physical parameters such as capacity, energy, state of charge (SOC), internal resistance, power, and self-discharge rate. These parameters are affected by measurement uncertainty. Despite the widespread use of lithium-ion cells, few studies quantify how measurement uncertainty propagates to derived battery parameters and affects predictive modeling. This study quantifies how uncertainty in voltage, current, and temperature measurements reduces the accuracy of derived parameters used for simulation and control. This work presents a comprehensive uncertainty analysis of 18650 format lithium-ion cells with nickel cobalt aluminum oxide (NCA), nickel manganese cobalt oxide (NMC), and lithium iron phosphate (LFP) cathodes. It applies the law of error propagation to quantify uncertainty in key battery parameters. The main result shows that small variations in voltage, current, and temperature measurements can produce measurable deviations in internal resistance and SOC. These findings challenge the common assumption that such uncertainties are negligible in practice. The results also highlight a risk for battery management systems that rely on these parameters for control and diagnostics. The results show that propagated uncertainty depends on chemistry because of differences in voltage profiles, kinetic limitations, and temperature sensitivity. This observation informs cell selection and testing for specific applications. Improved quantification and control of measurement uncertainty can improve model calibration and reduce lifetime and cost risks in battery systems. 
These results support more robust diagnostic strategies and more defensible warranty thresholds. This study shows that battery testing and modeling should report and propagate measurement uncertainty explicitly. This is important for data-driven and physics-informed models used in industry and research. Full article
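The law of error propagation the authors apply can be illustrated for a single derived parameter. The sketch below is not the paper's code, and the pulse-test numbers are hypothetical; it propagates voltage and current uncertainties into an internal resistance estimate R = ΔV/ΔI:

```python
import math

def resistance_uncertainty(dv, di, u_dv, u_di):
    """Propagate voltage/current uncertainties into R = dV/dI.

    First-order law of error propagation:
    u_R^2 = (dR/d(dV))^2 * u_dV^2 + (dR/d(dI))^2 * u_dI^2
          = (1/dI)^2 * u_dV^2 + (dV/dI^2)^2 * u_dI^2
    """
    r = dv / di
    u_r = math.sqrt((u_dv / di) ** 2 + (dv * u_di / di ** 2) ** 2)
    return r, u_r

# Hypothetical pulse test: 0.12 V step at 5 A, 1 mV / 10 mA standard uncertainties.
r, u_r = resistance_uncertainty(0.12, 5.0, 0.001, 0.010)
print(f"R = {r * 1000:.1f} mOhm +/- {u_r * 1000:.3f} mOhm")
```

Even with these modest instrument uncertainties, the relative uncertainty in R is close to 1%, which is the kind of non-negligible propagated error the study quantifies.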

19 pages, 609 KB  
Article
Regime-Switching Fischer–Margrabe Options Pricing with Liquidity Risk and Stochastic Volatility
by Priya Mittal, Dharmaraja Selvamuthu and Guglielmo D’Amico
Mathematics 2026, 14(3), 564; https://doi.org/10.3390/math14030564 - 4 Feb 2026
Abstract
This article presents a model for pricing an exchange option considering stochastic volatility and liquidity risk. The impact of liquidity risk on an asset price is considered by utilizing a liquidity discount process that is influenced by both market and asset-specific liquidity. Girsanov’s theorem is applied to transform from the real-world probability measure to equivalent probability measures, such as the risk-neutral probability measure. The Feynman–Kac theorem is applied to transform the exchange option pricing formula into the vanilla option pricing formula. The analytical expression is derived through the characteristic function approach. The accuracy of the proposed formula is validated through comparisons with Monte Carlo simulation, where the relative error remains below 0.93% across different values of S(0) and τ. Furthermore, numerical experiments highlight that incorporating liquidity risk leads to higher option prices. As the maturity increases from 0.1 to 2.0, the percentage gap between the option prices increases from 1.65% to 20.2%. Finally, sensitivity analysis is conducted to examine the influence of various parameters and to demonstrate the impact of stochastic volatility and liquidity in exchange option valuation. Full article
(This article belongs to the Section E5: Financial Mathematics)
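For background, the classical constant-volatility Margrabe exchange-option formula that such regime-switching models generalize can be sketched as follows. This is the textbook formula, not the authors' stochastic-volatility and liquidity-adjusted pricer, and the parameter values are illustrative:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, sigma1, sigma2, rho, tau):
    """Classical Margrabe price of the option to exchange asset 2 for asset 1.

    Price = s1*N(d1) - s2*N(d2), with effective volatility
    sigma = sqrt(sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2).
    """
    sigma = sqrt(sigma1 ** 2 + sigma2 ** 2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (log(s1 / s2) + 0.5 * sigma ** 2 * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)

# Illustrative at-the-money exchange option, one year to maturity.
price = margrabe(100.0, 100.0, 0.2, 0.3, 0.5, 1.0)
```

As maturity shrinks the price collapses to the intrinsic value max(S1 - S2, 0), which is a useful sanity check for any extension of the formula.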
19 pages, 2099 KB  
Article
Construction Contract Price Prediction Model for Government Buildings Using a Deep Learning Technique: A Study from Thailand
by Kongkoon Tochaiwat and Anuwat Budda
Buildings 2026, 16(3), 651; https://doi.org/10.3390/buildings16030651 - 4 Feb 2026
Abstract
Government building projects are particularly complex due to their scale and number of end users, which makes construction price estimation time-consuming and prone to error. Machine learning is recognized for its ability to process large volumes of complex data quickly with high accuracy, but only a limited number of studies have applied Deep Learning in the early construction stage. Therefore, we aimed to evaluate the potential of Deep Learning to predict construction contract prices for government buildings. Factors were identified through a literature review and interviews with eight experts, and data were collected from 300 government construction projects obtained from Thailand’s Electronic Government Procurement (e-GP) database, the national centralized platform for transparent public bidding. By varying the number of parameters, 80 models were developed and tested. The best-performing model had a three-hidden-layer ratio of 128:64:32 with a Quadratic Loss Function, achieving an R2 of 0.918 and an RMSE of 2.022. The results showed 14 significant factors, with the top 5 being (1) usable area, (2) number of sanitary wares, (3) number of rooms, (4) height, and (5) number of elevators. Sensitivity analysis was subsequently conducted to enhance the explainability of the model. The findings demonstrate the potential of Deep Learning to enhance the accuracy of construction price determination and support more effective government budget planning and decision making. Full article
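The R² and RMSE figures used to rank the 80 models are standard regression metrics. A minimal sketch of both, with illustrative values rather than the paper's project data:

```python
def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root-mean-square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot, (ss_res / n) ** 0.5

# Illustrative values only (not the 300-project e-GP dataset).
r2, rmse = r2_and_rmse([1, 2, 3, 4], [2, 2, 3, 3])
```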

22 pages, 4910 KB  
Article
Tumor Detection and Characterization Using Microwave Imaging Technique—An Experimental Calibration Approach
by Anudev Jenardanan Nair, Suraksha Rajagopalan, Naveen Krishnan Radhakrishna Pillai, Massimo Donelli and Sreedevi K. Menon
Sensors 2026, 26(3), 1014; https://doi.org/10.3390/s26031014 - 4 Feb 2026
Abstract
Microwave imaging (MWI) is a non-invasive technique for visualizing the anomalies of biological tissues. The imaging process is accomplished by comparing the electrical parameters of healthy tissues and malignant tissues. This work introduces a microwave imaging system for tumor detection in breast tissue. The experiment is performed in a homogeneous background medium, where a high dielectric contrast material is used to mimic the tumor. The proposed imaging system is experimentally evaluated for multiple tumor locations and sizes using a horn antenna. Reflection coefficients obtained from the monostatic configuration of the horn antenna are used for image reconstruction. The evaluation metrics, such as localization error, absolute area error, DICE score, Intersection over Union (IoU), precision, accuracy, sensitivity and specificity, are computed from the reconstructed image. A modified version of the beamforming algorithm improves the quality of reconstructed images by providing a minimum accuracy of 96% for all test cases, with an evaluation time of less than 48 s. The proposed methodology shows promising results under a controlled environment and can be implemented for clinical applications after adequate biological studies. This methodology can be used to calibrate any antenna system or phantom, as it has high contrast in conductivity, leading to better imaging. The present study contributes to Sustainable Development Goal (SDG) 3 by ensuring healthy lives and promoting wellbeing for all ages. Full article
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
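The pixel-level evaluation metrics listed above (DICE, IoU, precision, accuracy, sensitivity, specificity) all derive from the confusion counts of a reconstructed mask against a ground-truth mask. A minimal sketch, assuming flat binary masks rather than the paper's reconstructed images:

```python
def mask_metrics(pred, truth):
    """Detection metrics from two binary masks given as flat 0/1 lists."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / len(pred),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

(A production version would guard the divisions against empty masks; that is omitted here for brevity.)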

19 pages, 697 KB  
Article
Unsupervised TTL-Based Deep Learning for Anomaly Detection in SIM-Tagged Network Traffic
by Babe Haiba and Najat Rafalia
Computers 2026, 15(2), 107; https://doi.org/10.3390/computers15020107 - 4 Feb 2026
Abstract
The rise of SIM cloning, identity spoofing, and covert manipulation in mobile and IoT networks has created an urgent need for continuous post-registration verification. This work introduces an unsupervised deep learning framework for detecting behavioral anomalies in SIM-tagged network flows by modeling the intrinsic structure of benign behavioral descriptors (TTL, timing drift, payload statistics). A Temporal Deep Autoencoder (TDAE) combining Conv1D layers and an LSTM encoder is trained exclusively on normal traffic and used to identify deviations through reconstruction error, enabling one-class (label-free) training. For deployment, alarms are set using an unsupervised quantile threshold τα calibrated on benign traffic with a false-alarm budget; τ* is reported only as a diagnostic reference for model comparison. To ensure realism, a large-scale corpus of 3.6 million SIM-tagged flows was constructed by enriching public IoT traffic with pseudo-operator identifiers (synthetic SIM tags derived from device identifiers) and controlled anomaly injections. In a cross-domain transfer experiment under a SIM-grouped protocol, training on clean Cassavia-like traffic and testing on attack-rich Guarascio-like flows yields a PR-AUC of 0.93 for the proposed Conv-LSTM Temporal Deep Autoencoder, outperforming Dense Autoencoder, Isolation Forest, One-Class SVM, and LOF baselines. Conversely, the reverse direction collapses to a PR-AUC of 0.5, confirming the absence of data leakage and the validity of one-class behavioral learning. Sensitivity analysis shows that performance is stable around the unsupervised quantile operating point. Overall, the proposed framework provides a lightweight, interpretable, and data-efficient behavioral verification layer for detecting cloned or unauthorized SIM activity, complementing existing registration mechanisms in next-generation telecom and IoT ecosystems. Full article
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
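The unsupervised quantile threshold τα can be sketched as a nearest-rank quantile of benign reconstruction errors, so that roughly a fraction α of benign traffic raises an alarm. The function names and toy data below are illustrative, not from the paper:

```python
import math

def quantile_threshold(errors, alpha=0.01):
    """Nearest-rank (1 - alpha) quantile of benign reconstruction errors."""
    s = sorted(errors)
    k = max(0, math.ceil((1.0 - alpha) * len(s)) - 1)
    return s[min(k, len(s) - 1)]

def flag_anomalies(errors, tau):
    """A flow is anomalous iff its reconstruction error exceeds the threshold."""
    return [e > tau for e in errors]

# Toy benign reconstruction errors with a 4.5% false-alarm budget.
tau = quantile_threshold(list(range(1, 101)), alpha=0.045)
```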

14 pages, 947 KB  
Article
High-Resolution OFDR with All Grating Fiber Combining Phase Demodulation and Cross-Correlation Methods
by Yanlin Liu, Yang Luo, Xiangpeng Xiao, Zhijun Yan, Yu Qin, Yichun Shen and Feng Wang
Sensors 2026, 26(3), 1004; https://doi.org/10.3390/s26031004 - 3 Feb 2026
Abstract
Spatial resolution is a critical parameter for optical frequency domain reflectometry (OFDR). Phase-sensitive OFDR (Φ-OFDR) measures strain by detecting phase variations between adjacent sampling points and has the potential to reach the theoretical limit of spatial resolution. However, the results of Φ-OFDR suffer from large fluctuations due to multiple types of noise, including coherent fading and system noise. This work presents an OFDR-based strain sensing method that combines phase demodulation with cross-correlation analysis to achieve high spatial resolution. In the phase demodulation, the frequency-shift averaging (FSAV) and rotating vector summation (RVS) algorithms are first employed to suppress coherent fading noise and achieve accurate strain localization. Then a cross-correlation approach with an adaptive window is proposed. Guided by the accurate strain boundary obtained from phase demodulation, the length and position of the cross-correlation window are automatically adjusted to fit continuous and uniform strain regions. As a result, an accurate and complete strain distribution along the entire fiber is finally obtained. The experimental results show that, within a strain range of 100–700 με, the method achieves a spatial resolution of 0.27 mm for the strain boundary, with a root-mean-square error approaching 0.94%. The processing time reaches approximately 0.035 s, with a demodulation length of 1.6 m. The proposed approach offers precise spatial localization of the strain boundary and stable strain measurement, demonstrating its potential for high-resolution OFDR-based sensing applications. Full article
(This article belongs to the Special Issue FBG and UWFBG Sensing Technology)
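The core of a cross-correlation shift estimate can be sketched as an integer-lag toy version (not the authors' adaptive-window spectral implementation): slide one window over the other and keep the lag that maximizes the correlation.

```python
def best_shift(ref, meas, max_shift):
    """Integer lag that maximizes the cross-correlation of two windows."""
    def corr(lag):
        pairs = [(ref[i], meas[i + lag]) for i in range(len(ref))
                 if 0 <= i + lag < len(meas)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)

# Toy signal: 'meas' is 'ref' delayed by two samples.
ref = [0, 0, 1, 2, 1, 0, 0]
meas = [0, 0, 0, 0, 1, 2, 1]
```

In OFDR the recovered spectral shift is then converted to strain through the fiber's strain-frequency coefficient; sub-sample shifts require interpolation around the correlation peak.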
33 pages, 21513 KB  
Article
A No-Reference Multivariate Gaussian-Based Spectral Distortion Index for Pansharpened Images
by Bishr Omer Abdelrahman Adam, Xu Li, Jingying Wu and Xiankun Hao
Sensors 2026, 26(3), 1002; https://doi.org/10.3390/s26031002 - 3 Feb 2026
Abstract
Pansharpening is a fundamental image fusion technique used to enhance the spatial resolution of remote sensing imagery; however, it inevitably introduces spectral distortions that compromise the reliability of downstream analyses. Existing no-reference (NR) quality assessment methods often fail to exclusively isolate these spectral errors from spatial artifacts or lack sensitivity to specific radiometric inconsistencies. To address this gap, this paper proposes a novel No-Reference Multivariate Gaussian-based Spectral Distortion Index (MVG-SDI) specifically designed for pansharpened images. The methodology extracts a hybrid feature set, combining First Digit Distribution (FDD) features derived from Benford’s Law in the hyperspherical color space (HCS) and Color Moment (CM) features. These features are then used to fit Multivariate Gaussian (MVG) models to both the original multispectral and fused images, with spectral distortion quantified via the Mahalanobis distance between their statistical parameters. Experiments on the NBU dataset showed that the MVG-SDI correlates more strongly with standard full-reference benchmarks (such as SAM and CC) than existing NR methods like QNR. Tests with simulated distortions confirmed that the proposed index remains stable and accurate even when facing specific spectral degradations like hue shifts or saturation changes. Full article
(This article belongs to the Special Issue Remote Sensing Image Fusion and Object Tracking)
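The First Digit Distribution features derived from Benford's Law can be sketched as empirical first-significant-digit frequencies compared against the Benford reference. This is a simplified illustration, not the paper's HCS-space pipeline:

```python
import math

# Benford's Law reference probabilities for digits 1..9: log10(1 + 1/d).
BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit_distribution(values):
    """Empirical frequencies of the first significant digit (1-9)."""
    counts = [0] * 9
    for v in values:
        v = abs(v)
        if v == 0:
            continue
        while v >= 10:   # scale down until one digit left of the point
            v /= 10
        while v < 1:     # scale up small values into [1, 10)
            v *= 10
        counts[int(v) - 1] += 1
    total = sum(counts)
    return [c / total for c in counts]
```

A distortion index can then compare the empirical vector against `BENFORD` (the paper does this with Multivariate Gaussian models and a Mahalanobis distance).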
16 pages, 615 KB  
Article
Multimodal Large Language Model for Fracture Detection in Emergency Orthopedic Trauma: A Diagnostic Accuracy Study
by Sadık Emre Erginoğlu, Nuri Koray Ülgen, Nihat Yiğit, Ali Said Nazlıgül and Mehmet Orçun Akkurt
Diagnostics 2026, 16(3), 476; https://doi.org/10.3390/diagnostics16030476 - 3 Feb 2026
Abstract
Background: Rapid and accurate fracture detection is critical in emergency departments (EDs), where high patient volume and time pressure increase the risk of diagnostic error, particularly in radiographic interpretation. Multimodal large language models (LLMs) with image-recognition capability have recently emerged as general-purpose tools for clinical decision support, but their diagnostic performance within routine emergency department imaging workflows in orthopedic trauma remains unclear. Methods: In this retrospective diagnostic accuracy study, we included 1136 consecutive patients referred from the ED to orthopedics between 1 January and 1 June 2025 at a single tertiary center. Given the single-center, retrospective design, the findings should be interpreted as hypothesis-generating and may not be fully generalizable to other institutions. Emergency radiographs and clinical data were processed by a multimodal LLM (2025 version) via an official API using a standardized, deterministic prompt. The model’s outputs (“Fracture present”, “No fracture”, or “Uncertain”) were compared with final diagnoses established by blinded orthopedic specialists, which served as the reference standard. Diagnostic agreement was analyzed using Cohen’s kappa (κ), sensitivity, specificity, accuracy, and 95% confidence intervals (CIs). False-negative (FN) cases were defined as instances where the LLM reported “no acute fracture” but the specialist identified a fracture. The evaluated system is a general-purpose multimodal LLM and was not trained specifically on orthopedic radiographs. Results: Overall, the LLM showed good diagnostic agreement with orthopedic specialists, with concordant results in 808 of 1136 patients (71.1%; κ = 0.634; 95% CI: 68.4–73.7). The model achieved balanced performance with sensitivity of 76.9% and specificity of 66.8%. The highest agreement was observed in knee trauma (91.7%), followed by wrist (78.8%) and hand (69.6%). 
False-negative cases accounted for 184 patients (16.2% of the total cohort), representing 32.4% of all LLM-negative assessments. Most FN fractures were non-displaced (82.6%), and 17.4% of FN cases required surgical treatment. Ankle and foot regions showed the highest FN rates (30.4% and 17.4%, respectively), reflecting the anatomical and radiographic complexity of these areas. Positive predictive value (PPV) and negative predictive value (NPV) were 69.4% and 74.5%, respectively, with likelihood ratios indicating moderate shifts in post-test probability. Conclusions: In an emergency department-to-orthopedics consultation cohort reflecting routine clinical workflow, a multimodal LLM demonstrated moderate-to-good diagnostic agreement with orthopedic specialists, broadly within the range reported in prior fracture-detection AI studies; however, these comparisons are indirect because model architectures, training strategies, datasets, and endpoints differ across studies. Nevertheless, its limited ability to detect non-displaced fractures—especially in anatomically complex regions like the ankle and foot—carries direct patient safety implications and confirms that specialist review remains indispensable. At present, such models may be explored as hypothesis-generating triage or decision-support tools, with mandatory specialist confirmation, rather than as standalone diagnostic systems. Prospective, multi-center studies using high-resolution imaging and anatomically optimized algorithms are needed before routine clinical adoption in emergency care. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Orthopedics)
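Cohen's kappa, the agreement statistic reported above, corrects observed agreement for the agreement expected by chance. A minimal sketch on categorical label lists (illustrative data, not the study's 1136-patient cohort):

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two raters' categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from the raters' marginal frequencies.
    """
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(1 for x, y in zip(a, b) if x == y) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```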

15 pages, 2355 KB  
Article
Pipeline Defect Detection Based on Improved YOLOv11
by Zhiqiang Li, Weimin Shi and Lei Sun
Processes 2026, 14(3), 530; https://doi.org/10.3390/pr14030530 - 3 Feb 2026
Abstract
Underground utility tunnels face corrosion, cracks, and leakage after long-term use, endangering urban safety. Traditional inspection methods suffer from strong subjectivity, high miss rates, and poor real-time performance, failing to meet refined management needs. This paper proposes an attention-enhanced YOLOv11 rather than YOLOv10 because its C3k2 backbone and dynamic anchor head already surpass YOLOv10 by 1.8% mAP for pipeline defect detection in utility tunnels. It uses homomorphic filtering to improve low-light image quality; replaces the last two C3k2 modules of the original YOLOv11 with a Multi-Scale Feature Aggregation Module to capture micro-cracks via expanded receptive fields; introduces a bidirectional weighted feature pyramid network in the neck (with C2PSA/BRA attention) for cross-scale feature fusion and background suppression, which yields both fine-grained micro-crack sensitivity and global false-target suppression; and adopts DIoU loss in the detection head to reduce slender defect localization errors. Experiments on 5000 utility tunnel defect images show the improved algorithm achieves 93.2% precision, 92.4% recall, and 92.6% mAP—outperforming the original YOLOv11, Faster R-CNN, and YOLOv5. Ablation experiments confirm module effectiveness, cutting relative error by 75% compared with the baseline. This algorithm can accurately identify multiple types of defects in complex utility tunnel environments, providing technical support for the safe and efficient operation and maintenance of urban infrastructure. Full article
(This article belongs to the Special Issue Process Engineering: Process Design, Control, and Optimization)
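The DIoU loss adopted in the detection head penalizes both poor overlap and center offset, which helps localize slender defects. A minimal sketch of the standard formula for axis-aligned boxes (an illustration, not the authors' training code):

```python
def diou_loss(box_a, box_b):
    """DIoU loss for boxes (x1, y1, x2, y2):
    1 - IoU + (center distance)^2 / (enclosing-box diagonal)^2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))      # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))      # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union else 0.0
    # Squared distance between box centers.
    cdist2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    # Squared diagonal of the smallest enclosing box.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    return 1.0 - iou + cdist2 / (cw ** 2 + ch ** 2)
```

Unlike plain IoU loss, the center-distance term still supplies a gradient when the boxes do not overlap at all.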

16 pages, 2445 KB  
Article
Prediction and Engineering Verification of Building Settlement in Loess High-Fill Areas
by Zhipeng Huo, Xukun Han and Yafei He
Buildings 2026, 16(3), 638; https://doi.org/10.3390/buildings16030638 - 3 Feb 2026
Abstract
To identify suitable settlement prediction methods for buildings constructed in loess high-fill areas, this study analyzes settlement monitoring data from a residential building in the Yan’an New District. Considering the pronounced compressibility of loess high fills and the depth-sensitive characteristics of differential settlement, four prediction approaches—the exponential curve method, the hyperbolic method, the code-based method, and the finite element method—are employed to forecast settlement. Fitting functions are established for each method to extrapolate the ultimate settlement. The research results indicate that, within the monitoring period, the exponential curve method and the hyperbolic method exhibit higher agreement with the measured settlement data, with prediction errors of 2.56% and 0.62%, respectively. These errors are significantly lower than those of the code-based method and the finite element method, which reach 45.6% and 77.4%, respectively. In particular, the hyperbolic method, through adaptive parameter iteration, controls the discrepancies between calculated and measured settlements at all monitoring points within a range of 0.62% to 8.89%. This method is therefore capable of more accurately capturing the settlement evolution characteristics, consistent with the long-term creep behavior of loess high-fill foundations, and provides a feasible reference method and practical decision-making support for building settlement prediction and engineering design in loess high-fill areas. Full article
(This article belongs to the Section Building Structures)
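The hyperbolic method fits s(t) = t / (a + b·t), whose ultimate settlement as t grows without bound is 1/b. A minimal sketch using the common linearization t/s = a + b·t, with synthetic data rather than the Yan'an monitoring records:

```python
def fit_hyperbolic(ts, ss):
    """Least-squares fit of s(t) = t / (a + b*t) via the line t/s = a + b*t.

    Returns (a, b, ultimate settlement 1/b).
    """
    xs = ts
    ys = [t / s for t, s in zip(ts, ss)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b, 1.0 / b

# Synthetic monitoring series generated from a = 2.0, b = 0.05 (ultimate 20 mm).
ts = [10, 20, 40, 80, 160]
ss = [t / (2.0 + 0.05 * t) for t in ts]
a, b, s_ult = fit_hyperbolic(ts, ss)
```

In practice a and b are re-estimated as monitoring data accumulate, which is the adaptive parameter iteration the abstract credits for the method's low errors.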

16 pages, 416 KB  
Article
An Adaptive IoT-Based Forecasting Framework for Structural and Environmental Risk Detection in Tailings Dams
by Raul Rabadán-Arroyo, Ester Simó, Francesc Aguiló-Gost, Francisco Hernández-Ramírez and Xavier Masip-Bruin
Electronics 2026, 15(3), 658; https://doi.org/10.3390/electronics15030658 - 3 Feb 2026
Abstract
Tailings dams represent one of the most environmentally sensitive infrastructures in the mining industry. To address the need for continuous and accurate monitoring, this paper presents an adaptive forecasting framework that combines Internet of Things (IoT) technologies with machine learning (ML) models to detect early signs of structural and ecological risks. The proposed system architecture is modular and scalable and enables the automated training, selection, and deployment of predictive models for multivariate sensor data. Each sensor data flow is independently analyzed by using a configurable set of algorithms (including linear, convolutional, recurrent, and residual models). The framework is deployed via containers with a CI/CD pipeline and includes real-time visualization through Grafana dashboards. A use case involving tiltmeters and piezometers in an operational tailing dam shows the system’s high predictive accuracy, with mean relative errors below 4% across all variables (in fact, many of them have a mean relative error below 1%). These results highlight the potential of the proposed solution to improve structural and environmental safety in mining operations. Full article
(This article belongs to the Special Issue Empowering IoT with AI: AIoT for Smart and Autonomous Systems)

23 pages, 15113 KB  
Article
Analysis of Underwater Single-Photon LiDAR Signals: A Comprehensive Study on Multi-Parameter Coupling Effects
by Ceyuan Wang, Shijie Liu, Shouzheng Zhu, Wenhang Yang, Chenhui Hu, Yuwei Chen, Chunlai Li and Jianyu Wang
Appl. Sci. 2026, 16(3), 1508; https://doi.org/10.3390/app16031508 - 2 Feb 2026
Abstract
Underwater laser signal attenuation challenges conventional detection, while single-photon LiDAR (SPL) with high sensitivity shows promise. Existing underwater SPL studies primarily focus on isolated parameters, while the coupled effects of environmental and system parameters remain insufficiently investigated. In this work, a 532 nm underwater SPL system was developed to systematically explore multi-parameter coupling mechanisms in laboratory water tanks, including air and three turbidity levels, three detection distances, four laser energy levels, three integration times, and seven targets. This provides quantitative guidance for optimizing SPL systems in complex underwater environments. The results show that the SPL system maintained sub-nanosecond ranging precision, with the standard deviation (SD) of the ranging measurement at 50 cm being 0.0117 ns under low turbidity (0.11 m−1) with 50% laser energy, while under high turbidity (4.2 m−1) conditions, it increased to 0.0338 ns. At 100 cm, the SD was 0.0187 ns in low turbidity and rose to 0.0877 ns in high turbidity. Furthermore, the inversion error of the highly reflective minerals was kept within 3%, and the inverted reflectivity decreased exponentially as turbidity increased. Moreover, an important finding is the forward shift of the detected photon flight time for highly reflective targets. Longer integration times effectively enhanced the signal-to-noise ratio (SNR) under severe attenuation, whereas excessive laser energy risked detector saturation. These findings provide a systematic characterization of how multifactor coupling governs SPL signal dynamics. The results validate the feasibility of SPL for complex underwater detection and offer theoretical insights and technical guidance for future marine applications in resource exploration, environmental monitoring, and national security. Full article
(This article belongs to the Section Optics and Lasers)

21 pages, 3287 KB  
Article
Probabilistic Prediction of Oversized Rock Fragments in Bench Blasting Using Gaussian Process Regression: A Comparative Study with Empirical and Multivariate Regression Analysis Models
by Kesalopa Gaopale, Takashi Sasaoka, Akihiro Hamanaka and Hideki Shimada
Algorithms 2026, 19(2), 120; https://doi.org/10.3390/a19020120 - 2 Feb 2026
Abstract
Oversized rock fragments (boulders) produced during bench blasting adversely affect the efficiency of downstream mining processes such as loading, hauling, and crushing, often requiring costly secondary breakage with mechanized rock breakers. This study presents a probabilistic framework for forecasting boulder size in surface mining operations using Gaussian Process Regression (GPR), benchmarked against the Kuznetsov–Cunningham–Ouchterlony (KCO) empirical fragmentation model and a Multivariate Regression Analysis (MVRA) equation. The study analyzed blasting datasets comprising Geological Strength Index (GSI), number of holes (NH), hole depth (HD), maximum charge per delay (MCPD), total explosive mass (TEM), and boulder size determined by Split-Desktop image analysis. Eight GPR kernels—squared exponential, rational quadratic, Matérn with ν = 3/2, and Matérn with ν = 5/2, each with and without automatic relevance determination (ARD)—were assessed. The GPR model with the ARD Matérn 3/2 kernel attained the best validation performance (R2 = 0.9016, RMSE = 4.2482), outperforming the KCO and MVRA models, which displayed significant prediction errors for boulder size. In addition, sensitivity analysis demonstrated that GSI and HD were the most influential parameters on boulder size, followed by NH, MCPD, and TEM, in that order. The findings indicate that GPR, especially with ARD Matérn kernels, accurately estimates boulder size and can thus serve as a viable method for optimizing blast design and facilitating efficient boulder management in surface mining operations.
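The best-performing model combines a Matérn ν = 3/2 kernel with ARD, i.e., a separate lengthscale per input so that influential features (here GSI and HD) get short lengthscales. A minimal numpy sketch of GPR posterior-mean prediction with this kernel, on hypothetical toy data (not the paper's dataset or hyperparameters):

```python
import numpy as np

def matern32_ard(X1, X2, lengthscales, variance=1.0):
    """Matérn nu=3/2 kernel with automatic relevance determination (ARD):
    each input dimension d has its own lengthscale l_d."""
    # Scaled pairwise distances r_ij = sqrt(sum_d ((x_i,d - x_j,d)/l_d)^2)
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    r = np.sqrt((diff ** 2).sum(axis=-1))
    return variance * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def gpr_predict(X_train, y_train, X_test, lengthscales, noise=1e-4):
    """Posterior mean of a zero-mean GP: mu* = K* (K + sigma_n^2 I)^-1 y."""
    K = matern32_ard(X_train, X_train, lengthscales)
    K += noise * np.eye(len(X_train))
    K_star = matern32_ard(X_test, X_train, lengthscales)
    alpha = np.linalg.solve(K, y_train)
    return K_star @ alpha

# Toy demo: two inputs standing in for, e.g., (GSI, hole depth); data is made up.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
mu = gpr_predict(X, y, X, lengthscales=np.array([0.5, 2.0]))
print(np.max(np.abs(mu - y)))  # small residual at the training points
```

In a real workflow the lengthscales, variance, and noise would be fit by maximizing the marginal likelihood; a dimension with a large fitted lengthscale is effectively pruned, which is how ARD exposes the parameter ranking reported in the sensitivity analysis.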
31 pages, 3706 KB  
Article
Adaptive Planning Method for ERS Point Layout in Aircraft Assembly Driven by Physics-Based Data-Driven Surrogate Model
by Shuqiang Xu, Xiang Huang, Shuanggao Li and Guoyi Hou
Sensors 2026, 26(3), 955; https://doi.org/10.3390/s26030955 - 2 Feb 2026
Abstract
In digital-measurement-assisted assembly of large aircraft components, the spatial layout of Enhanced Reference System (ERS) points determines coordinate transformation accuracy and stability. To address the limitations of manual layout—low efficiency, susceptibility to occlusion, and physical deployment constraints—this paper proposes an adaptive planning method under engineering constraints. First, based on the Guide to the Expression of Uncertainty in Measurement (GUM) and weighted least squares, an analytical transformation sensitivity model is constructed. Subsequently, a multi-scale sample library generated via Monte Carlo sampling trains a high-precision BP neural network surrogate model, enabling millisecond-level sensitivity prediction. Combined with ray-tracing occlusion detection, a weighted genetic algorithm optimizes transformation sensitivity, spatial uniformity, and station distance within feasible ground and tooling regions. Experimental results indicate that the method effectively avoids occlusion: the Registration-Induced Error (RIE) is controlled at approximately 0.002 mm, and the Registration-Induced Loss Ratio (RILR) is maintained at about 10%. Crucially, comparative verification reveals an RIE reduction of approximately 40% relative to a feasible uniform baseline, showing that physics-based, data-driven optimization yields higher accuracy than intuitive geometric distribution. By ensuring strict adherence to engineering constraints, the method offers a reliable solution that significantly enhances measurement reliability and provides solid theoretical support for automated digital twin construction.
(This article belongs to the Section Sensor Networks)
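The coordinate transformation at the heart of ERS registration is a rigid fit of measured reference points onto their nominal positions. As a simplified illustration (the paper uses weighted least squares with GUM-based uncertainty, whereas this sketch is the unweighted Kabsch/SVD solution on made-up points):

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch).
    Unweighted sketch; the paper's method additionally weights points and
    propagates measurement uncertainty per GUM."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t

# Hypothetical ERS point layout: recover a known rotation and translation.
rng = np.random.default_rng(1)
P = rng.uniform(-1.0, 1.0, size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = rigid_fit(P, Q)
print(np.max(np.abs(R - R_true)), np.max(np.abs(t - t_true)))
```

With noisy measurements, the residuals of this fit are what a registration-induced-error metric quantifies, and point layout controls how measurement noise amplifies into transformation error—the sensitivity the paper's surrogate model predicts.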
19 pages, 2370 KB  
Article
PTMs_Closed_Search: Multiple Post-Translational Modification Closed Search Using Reduced Search Space and Transferred FDR
by Yury Yu. Strogov, Sergey A. Spirin, Mark V. Ivanov, Maria A. Kulebyakina, Anastasia Yu. Efimenko and Oleg I. Klychnikov
Proteomes 2026, 14(1), 7; https://doi.org/10.3390/proteomes14010007 - 2 Feb 2026
Abstract
Background: Post-translational modification (PTM) searches in MS/MS data currently use either open modification search (OMS) or closed search (CS) algorithms. OMS can identify many PTMs and unknown mass shifts in a single run. Closed search algorithms, in contrast, are more sensitive but limited in the number of PTMs that can be specified per search. Methods: We propose an optimized Python algorithm based on the IdentiPy search engine that performs an automated sequential search for each PTM, drawing on previous annotations from public databases and customized protein lists. We also determined the search-space size sufficient to increase the significance of false discovery rate (FDR) estimation. We modified the FDR calculation by implementing a spline approximation of the ratio of modified decoys and by propagating errors to filter out unstable data and determine the cutoff value. Results: On a test dataset, the pipeline's results were comparable to previously published data in terms of the numbers of unmodified peptides and proteins. Additionally, we identified 13 different types of peptide PTMs and increased relative protein coverage. Our filtration method based on spline-transferred FDR yielded more identified peptides than separate FDR. Conclusions: The pipeline can be used as a standalone application or as a multiple-PTM-search module within data analysis platforms.
(This article belongs to the Section Proteome Bioinformatics)
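The FDR machinery the pipeline refines is the standard target-decoy estimate: rank peptide-spectrum matches by score and estimate FDR at a threshold as #decoys/#targets above it. A minimal sketch of that baseline (the paper's spline approximation and error propagation are built on top of this idea and are not shown here):

```python
def target_decoy_fdr(scores, fdr_limit=0.01):
    """Given (score, is_decoy) pairs, return the lowest score threshold whose
    cumulative target-decoy FDR estimate (#decoys / #targets) stays within
    fdr_limit. Minimal baseline sketch of target-decoy filtering."""
    ranked = sorted(scores, key=lambda s: s[0], reverse=True)
    targets = decoys = 0
    best = None
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= fdr_limit:
            best = score   # lowest score still satisfying the FDR bound
    return best

# Hypothetical toy run: 200 high-scoring targets, one decoy, one more target.
toy = [(float(300 - i), False) for i in range(200)] + [(50.0, True), (40.0, False)]
print(target_decoy_fdr(toy))
```

Sequential closed searches complicate this picture because each PTM-specific search has its own decoy distribution; transferring a single smoothed (spline) FDR curve across searches, as the paper does, is one way to keep the estimates stable.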