Search Results (578)

Search Parameters:
Keywords = iterative thresholding

30 pages, 4764 KB  
Article
Training-Free and Environment-Robust Human Motion Segmentation with Commercial WiFi Device: An Image Perspective
by Xu Wang, Linghua Zhang and Feng Shu
Appl. Sci. 2026, 16(1), 373; https://doi.org/10.3390/app16010373 - 29 Dec 2025
Abstract
WiFi sensing relies on capturing channel state information (CSI) fluctuations induced by human activities. Accurate motion segmentation is crucial for applications ranging from intrusion detection to activity recognition. However, prevailing methods based on variance, correlation coefficients, or deep learning are often constrained by complex threshold-setting procedures and dependence on high-quality sample data. To address these limitations, this paper proposes a training-free and environment-independent motion segmentation system using commercial WiFi devices from an image-processing perspective. The system employs a novel quasi-envelope to characterize CSI fluctuations and an iterative segmentation algorithm based on an improved Otsu thresholding method. Furthermore, a dedicated motion detection algorithm, leveraging the grayscale distribution of variance images, provides a precise termination criterion for the iterative process. Real-world experiments demonstrate that our system achieves an E-FPR of 0.33% and an E-FNR of 0.20% in counting motion events, with average temporal errors of 0.26 s and 0.29 s in locating the start and end points of human activity, respectively, confirming its effectiveness and robustness.
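
The improved Otsu variant, the quasi-envelope, and the variance-image termination criterion are specific to the paper; the core building block, however, is the classic Otsu criterion applied to a 1-D fluctuation signal. A minimal sketch, assuming a generic envelope array and a simple stop-when-stable iteration (both illustrative, not the authors' algorithm):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu threshold on a 1-D array: pick the histogram bin
    center that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                    # weight of the low class
    w1 = 1.0 - w0                        # weight of the high class
    mu = np.cumsum(p * centers)          # cumulative class mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

def segment_motion(envelope, max_iter=10):
    """Toy iterative segmentation: re-threshold over the currently
    detected motion samples until the mask stabilizes."""
    active = np.ones_like(envelope, dtype=bool)
    for _ in range(max_iter):
        t = otsu_threshold(envelope[active])
        new_active = envelope > t
        if np.array_equal(new_active, active):
            break
        active = new_active
    return active                        # boolean mask of motion samples
```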

18 pages, 3217 KB  
Article
Multilayer Perceptron, Radial Basis Function, and Generalized Regression Networks Applied to the Estimation of Total Power Losses in Electrical Systems
by Giovana Gonçalves da Silva, Ronald Felipe Marca Roque, Moisés Arreguín Sámano, Neylan Leal Dias, Ana Claudia de Jesus Golzio and Alfredo Bonini Neto
Mach. Learn. Knowl. Extr. 2026, 8(1), 4; https://doi.org/10.3390/make8010004 - 26 Dec 2025
Viewed by 134
Abstract
This paper presents an Artificial Neural Network (ANN) approach for estimating total real and reactive power losses in electrical power systems. Three network architectures were explored: the Multilayer Perceptron (MLP), the Radial Basis Function (RBF) network, and the Generalized Regression Neural Network (GRNN). The main advantage of the proposed methodology lies in its ability to rapidly compute power loss values throughout the system. ANN models are especially effective due to their capacity to capture the nonlinear characteristics of power systems, thus eliminating the need for iterative procedures. The applicability and effectiveness of the approach were evaluated using the IEEE 14-bus test system and compared with the continuation power flow method, which estimates losses using conventional numerical techniques. The results indicate that the ANN-based models performed well, achieving mean squared error (MSE) values below the predefined threshold of 0.001 during both training and validation. Notably, the networks accurately estimated the total power losses within the expected range, with residuals on the order of 10⁻⁴. Among the models tested, the RBF network showed slightly superior performance in terms of error metrics, requiring fewer centers (11) than the MLP and GRNN models to meet the established criteria. However, the GRNN achieved the shortest processing time; even so, all three networks produced satisfactory and consistent results, particularly in identifying the critical points of electrical power systems, which is of fundamental importance for ensuring system stability and operational reliability.
(This article belongs to the Section Learning)
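
Of the three architectures, the GRNN is the simplest to express compactly: it is a kernel-weighted average of training targets (Nadaraya–Watson regression with Gaussian kernels). A minimal sketch with hypothetical inputs; the stand-in loss surface, load-multiplier features, and sigma value are illustrative, not the paper's data:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training
    targets (one kernel per training pattern)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical usage: features = bus load multipliers on a 14-bus system,
# target = total real power loss; the quadratic surface is a stand-in.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(200, 14))
y = 0.1 * (X ** 2).sum(axis=1)
print(grnn_predict(X, y, X[:3], sigma=0.8))
```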

16 pages, 6189 KB  
Article
Research on Edge Feature Extraction Methods for Device Monitoring Based on Cloud–Edge Collaboration
by Lei Chen, Longxin Cui, Dongliang Zou, Yakun Wang, Peiquan Wang and Wenxuan Shi
Vibration 2026, 9(1), 2; https://doi.org/10.3390/vibration9010002 - 21 Dec 2025
Viewed by 163
Abstract
Enterprises in industries such as coking and metallurgy possess extensive industrial equipment requiring real-time monitoring and timely fault detection. Transmitting all monitoring data to servers or cloud platforms for processing presents challenges, including substantial data volumes, high latency, and significant bandwidth consumption, thereby compromising the monitoring system’s real-time performance and stability. This paper proposes a cloud–edge collaborative approach for edge feature extraction in equipment monitoring. A three-tier collaborative architecture is established: “edge pre-processing → cloud optimization → edge iteration”. At the edge, lightweight time-domain and frequency-domain feature extraction modules are employed based on equipment structure and failure mechanisms to rapidly pre-process and extract features from monitoring data (e.g., equipment vibration), substantially reducing uploaded data volume. The cloud node constructs a diagnostic feature library through threshold self-learning and data-driven model training, then disseminates optimized feature extraction parameters to the edge node via this threshold learning mechanism. The edge node dynamically iterates its feature extraction capabilities based on updated parameters, enhancing the capture accuracy of critical fault features under complex operating conditions. Verification and demonstration applications were conducted using an enterprise’s online equipment monitoring system as the experimental scenario. The results indicate that the proposed method reduces data transmission volume by 98.21% and required bandwidth by 98.25% compared to pure cloud-based solutions, while effectively enhancing the monitoring system’s real-time performance. This approach significantly improves equipment monitoring responsiveness, reduces demands on network bandwidth and data transmission, and provides an effective technical solution for equipment health management within industrial IoT environments.
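
The abstract does not list the exact edge features, but typical lightweight time- and frequency-domain descriptors for vibration monitoring look like the sketch below; the feature set and sampling rate are assumptions for illustration. Collapsing a 10,000-sample window to a handful of scalars is consistent with the order of data reduction reported:

```python
import numpy as np

def edge_features(x, fs):
    """Per-window descriptors small enough to upload instead of raw samples."""
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2   # impulsiveness
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dominant = freqs[np.argmax(spec[1:]) + 1]              # skip the DC bin
    return {"rms": rms, "peak": peak, "crest": peak / rms,
            "kurtosis": kurt, "dominant_hz": dominant}

fs = 10_000
t = np.arange(fs) / fs                                     # one-second window
x = np.sin(2 * np.pi * 120 * t)                            # 120 Hz tone stand-in
x += 0.1 * np.random.default_rng(1).standard_normal(fs)
print(edge_features(x, fs))                                # 10,000 samples -> 5 scalars
```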

28 pages, 7867 KB  
Article
Efficiency and Running Time Robustness in Real Metro Automatic Train Operation Systems: Insights from a Comprehensive Comparative Study
by María Domínguez, Adrián Fernández-Rodríguez, Asunción P. Cucala and Antonio Fernández-Cardador
Sustainability 2025, 17(24), 11371; https://doi.org/10.3390/su172411371 - 18 Dec 2025
Viewed by 157
Abstract
Automatic Train Operation (ATO) systems are widely deployed in metro networks to improve punctuality, service regularity, and ultimately the sustainability of rail operation. Although eco-driving optimisation has been extensively studied, no previous work has provided a systematic, side-by-side comparison of the two ATO control philosophies most commonly implemented in metro systems worldwide: (i) Type 1, based on speed holding followed by a single terminal coasting at a kilometre point, and (ii) Type 2, which uses speed thresholds to apply either continuous speed holding or iterative coasting–remotoring cycles. These strategies differ fundamentally in their control logic and may lead to distinct operational and energetic behaviours. This paper presents a comprehensive comparison of these two ATO philosophies using a high-fidelity train movement simulator and Pareto-front optimisation via a multi-objective particle swarm algorithm. Forty interstations of a real metro line were evaluated under realistic comfort and operational constraints, and robustness was assessed through sensitivity to three different passenger-load variations (empty train, nominal load and full load). Results show that, once nominal profiles are implemented, Type 1 has up to 5% variability in running times, and Type 2 has up to 20% variability in energy consumption. In conclusion, a new ATO deployment combining both strategies could better balance energy efficiency and timetable robustness in metro operations.
(This article belongs to the Section Sustainable Transportation)

31 pages, 14355 KB  
Article
Deconstructing Seokguram Grotto: Revisiting the Schematic Design
by Chaeshin Yoon and Yongchan Kwon
Buildings 2025, 15(24), 4546; https://doi.org/10.3390/buildings15244546 - 16 Dec 2025
Viewed by 431
Abstract
While the Seokguram Grotto is celebrated in art history for its sculptural mastery, its architectural identity as a constructed stone dome—distinct from excavated caves—remains under-researched. Existing studies have largely relied on geometric analyses based on irrational numbers, which lack a historical basis. This study aims to reconstruct the logical design process of Seokguram by distinguishing between architectural planning and the realities of construction. Methodologically, we employ the concept of design constraints to analyze the grotto’s dimensional system and scene perception. We identify external constraints, such as the recorded dimensions of the Bodhgaya Buddha and cosmological symbolism (rectangular antechamber and circular posterior), and internal constraints, specifically the need for complete visual coordination between the Buddha’s head and the detached nimbus stone. Our analysis reveals that the designers negotiated these constraints through an iterative process. Key findings demonstrate that the pedestal’s height and position were adjusted, and the arched headstone was strategically designed as a threshold to ensure the perfect alignment of the Buddha and the nimbus from the viewer’s perspective. Furthermore, contrary to previous hypotheses proposing the use of irrational numbers (e.g., √2), this study proves that the grotto follows a proportional system based on integer modules (with 12 cheok as the main module) and binary division, which facilitated practical construction. In conclusion, Seokguram is not merely a product of aesthetic intuition but a masterpiece of rational design. In contrast to the vertical transcendence of Western cathedrals, Seokguram Grotto embodies a tectonics of empathy, prioritizing human-scale intimacy and visual harmony.

31 pages, 11484 KB  
Article
Towards Heart Rate Estimation in Complex Multi-Target Scenarios: A High-Precision FMCW Radar Scheme Integrating HDBS and VLW
by Xuefei Dong, Yunxue Liu, Jinwei Wang, Shie Wu, Chengyou Wang and Shiqing Tang
Sensors 2025, 25(24), 7629; https://doi.org/10.3390/s25247629 - 16 Dec 2025
Viewed by 289
Abstract
Non-contact heart rate estimation technology based on frequency-modulated continuous wave (FMCW) radar has garnered extensive attention in single-target scenarios, yet it remains underexplored in multi-target environments. Accurate discrimination of multiple targets and precise estimation of their heart rates constitute key challenges in the multi-target domain. To address these issues, we propose a novel scheme for multi-target heart rate estimation. First, a high-precision distance-bin selection (HDBS) method is proposed for target localization in the range domain. Next, multiple-input multiple-output (MIMO) array processing is combined with the root multiple signal classification (Root-MUSIC) algorithm for angular domain estimation, enabling accurate discrimination of multiple targets. Subsequently, we propose an efficient method for interference suppression and vital sign extraction that cascades variational mode decomposition (VMD), local mean decomposition (LMD), and wavelet thresholding (WT), termed VLW, which enables high-quality heartbeat signal extraction. Finally, to achieve high-precision and super-resolution heart rate estimation with low computational burden, an improved fast iterative interpolated beamforming (FIIB) algorithm is proposed. Specifically, by leveraging the conjugate symmetry of real-valued signals, the improved FIIB algorithm reduces the execution time by approximately 60% compared to the standard version. In addition, the proposed scheme provides sufficient signal-to-noise ratio (SNR) gain through low-complexity accumulation in both distance and angle estimation. Six experimental scenarios are designed, incorporating densely arranged targets and front-back occlusion, and extensive experiments are conducted. Results show that this scheme effectively discriminates multiple targets in all tested scenarios with a mean absolute error (MAE) below 2.6 beats per minute (bpm), demonstrating its viability as a robust multi-target heart rate estimation scheme in various engineering fields.
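
HDBS, VLW, and the improved FIIB are the paper's own contributions. As context only, a common baseline for the first stage (locating a target's range bin before phase extraction) is to range-FFT each chirp and pick the bin whose slow-time phase varies most; the sketch below shows that generic baseline, not HDBS:

```python
import numpy as np

def select_range_bin(frames):
    """Generic FMCW vital-sign baseline: range-FFT each chirp, then pick
    the range bin whose slow-time phase history varies most."""
    rp = np.fft.fft(frames, axis=1)            # (n_chirps, n_range_bins)
    phase = np.unwrap(np.angle(rp), axis=0)    # per-bin phase over slow time
    return int(np.argmax(phase.var(axis=0)))
```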

20 pages, 6782 KB  
Article
Optimizing Interdisciplinary Referral Pathways for Chronic Obstructive Pulmonary Disease Management Across Cardiology and Pulmonology Specialties in the Kingdom of Saudi Arabia
by Majdy M. Idrees, Yahya Z. Habis, Ibrahim Jelaidan, Waleed Alsowayan, Osama Almogbel, Abdalla M. Alasiri, Faisal Al-Ghamdi, Abeer Bakhsh and Faris Alhejaili
J. Clin. Med. 2025, 14(24), 8865; https://doi.org/10.3390/jcm14248865 - 15 Dec 2025
Viewed by 300
Abstract
Background: Chronic obstructive pulmonary disease (COPD) is a progressive respiratory condition with significant economic burden, morbidity, and mortality rates worldwide. In the Kingdom of Saudi Arabia (KSA), 4.2% of adults 40 years and older have COPD, with a higher prevalence in men and older populations. Key risk factors include smoking, air pollution, occupational exposures, and genetics. COPD often coexists with cardiovascular disease (CVD), making diagnosis and management more difficult. This study proposes two referral algorithms to optimize care for COPD patients with coexisting CVD in the KSA. Methods: A nine-member cardiopulmonary task force reviewed the pertinent literature and guidelines and held virtual meetings from April to August 2025. Every algorithmic component was iteratively refined; consensus was reached when at least 80% of participants agreed, and items not reaching this threshold were revised until full agreement was reached. Results: According to the cardiology-to-pulmonology algorithm, patients who have unidentified respiratory symptoms or COPD risk factors undergo spirometry assessment and, if confirmed, are referred to pulmonology for diagnostic confirmation, phenotyping, and treatment, including triple fixed-dose combination therapy (TFDC) when necessary. Conversely, the pulmonology-to-cardiology algorithm directs the evaluation of CVD risk factors and comorbidities, using clinical evaluation, electrocardiography, echocardiography, and biomarker testing to guide cardiology referral. Conclusions: By establishing bidirectional referral pathways, morbidity and healthcare burden can be decreased, early detection can be improved, and multidisciplinary management can be strengthened. Future research should assess the feasibility, cost-effectiveness, and real-world impact within KSA’s healthcare system.
(This article belongs to the Section Respiratory Medicine)

15 pages, 769 KB  
Study Protocol
Mixed-Methods Usability Evaluation of a Detachable Dual-Propulsion Wheelchair Device for Individuals with Spinal Cord Injury: Study Protocol
by Dongheon Kang, Seon-Deok Eun and Jiyoung Park
Disabilities 2025, 5(4), 115; https://doi.org/10.3390/disabilities5040115 - 12 Dec 2025
Viewed by 221
Abstract
Manual wheelchair users with spinal cord injury (SCI) often experience upper-limb strain and pain due to repetitive propulsion. A detachable dual-propulsion add-on device has been developed to mitigate this issue by offering an alternative propulsion mechanism, but its user acceptability and practical benefits must be rigorously evaluated. This study will implement a structured mixed-methods usability assessment of the new device with 30 adult wheelchair users with SCI. The evaluation will combine quantitative surveys, objective task-based performance metrics, and qualitative interviews to capture a comprehensive picture of usability. We will conduct a single-arm mixed-methods protocol using a device-specific 45-item usability questionnaire and semi-structured interviews, followed by convergent triangulation to integrate quantitative scores and qualitative themes. Participants will use the dual-propulsion device in realistic scenarios and then complete a 45-item questionnaire covering effectiveness, efficiency, safety, comfort, and psychosocial satisfaction. In addition, semi-structured interviews will explore users’ experiences, perceived benefits, challenges, and suggestions. During a standardized mobility task course (doorway navigation, ramp ascent, threshold crossing, and 50 m level propulsion), objective performance indicators—including task completion time, task success/error rate, number of lever strokes, and self-selected speed—will be recorded as secondary usability outcomes. The use of both a standardized questionnaire and in-depth interviews will ensure both broad and nuanced assessment of the device’s usability. Data from the survey will be analyzed for usability scores across multiple domains, while interview transcripts will undergo thematic analysis to enrich and validate the quantitative findings. This protocol is expected to provide robust evidence of the device’s usability, inform iterative improvements in its design, and highlight the importance of structured usability evaluations for assistive technologies.

13 pages, 2121 KB  
Article
Determining Olefin Content of Gasoline by Adaptive Partial Least Squares Regression Combined with Near-Infrared Spectroscopy
by Biao Du, Hongfu Yuan, Lu Hao, Yutong Wu, Chen He, Qinghong Wang and Chunmao Chen
Molecules 2025, 30(24), 4742; https://doi.org/10.3390/molecules30244742 - 11 Dec 2025
Viewed by 301
Abstract
The accurate and rapid determination of olefin content in gasoline is crucial for fuel quality control. While near-infrared spectroscopy (NIR) offers a rapid analytical solution, multiple parameters in the conventional partial least squares regression (PLSR) modeling process rely on the modeler’s subjective judgment. Consequently, the quantitative accuracy of the model is often influenced by the modeler’s experience. To address this limitation, this study developed an integrated adaptive PLSR framework. The methodology incorporates four core adaptive components: automated selection of latent variables based on the rate of decrease in PRESS values, dynamic formation of calibration subsets using Spectral Angle Distance and sample number thresholds, optimization of informative wavelength regions via correlation coefficients, and systematic database cleaning through iterative residual analysis. Applied to 248 gasoline samples, this strategy dramatically enhanced model performance, increasing the coefficient of determination (R²) from 0.7391 to 0.9102 and reducing the root mean square error (RMSE) from 1.51% to 0.866% compared to the global PLSR model. This work demonstrates that the adaptive PLSR framework effectively mitigates spectral nonlinearity and improves predictive robustness, thereby providing a reliable and practical solution for the on-site, rapid monitoring of gasoline quality using handheld NIR spectrometers.
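
Of the four adaptive components, the latent-variable rule is the easiest to sketch: keep adding PLS components while the cross-validated PRESS keeps dropping fast enough. A sketch using scikit-learn, where the 2% stalling cutoff and leave-one-out CV are assumptions; the paper's exact decrease-rate rule, Spectral Angle Distance subset formation, and wavelength selection are not reproduced:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def select_latent_variables(X, y, max_lv=15, rel_drop=0.02):
    """Add PLS latent variables until PRESS (predicted residual sum of
    squares) stops falling by more than rel_drop relative to the
    previous value."""
    press_prev = None
    for lv in range(1, max_lv + 1):
        y_cv = cross_val_predict(PLSRegression(n_components=lv), X, y,
                                 cv=LeaveOneOut())
        press = float(((y - y_cv.ravel()) ** 2).sum())
        if press_prev is not None and (press_prev - press) / press_prev < rel_drop:
            return lv - 1, press_prev     # improvement stalled: keep previous size
        press_prev = press
    return max_lv, press_prev
```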

19 pages, 10844 KB  
Article
Hyperspectral Ghost Image Residual Correction Method Based on PSF Degradation Model
by Xijie Li, Jiating Yang, Tieqiao Chen, Siyuan Li, Pengchong Wang, Sai Zhong, Ming Gao and Bingliang Hu
Remote Sens. 2025, 17(24), 4006; https://doi.org/10.3390/rs17244006 - 11 Dec 2025
Viewed by 201
Abstract
In hyperspectral images, ghost image residuals exceeding a certain threshold reduce both the recognition accuracy of the imaging detection system and the target identification rate; they also degrade spectral calibration accuracy, thereby influencing qualitative and quantitative inversion. Conventional ghost image residual correction methods can significantly affect both the relative and absolute calibration accuracy of hyperspectral images. To minimize the impact on spectral calibration accuracy during ghost image residual correction, we propose a ghost image degradation model and an iterative optimization algorithm. In the proposed approach, a ghost image residual degradation model is constructed based on the point spread function (PSF) of ghost image residuals and their energy distribution characteristics. Using the proportion of ghost image residuals and the accuracy of hyperspectral image calibration as constraints, we iteratively optimized typical regional target ghost image residuals across different spectral channels, achieving automated correction of ghost image residuals in various spectral bands. The experimental results show that the energy proportion of ghost image residuals at different wavelengths decreased from 4.6% to 0.3%, the variations in spectral curves before and after correction were less than 0.8%, and the change in absolute radiometric calibration accuracy was below 0.06%.
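
The paper's degradation model and calibration-constrained optimizer are not spelled out in the abstract, but an additive ghost model, observed = scene + conv(scene, PSF_ghost), admits a simple fixed-point correction. The sketch below is written under that assumption and omits the spectral-calibration constraints entirely:

```python
import numpy as np
from scipy.ndimage import convolve

def correct_ghost(observed, psf_ghost, tol=1e-4, max_iter=50):
    """Fixed-point correction for an additive ghost model: iterate
    scene <- observed - conv(scene, psf_ghost). This converges when the
    ghost PSF carries only a few percent of the total energy."""
    scene = observed.astype(float).copy()
    for _ in range(max_iter):
        ghost = convolve(scene, psf_ghost, mode="nearest")  # predicted ghost
        new_scene = observed - ghost
        if np.abs(new_scene - scene).max() <= tol * np.abs(observed).max():
            return new_scene
        scene = new_scene
    return scene
```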

18 pages, 546 KB  
Review
Operationalizing Chronic Inflammation: An Endotype-to-Care Framework for Precision and Equity
by Maria E. Ramos-Nino
Clin. Pract. 2025, 15(12), 233; https://doi.org/10.3390/clinpract15120233 - 10 Dec 2025
Viewed by 346
Abstract
Background/Objectives: Chronic inflammation arises from self-reinforcing immune–metabolic circuits encompassing pattern-recognition signaling, inflammasome activation, cytokine networks, immunometabolic reprogramming, barrier–microbiome disruption, cellular senescence, and neuro–immune–endocrine crosstalk. This review synthesizes these mechanistic axes across diseases and introduces an operational endotype-to-care framework designed to translate mechanistic insights into precision-based, scalable, and equitable interventions. Methods: A narrative, mechanism-focused review was performed, integrating recent literature on immune–metabolic circuits, including pattern-recognition receptors, inflammasome pathways, cytokine modules, metabolic reprogramming, barrier–microbiome dynamics, senescence, and neuro–immune–endocrine signaling. Validated, low-cost screening biomarkers (hs-CRP, NLR, fibrinogen) were mapped to phenotype-guided endotyping panels and corresponding therapeutic modules, with explicit monitoring targets. Results: We present a stepwise, pragmatic pathway progressing from broad inflammatory screening to phenotype-specific endotyping (e.g., IL-6/TNF for metaflammation; ISG/IFN for autoimmunity; IL-23/17 for neutrophilic disease; IL-1β/NLRP3 or urate for crystal-driven inflammation; permeability markers for barrier–dysbiosis). Each module is paired with targeted interventions and prespecified treat-to-target outcomes: for example, achieving a reduction in hs-CRP (e.g., ~40%) within 8–12 weeks is used here as a pragmatic operational benchmark rather than a validated clinical threshold. Where feasible, cytokine and multi-omic panels further refine classification and prognostication. A tiered implementation model (essential, expanded, comprehensive) ensures adaptability and equity across clinical resource levels. Conclusions: Distinct from prior narrative reviews, this framework defines numeric triage thresholds, minimal endotype panels, and objective monitoring criteria that make chronic inflammation management operationalizable in real-world settings. It embeds principles of precision, equity, and stewardship, supporting iterative, evidence-driven implementation across diverse healthcare environments.

34 pages, 3811 KB  
Article
Wavelet Estimation for Density and Copula Functions
by Heni Boubaker and Houcem Belgacem
Mathematics 2025, 13(24), 3932; https://doi.org/10.3390/math13243932 - 9 Dec 2025
Viewed by 185
Abstract
This article investigates the problem of univariate and bivariate density estimation using wavelet decomposition techniques. Special attention is given to the estimation of copula functions, which capture the dependence structure between random variables independently of their marginals. We consider two distinct frameworks: the case of independent and identically distributed (i.i.d.) variables and the case where variables are dependent, allowing us to highlight the impact of the dependence structure on the performance of wavelet-based estimators. Building on this framework, we propose a novel iterative thresholding method applied to the detail coefficients of the wavelet transform. This iterative scheme aims to enhance noise reduction while preserving significant structural features of the underlying density or copula function. Numerical experiments illustrate the effectiveness of the proposed method in both univariate and bivariate settings, particularly in capturing localized features and discontinuities in the presence of varying dependence patterns.
(This article belongs to the Special Issue Probability Statistics and Quantitative Finance)
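
The authors' iterative thresholding rule is their contribution; the baseline it refines is one-shot soft thresholding of the detail coefficients with the universal threshold. A sketch of that baseline for univariate density estimation, in which the histogram seeding, wavelet choice, and decomposition level are assumptions:

```python
import numpy as np
import pywt

def soft_threshold_density(samples, bins=256, wavelet="db4", level=4):
    """Histogram -> wavelet decomposition -> soft-threshold the detail
    coefficients -> inverse transform -> renormalize to unit area."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    coeffs = pywt.wavedec(hist, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise scale
    t = sigma * np.sqrt(2 * np.log(bins))               # universal threshold
    coeffs[1:] = [pywt.threshold(d, t, mode="soft") for d in coeffs[1:]]
    dens = np.clip(pywt.waverec(coeffs, wavelet)[:bins], 0, None)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, dens / np.trapz(dens, centers)      # integrate to 1
```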

14 pages, 1391 KB  
Article
In Vivo Accuracy Assessment of Two Intraoral Scanners Using Open-Source Software: A Comparative Full-Arch Pilot Study
by Francesco Puleio, Fabio Salmeri, Ettore Lupi, Ines Urbano, Roberta Gasparro, Simone De Vita and Roberto Lo Giudice
Oral 2025, 5(4), 97; https://doi.org/10.3390/oral5040097 - 2 Dec 2025
Viewed by 263
Abstract
Background: The precision of intraoral scanners (IOSs) is a key factor in ensuring the reliability of digital impressions, particularly in full-arch workflows. Although proprietary metrology tools are generally employed for scanner validation, open-source platforms could provide a cost-effective alternative for clinical research. Methods: This in vivo study compared the precision of two IOSs—3Shape TRIOS 3 and Planmeca Emerald S—using an open-source analytical workflow based on Autodesk Meshmixer and CloudCompare. A single healthy subject underwent five consecutive full-arch scans per device. Digital models were trimmed, aligned by manual landmarking and iterative closest-point refinement, and analyzed at six deviation thresholds (<0.01 mm to <0.4 mm). The percentage of surface points within clinically acceptable limits (<0.3 mm) was compared using paired t-tests. Results: TRIOS 3 exhibited significantly higher repeatability than Planmeca Emerald S (p < 0.001). At the <0.3 mm threshold, 99.3% ± 0.4% of points were within tolerance for TRIOS 3 versus 92.9% ± 6.8% for Planmeca. At the <0.1 mm threshold, values were 89.6% ± 5.7% and 47.3% ± 13.7%, respectively. Colorimetric deviation maps confirmed greater spatial consistency of TRIOS 3, particularly in posterior regions. Conclusions: Both scanners achieved clinically acceptable precision for full-arch impressions; however, TRIOS 3 demonstrated superior repeatability and lower variability. The proposed open-source workflow proved feasible and reliable, offering an accessible and reproducible method for IOS performance assessment in clinical settings.
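
CloudCompare reports these within-threshold percentages directly; a point-to-point proxy for the same computation (nearest-neighbor distances between two aligned scans, then the share of points under each threshold) can be sketched as follows. The function name and the point-to-point simplification are assumptions, not the study's exact workflow:

```python
import numpy as np
from scipy.spatial import cKDTree

def within_tolerance(scan_pts, reference_pts,
                     thresholds=(0.01, 0.05, 0.1, 0.2, 0.3, 0.4)):
    """Share of scan points (in %) whose nearest-neighbor distance to the
    reference point cloud falls under each threshold (units: mm)."""
    d, _ = cKDTree(reference_pts).query(scan_pts)   # nearest-neighbor distances
    return {t: float((d < t).mean() * 100.0) for t in thresholds}

# Hypothetical usage with two aligned (N, 3) arrays exported from CloudCompare:
# pcts = within_tolerance(scan_a, scan_b)
# pcts[0.3] -> 99.3 would mean 99.3% of points deviate by less than 0.3 mm
```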

18 pages, 30685 KB  
Article
Leveraging Explainable Artificial Intelligence for Place-Based and Quantitative Strategies in Urban Pluvial Flooding Management
by Chaorui Tan, Entong Ke and Haochen Shi
ISPRS Int. J. Geo-Inf. 2025, 14(12), 475; https://doi.org/10.3390/ijgi14120475 - 1 Dec 2025
Viewed by 361
Abstract
Reducing urban pluvial flooding susceptibility requires identifying dominant variables in different regions and offering quantitative management strategies, which remains a challenge for existing methodologies. To address this, this study delves into the characteristics of SHAP (Shapley Additive exPlanations) local interpretability, proposes a novel and concise framework based on explainable artificial intelligence (ensemble learning-SHAP), and applies it to the central urban area of Guangzhou as a case study. The research findings are as follows: (1) This framework captures the nonlinear and threshold effects of flood drivers, identifying specific inflection points where landscape features shift from mitigating to exacerbating flooding. (2) Anthropogenic variables, specifically impervious surface density (ISD) and vegetation (kNDVI), are identified as the dominant variables driving susceptibility in urban hotspots at the grid scale. (3) The interpretability results demonstrate high stability across model iterations. Finally, based on these findings, this study provides place-based and quantitative pluvial flooding management recommendations: for areas dominated by impervious surfaces and vegetation, maintaining the impervious surface density below 0.8 and the kNDVI above 0.25 can effectively reduce the susceptibility to urban flooding. This study provides a framework for achieving place-based and quantitative flooding management and offers valuable scientific insights for flooding management, urban planning, and sustainable urban development in the central district of Guangzhou, as well as in broader developing regions.
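
The grid-scale data are not public, and the abstract names only "ensemble learning" for the model, so the sketch below uses a gradient-boosted regressor and synthetic grid cells whose response flips at ISD ≈ 0.8 and kNDVI ≈ 0.25 (the inflection points reported above). It shows only the generic mechanics of how local SHAP values expose such thresholds:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic grid cells: column 0 = ISD, column 1 = kNDVI (illustrative only)
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = (5.0 * np.maximum(X[:, 0] - 0.8, 0)        # flooding worsens above ISD 0.8
     - 3.0 * np.maximum(X[:, 1] - 0.25, 0)     # vegetation helps above kNDVI 0.25
     + 0.05 * rng.standard_normal(500))

model = GradientBoostingRegressor().fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)  # (500, 2) local contributions

# Plotting each SHAP value against its raw feature reveals where the
# contribution flips sign, i.e., the threshold effect:
# shap.dependence_plot(0, sv, X, feature_names=["ISD", "kNDVI"])
```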

20 pages, 6998 KB  
Article
Seismic Data Enhancement for Tunnel Advanced Prediction Based on TSISTA-Net
by Deshan Feng, Mengchen Yang, Xun Wang, Wenxiu Yan, Chen Chen and Xiao Tao
Appl. Sci. 2025, 15(23), 12700; https://doi.org/10.3390/app152312700 - 30 Nov 2025
Viewed by 335
Abstract
Tunnel seismic advanced prediction is a widely used technique in geotechnical engineering due to its non-destructive characteristics and deep detection capability. However, limitations in acquisition space and complex on-site conditions often result in missing traces, damaged channels, and low-resolution data, thereby hindering accurate geological interpretation. Although deep learning models such as U-Net have shown promise in seismic data reconstruction, their emphasis on local features and fixed parameter configurations limits their capacity to capture global and long-range dependencies, thereby constraining reconstruction accuracy. To address these challenges, this study proposes a novel deep unrolling network, TSISTA-Net (Tunnel Seismic Iterative Shrinkage–Thresholding Algorithm Network), specifically designed to improve seismic data quality. Built upon the ISTA-Net architecture, TSISTA-Net incorporates three distinct innovations. First, reflection padding is utilized to minimize boundary artifacts and effectively recover edge information. Second, multi-scale dilated convolutions are employed to extend the receptive field, thereby facilitating the extraction of long-range and multi-scale features from seismic signals. Third, a lightweight and patch-based processing strategy is adopted, guaranteeing high computational efficiency while maintaining reconstruction quality. The effectiveness of the proposed method was validated on both synthetic and real tunnel seismic datasets. On synthetic data, TSISTA-Net achieved a PSNR of 37.28 dB, an SSIM of 0.9667, and an LCCC of 0.9357, outperforming U-Net (35.93 dB, 0.9480, 0.9087) and conventional ISTA-Net (34.04 dB, 0.9167, 0.8878). These results demonstrate superior signal fidelity, structural similarity, and local correlation relative to established baselines. Consistent improvements were also observed on real tunnel datasets, indicating that TSISTA-Net provides an efficient, data-driven solution for tunnel seismic data processing with strong potential for practical engineering applications.
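
The network's learned layers are beyond a short sketch, but the classical template it unrolls is ISTA for the LASSO problem: a gradient step on the data term followed by soft shrinkage. A minimal sketch, where the problem sizes and the sparse test signal are illustrative:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, the iteration
    that deep-unrolling networks such as ISTA-Net turn into learned layers."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient step on the data term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft shrinkage
    return x

# Illustrative sparse-recovery run: each unrolled TSISTA-Net layer replaces
# the fixed transform and threshold above with learned convolutions.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.1)
```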
