Search Results (2,705)

Search Parameters:
Keywords = error differentiation

22 pages, 2640 KB  
Article
Mechanism-Guided and Attention-Enhanced Time-Series Model for Rate of Penetration Prediction in Deep and Ultra-Deep Wells
by Chongyuan Zhang, Chengkai Zhang, Ning Li, Chaochen Wang, Long Chen, Rui Zhang, Lin Zhu, Shanlin Ye, Qihao Li and Haotian Liu
Processes 2025, 13(11), 3433; https://doi.org/10.3390/pr13113433 - 26 Oct 2025
Abstract
Accurate prediction of the rate of penetration (ROP) in deep and ultra-deep wells remains a major challenge due to complex downhole conditions and limited real-time data. To address the issues of physical inconsistency and weak generalization in conventional data-driven approaches, this study proposes a mechanism-guided and attention-enhanced deep learning framework. In this framework, drilling physical principles such as energy balance are reformulated into differentiable constraint terms and directly incorporated into the loss function of deep neural networks, ensuring that model predictions strictly adhere to drilling physics. Meanwhile, attention mechanisms are integrated to improve feature selection and temporal modeling: for tree-based models, we investigate their implicit attention to key parameters such as weight on bit (WOB) and torque; for sequential models, we design attention-enhanced architectures (e.g., LSTM and GRU) to capture long-term dependencies among drilling parameters. Validation on 49,284 samples from 11 deep and ultra-deep wells in China (depth range: 1226–8639 m) demonstrates that the synergy between mechanism constraints and attention mechanisms substantially improves ROP prediction accuracy. In blind-well tests, the proposed method achieves a mean absolute percentage error (MAPE) of 9.47% and an R² of 0.93, significantly outperforming traditional methods under complex deep-well conditions. This study provides reliable intelligent decision support for optimizing deep and ultra-deep well drilling operations. By improving prediction accuracy and enabling real-time anomaly detection, it enhances operational safety and efficiency while reducing drilling risks. The proposed approach offers high practical value for field applications and supports the intelligent development of the oil and gas industry.
(This article belongs to the Section Energy Systems)
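
As a rough illustration of the mechanism-guided loss described above, a differentiable physics penalty can be added directly to a sequence model's training objective. The PyTorch sketch below is illustrative only: the attention pooling is a minimal additive variant, and the power-limited ROP bound (and its constant k) is a hypothetical stand-in for the paper's energy-balance constraints.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ROPNet(nn.Module):
    """Minimal attention-pooled LSTM regressor for ROP prediction."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # additive attention over time steps
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        h, _ = self.lstm(x)
        w = torch.softmax(self.attn(h), dim=1)
        ctx = (w * h).sum(dim=1)           # attention-weighted pooling
        return self.head(ctx).squeeze(-1)

def physics_penalty(rop_pred, wob, rpm, torque, k=1.0):
    """Hypothetical energy-balance-style bound: predicted ROP should not exceed
    what the delivered mechanical power allows (illustrative constraint, not
    the paper's exact formulation)."""
    rop_max = k * torque * rpm / (wob + 1e-6)
    return F.relu(rop_pred - rop_max).pow(2).mean()

def total_loss(rop_pred, rop_true, wob, rpm, torque, lam=0.1):
    # Constraint term enters the loss directly, alongside the data-fit term.
    return F.mse_loss(rop_pred, rop_true) + lam * physics_penalty(rop_pred, wob, rpm, torque)
```

Because the penalty is differentiable, ordinary backpropagation pushes predictions back toward the physically admissible region during training.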
20 pages, 3719 KB  
Communication
Research on High-Density Discrete Seismic Signal Denoising Processing Method Based on the SFOA-VMD Algorithm
by Xiaoji Wang, Kai Lin, Guangzhao Guo, Xiaotao Wen and Dan Chen
Geosciences 2025, 15(11), 409; https://doi.org/10.3390/geosciences15110409 - 25 Oct 2025
Abstract
With the increasing demand for precision in seismic exploration, high-resolution surveys and shallow-layer identification have become essential. This requires higher sampling frequencies during seismic data acquisition, which shortens seismic wavelengths and enables the capture of high-frequency signals to reveal finer subsurface structural details. However, the insufficient sampling rate of existing petroleum instruments prevents the effective capture of such high-frequency signals. To address this limitation, we employ high-frequency geophones together with high-density and high-frequency acquisition systems to collect the required data. Meanwhile, conventional processing methods such as Fourier transform-based time–frequency analysis are prone to phase instability caused by frequency interval selection. This instability hinders the accurate representation of subsurface structures and reduces the precision of shallow-layer phase identification. To overcome these challenges, this paper proposes a denoising method for high-sampling-rate seismic data based on Variational Mode Decomposition (VMD) optimized by the Starfish Optimization Algorithm (SFOA). The denoising results of simulated signals demonstrate that the proposed method effectively preserves the stability of noise-free regions while maintaining the integrity of peak signals. It significantly improves the signal-to-noise ratio (SNR) and normalized cross-correlation coefficient (NCC) while reducing the root mean square error (RMSE) and relative root mean square error (RRMSE). After denoising the surface mountain drilling-while-drilling signals, the resulting waveforms show a strong correspondence with the low-velocity zone interfaces, enabling clear differentiation of shallow stratigraphic distributions.
(This article belongs to the Section Geophysics)
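
The parameter-search idea behind SFOA-VMD can be sketched compactly: VMD quality hinges on the mode count K and the bandwidth penalty alpha, which the paper tunes with the Starfish Optimization Algorithm. The fragment below assumes the third-party vmdpy package (signature VMD(f, alpha, tau, K, DC, init, tol)) and substitutes plain random search for SFOA, whose update rules are not reproduced here; the residual-energy score is a crude stand-in for the paper's SNR/NCC/RMSE criteria.

```python
import numpy as np
from vmdpy import VMD   # third-party package (an assumption): pip install vmdpy

def denoise(signal, K, alpha):
    """Decompose, then treat the highest-frequency mode as noise and drop it."""
    u, _, _ = VMD(signal, alpha, 0.0, K, 0, 1, 1e-7)
    recon = u[:-1].sum(axis=0)
    score = -np.mean((signal[:recon.size] - recon) ** 2)  # crude proxy score
    return recon, score

def tune(signal, n_trials=30, seed=0):
    """Random search stands in for the Starfish Optimization Algorithm (SFOA)."""
    rng = np.random.default_rng(seed)
    best_params, best_score, best_recon = None, -np.inf, None
    for _ in range(n_trials):
        K = int(rng.integers(3, 9))            # candidate number of modes
        alpha = float(rng.uniform(500, 5000))  # candidate bandwidth penalty
        recon, score = denoise(signal, K, alpha)
        if score > best_score:
            best_params, best_score, best_recon = (K, alpha), score, recon
    return best_params, best_recon
```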
24 pages, 1558 KB  
Article
Short-Term Detection of Dynamic Stress Levels in Exergaming with Wearables
by Giulia Masi, Gianluca Amprimo, Irene Rechichi, Gabriella Olmo and Claudia Ferraris
Sensors 2025, 25(21), 6572; https://doi.org/10.3390/s25216572 - 25 Oct 2025
Abstract
This study evaluates the feasibility of using a lightweight, off-the-shelf sensing system for short-term stress detection during exergaming. Most existing studies in stress detection compare rest and task conditions, providing limited insight into continuous stress dynamics, and there is no agreement on optimal sensor configurations. To address these limitations, we investigated dynamic stress responses induced by a cognitive–motor task designed to simulate rehabilitation-like scenarios. Twenty-three participants completed the experiment, providing electrodermal activity (EDA), blood volume pulse (BVP), self-report, and in-game data. Features extracted from physiological signals were analyzed statistically, and shallow machine learning classifiers were applied to discriminate among stress levels. EDA-based features reliably differentiated stress conditions, while BVP features showed less consistent behavior. The classification achieved an overall accuracy of 0.70 across four stress levels, with most errors between adjacent levels. Correlations between EDA dynamics and perceived stress scores suggested individual variability possibly linked to chronic stress. These results demonstrate the feasibility of low-cost, unobtrusive stress monitoring in interactive environments, supporting future applications of dynamic stress detection in rehabilitation and personalized health technologies.
(This article belongs to the Special Issue Wearable Devices for Physical Activity and Healthcare Monitoring)
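
A minimal sketch of the shallow-classifier pipeline described above, with an illustrative (not the study's) per-window EDA feature set and a scikit-learn random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def eda_features(window):
    """Small illustrative feature set per EDA window: level, variability,
    slope statistics, and a crude count of phasic peaks."""
    d = np.diff(window)
    peaks = int((np.sign(d[:-1]) > np.sign(d[1:])).sum())  # slope sign flips (+ to -)
    return np.array([window.mean(), window.std(), d.mean(), d.std(), peaks])

def stress_accuracy(windows, labels):
    """windows: (n_windows, n_samples) EDA segments; labels: stress level 0-3."""
    X = np.vstack([eda_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(),
                        RandomForestClassifier(n_estimators=200, random_state=0))
    return cross_val_score(clf, X, labels, cv=5).mean()  # cross-validated accuracy
```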
13 pages, 2365 KB  
Article
A Novel Algorithm for Detecting Convective Cells Based on H-Maxima Transformation Using Satellite Images
by Jia Liu and Qian Zhang
Atmosphere 2025, 16(11), 1232; https://doi.org/10.3390/atmos16111232 - 25 Oct 2025
Abstract
Mesoscale convective systems (MCSs) play a pivotal role in the occurrence of severe weather phenomena, with convective cells constituting their fundamental elements. The precise identification of these cells from satellite imagery is crucial yet presents significant challenges, including issues related to merging errors and sensitivity to threshold parameters. This study introduces a novel detection algorithm for convective cells that leverages H-maxima transformation and incorporates multichannel data from the FY-2F satellite. The proposed method utilizes H-maxima transformation to identify seed points while maintaining the integrity of core structural features, followed by a novel neighborhood labeling method, region growing, and adaptive merging criteria to effectively differentiate adjacent convective cells. The neighborhood labeling method improves the accuracy of seed clustering and avoids the "over-clustering" and "under-clustering" issues of traditional neighborhood criteria. When compared to established methods such as RDT, ETITAN, and SA, the algorithm demonstrates superior performance, attaining a Probability of Detection (POD) of 0.87, a False Alarm Ratio (FAR) of 0.21, and a Critical Success Index (CSI) of 0.71. These results underscore the algorithm's efficacy in elucidating the internal structures of convective complexes and mitigating false merging errors.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
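
scikit-image ships an H-maxima transform, so the seed-then-grow pipeline can be sketched directly; the brightness-temperature threshold and contrast depth h below are illustrative values, and watershed region growing stands in for the paper's neighborhood labeling and adaptive merging:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def detect_cells(tbb, cold_thresh=241.0, h=3.0):
    """Seed convective cells with the H-maxima transform, then grow regions.
    tbb: brightness-temperature image (K); colder pixels mean stronger
    convection, so maxima are sought on the negated field."""
    mask = tbb < cold_thresh             # candidate convective area
    seeds = h_maxima(-tbb, h)            # keep only cores standing out by >= h
    markers, n_cells = ndi.label(seeds)  # one marker per surviving cold core
    labels = watershed(tbb, markers, mask=mask)  # grow from cores toward warmer pixels
    return labels, n_cells
```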
21 pages, 1805 KB  
Article
Assessment of Compliance with Integral Conservation Principles in Chemically Reactive Flows Using rhoCentralRfFoam 
by Marcelo Frias, Luis Gutiérrez Marcantoni and Sergio Elaskar
Axioms 2025, 14(11), 782; https://doi.org/10.3390/axioms14110782 - 25 Oct 2025
Abstract
Reliable simulations of any flow require proper preservation of the fundamental principles governing the mechanics of its motion, whether in differential or integral form. When these principles are solved in differential form, discretization schemes introduce errors by transforming the continuous physical domain into a discrete representation that only approximates it. This paper analyzes the numerical performance of the solver for supersonic chemically active flows, rhoCentralRfFoam, using integral conservation principles of mass, momentum, energy, and chemical species as a validation tool in a classical test case with a highly refined mesh under nonlinear pre-established reference conditions. The analysis is conducted on this specific test case; however, the methodology presented here can be applied to any problem under study. It may serve as an a posteriori verification tool or be integrated into the solver's workflow, enabling automatic verification of conservation at each time step. The resulting deviations are evaluated, and it is observed that the numerical errors remain below 0.25%, even in cases with a high degree of nonlinearity. These results provide preliminary validation of the solver's accuracy, as well as its ability to capture physically consistent solutions using only information generated internally by the solver for validation. This represents a significant advantage over validation methods that require external comparison with reference solutions, numerical benchmarks, or exact solutions.
(This article belongs to the Special Issue Recent Developments in Mathematical Fluid Dynamics)
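
The a posteriori integral check the authors describe can be demonstrated on a toy problem: after each step of a conservative finite-volume update, integrate the conserved quantity over the domain and compare it with its initial value. The sketch below uses 1D periodic advection rather than rhoCentralRfFoam, with the tolerance echoing the 0.25% bound quoted above.

```python
import numpy as np

def advect_step(rho, u, dx, dt):
    """One conservative upwind finite-volume step on a periodic 1D domain."""
    flux = u * (rho if u > 0 else np.roll(rho, -1))   # flux at each cell's right face
    return rho - dt / dx * (flux - np.roll(flux, 1))  # flux-difference update

def run_with_mass_check(rho0, u=1.0, dx=0.01, dt=0.005, nsteps=200, tol=2.5e-3):
    """Integral conservation check: total mass must stay within tol (0.25%)
    of its initial value at every time step."""
    mass0 = rho0.sum() * dx
    rho = rho0.copy()
    for _ in range(nsteps):
        rho = advect_step(rho, u, dx, dt)
        drift = abs(rho.sum() * dx - mass0) / mass0
        assert drift < tol, f"mass conservation violated: {drift:.2e}"
    return rho
```

In flux-difference form the check passes to machine precision; the assertion is what an automated per-step verification hook would look like.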
16 pages, 1028 KB  
Article
Research on Distributed Temperature and Bending Sensing Measurement Based on DPP-BOTDA
by Zijuan Liu, Yongqian Li and Lixin Zhang
Photonics 2025, 12(11), 1056; https://doi.org/10.3390/photonics12111056 - 24 Oct 2025
Abstract
Traditional single-mode Brillouin optical time-domain analysis systems are inherently limited in terms of sensing capacity, susceptibility to bending loss, and spatial resolution. Multi-core fibers present a promising approach to overcoming these limitations. In this study, a seven-core fiber was utilized, with the central core and three asymmetrically positioned off-axis cores selected for sensing. The temperature coefficients of the four selected cores were experimentally calibrated as 1.103, 0.962, 1.277, and 0.937 MHz/°C, respectively. By employing differential pulse techniques within the Brillouin distributed sensing system, temperature-compensated bending measurements were achieved with a spatial resolution of 20 cm. The fiber was wound around cylindrical mandrels with diameters of 7 cm, 10 cm, and 15 cm. Experimental results demonstrate effective decoupling of temperature and bending strain, enabling accurate curvature reconstruction. Error analysis reveals a minimum deviation of 0.04% for smaller diameters and 0.68% for larger diameters. Cross-comparison of measurements conducted at varying temperatures confirms the robustness and effectiveness of the proposed temperature compensation method.
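
Decoupling temperature from bending strain reduces to a small linear inversion once per-core coefficients are calibrated. In this sketch the temperature coefficients are those quoted above, while the strain coefficients (and the assumption that the central, on-axis core is insensitive to bending strain) are illustrative placeholders:

```python
import numpy as np

# Temperature coefficients (MHz/degC) of the four selected cores, from the
# abstract; the bend-strain coefficients are illustrative placeholders.
K_T = np.array([1.103, 0.962, 1.277, 0.937])
K_EPS = np.array([0.0, 0.05, 0.05, 0.05])   # central core assumed strain-free under bending

def decouple(dnu_mhz):
    """Least-squares split of Brillouin frequency shifts into temperature and
    bending-strain contributions: dnu_i = K_T[i]*dT + K_EPS[i]*d_eps."""
    A = np.column_stack([K_T, K_EPS])
    (dT, d_eps), *_ = np.linalg.lstsq(A, dnu_mhz, rcond=None)
    return dT, d_eps   # temperature-compensated strain comes out directly
```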
19 pages, 562 KB  
Article
New Jacobi Galerkin Operational Matrices of Derivatives: A Highly Accurate Method for Solving Two-Point Fractional-Order Nonlinear Boundary Value Problems with Robin Boundary Conditions
by Hany Mostafa Ahmed
Fractal Fract. 2025, 9(11), 686; https://doi.org/10.3390/fractalfract9110686 - 24 Oct 2025
Abstract
A novel numerical scheme is developed in this work to approximate solutions (APPSs) for nonlinear fractional differential equations (FDEs) governed by Robin boundary conditions (RBCs). The methodology is founded on a spectral collocation method (SCM) that uses a set of basis functions derived from generalized shifted Jacobi (GSJ) polynomials. These basis functions are uniquely formulated to satisfy the homogeneous form of RBCs (HRBCs). Key to this approach is the establishment of operational matrices (OMs) for ordinary derivatives (Ods) and fractional derivatives (Fds) of the constructed polynomials. The application of this framework effectively reduces the given FDE and its RBC to a system of nonlinear algebraic equations that are solvable by standard numerical routines. We provide theoretical assurances of the algorithm's efficacy by establishing its convergence and conducting an error analysis. Finally, the efficacy of the proposed algorithm is demonstrated through three problems, with our APPSs compared against exact solutions (ExaSs) and existing results by other methods. The results confirm the high accuracy and efficiency of the scheme.
(This article belongs to the Section Numerical and Computational Methods)
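
To make the operational-matrix idea concrete, here is a deliberately simplified stand-in: an integer-order two-point BVP with Robin conditions, solved by collocation with Chebyshev differentiation matrices (Trefethen's construction) in place of the paper's generalized shifted Jacobi bases and fractional-order operational matrices:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

def solve_robin_bvp(N, f):
    """Collocation for u'' = f(x) with Robin conditions u + u' = 0 at both
    endpoints: boundary rows of the second-derivative matrix are overwritten
    with the Robin operator, mimicking basis-level boundary enforcement."""
    D, x = cheb(N)
    A, b = D @ D, f(x).astype(float)   # D @ D acts as the u'' operational matrix
    I = np.eye(N + 1)
    for i in (0, N):                   # nodes x[0] = +1 and x[N] = -1
        A[i] = I[i] + D[i]             # row enforcing u(x_i) + u'(x_i) = 0
        b[i] = 0.0
    return np.linalg.solve(A, b), x
```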
21 pages, 1426 KB  
Article
Virtual Biomarkers and Simplified Metrics in the Modeling of Breast Cancer Neoadjuvant Therapy: A Proof-of-Concept Case Study Based on Diagnostic Imaging
by Graziella Marino, Maria Valeria De Bonis, Marisabel Mecca, Marzia Sichetti, Aldo Cammarota, Manuela Botte, Giuseppina Dinardo, Maria Imma Lancellotti, Antonio Villonio, Antonella Prudente, Alexios Thodas, Emanuela Zifarone, Francesca Sanseverino, Pasqualina Modano, Francesco Schettini, Andrea Rocca, Daniele Generali and Gianpaolo Ruocco
Med. Sci. 2025, 13(4), 242; https://doi.org/10.3390/medsci13040242 - 24 Oct 2025
Abstract
Background: Neoadjuvant chemotherapy (NAC) is a standard preoperative intervention for early-stage breast cancer (BC). Dynamic contrast-enhanced magnetic resonance imaging (CE-MRI) has emerged as a critical tool for evaluating treatment response and pathological complete response (pCR) following NAC. Computational modeling offers a robust framework to simulate tumor growth dynamics and therapy response, leveraging patient-specific data to enhance predictive accuracy. Despite this potential, integrating imaging data with computational models for personalized treatment prediction remains underexplored. This case study presents a proof-of-concept prognostic tool that bridges oncology, radiology, and computational modeling by simulating BC behavior and predicting individualized NAC outcomes. Methods: CE-MRI scans, clinical assessments, and blood samples from three retrospective NAC patients were analyzed. Tumor growth was modeled using a system of partial differential equations (PDEs) within a reaction–diffusion mass transfer framework, incorporating patient-specific CE-MRI data. Tumor volumes measured pre- and post-treatment were compared with model predictions. A 20% error margin was applied to assess computational accuracy. Results: All cases were classified as true positive (TP), demonstrating the model's capacity to predict tumor volume changes within the defined threshold, achieving 100% precision and sensitivity. Absolute differences between predicted and observed tumor volumes ranged from 0.07 to 0.33 cm³. Virtual biomarkers were employed to quantify novel metrics: the biological conversion coefficient ranged from 4 × 10⁻⁷ to 6 × 10⁻⁶ s⁻¹, while the pharmacodynamic efficiency coefficient ranged from 1 × 10⁻⁷ to 4 × 10⁻⁴ s⁻¹, reflecting intrinsic tumor biology and treatment effects, respectively. Conclusions: This approach demonstrates the feasibility of integrating CE-MRI and computational modeling to generate patient-specific treatment predictions. Preliminary model training on retrospective cohorts with matched BC subtypes and therapy regimens enabled accurate prediction of NAC outcomes. Future work will focus on model refinement, cohort expansion, and enhanced statistical validation to support broader clinical translation.
(This article belongs to the Special Issue Feature Papers in Section “Cancer and Cancer-Related Research”)
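
A reaction–diffusion model of the kind described can be sketched with an explicit finite-difference update. All coefficients below are illustrative, with rho and kd playing the roles of the biological conversion and pharmacodynamic efficiency coefficients (per-second magnitudes in the ranges quoted above):

```python
import numpy as np

def grow_tumor(c0, days=90, D=1e-6, rho=5e-6, kd=2e-6, dx=0.1, dt=3600.0):
    """Explicit 1D reaction-diffusion sketch of NAC response:
        dc/dt = D*c_xx + rho*c*(1 - c) - kd*c
    c is a normalized tumor cell density; rho mimics the 'biological
    conversion coefficient' and kd the 'pharmacodynamic efficiency
    coefficient'. dt*D/dx**2 = 0.36 keeps the explicit scheme stable."""
    c = c0.copy()
    for _ in range(int(days * 24)):                       # hourly steps
        lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c += dt * (D * lap + rho * c * (1 - c) - kd * c)  # growth minus drug kill
        c[0], c[-1] = c[1], c[-2]                         # zero-flux boundaries
    return c
```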
14 pages, 2944 KB  
Article
Calculating the Sediment Flux in Hydrometric Data-Scarce Small Island Coastal Watersheds
by Gaocong Li, Liping Huang, Longbo Deng and Changliang Tong
J. Mar. Sci. Eng. 2025, 13(11), 2039; https://doi.org/10.3390/jmse13112039 - 24 Oct 2025
Abstract
Information on the sediment flux (Qs) from hydrometric data-scarce small coastal watersheds is an important supplement for interpreting the sedimentary records of continental shelf sedimentary systems. This paper proposes a solution that estimates these values from the empirical formulas of small and medium-sized coastal watersheds in adjacent regions, taking the 25 small rivers of Hainan Island as an example. Three categories of methods were applied to calculate Qs: the first directly applies global empirical formulas, while the second and third use empirical formulas calibrated with regional characteristic data. The Qs calculation accuracy of these methods was validated against observed values for typical rivers. Key findings include: (1) watershed areas extracted from SRTM (Shuttle Radar Topography Mission) data correlate strongly with actual values, confirming the reliability and applicability of SRTM data; (2) the global equation significantly overestimates Qs for the validation rivers (average relative error of 18.73), while the pristine-modified and disturbed-modified equations effectively improve calculation accuracy (average relative errors of 0.72 and 1.64, respectively); (3) averaging the results of the different models gives a Qs for the major rivers of Hainan Island of 6.07 Mt/a before large-scale human activities and 4.56 Mt/a after. This study demonstrates that adjustments to global empirical formulas must not only be considered but must also differentiate between conditions before and after large-scale human activities in small coastal watersheds.
(This article belongs to the Special Issue Coastal Geochemistry: The Processes of Water–Sediment Interaction)
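
The calibrate-then-transfer workflow can be illustrated with a single-predictor power law fitted to gauged regional rivers; real sediment-flux formulas also fold in relief, runoff, and temperature, so this form and the dummy numbers below are purely illustrative:

```python
import numpy as np

def calibrate_power_law(area_km2, qs_obs):
    """Fit Qs = a * A**b in log-log space against gauged regional rivers."""
    b, log_a = np.polyfit(np.log(area_km2), np.log(qs_obs), 1)
    return np.exp(log_a), b

def avg_relative_error(qs_pred, qs_obs):
    """The validation metric quoted in the abstract."""
    return float(np.mean(np.abs(qs_pred - qs_obs) / qs_obs))

# Dummy calibration rivers (areas in km^2, fluxes in Mt/a), purely illustrative:
a, b = calibrate_power_law(np.array([500.0, 1200.0, 3100.0]),
                           np.array([0.05, 0.18, 0.61]))
qs_ungauged = a * 800.0 ** b   # estimate for a hydrometric data-scarce watershed
```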
22 pages, 2224 KB  
Article
Modelling, Design, and Control of a Central Motor Driving Reconfigurable Quadcopter
by Zhuhuan Wu, Ke Huang and Jiaying Zhang
Drones 2025, 9(11), 736; https://doi.org/10.3390/drones9110736 - 23 Oct 2025
Abstract
Constrained by fixed frame dimensions, conventional drones usually demonstrate insufficient capability to accommodate complex environments. Reconfigurable drones can address this limitation through a deformable frame equipped with actuators or passive interaction mechanisms; however, these additional components may introduce an excessive weight burden, which conflicts with the lightweight objective in aircraft design. In this work, we propose a novel reconfigurable quadrotor inspired by the swimming morphology of jellyfish, with only one actuator, placed at the centre of the frame, achieving significant morphological reconfiguration. In the morphing mechanism, three telescopic sleeves are driven by the actuator, enabling arm rotation that achieves a maximum projected-area reduction of 55%. The nested design of the sleeves ensures a sufficient morphing range while maintaining structural compactness in the fully deployed mode. Furthermore, key structural dimensions are optimized, reducing the central motor load by up to 65% across configurations. After deriving the parameter variations during morphing, Proportional–Integral–Derivative (PID) controllers are implemented and flight simulations are conducted in MATLAB. Results confirm the drone's sustained controllability during and after reconfiguration, with a root mean square error (RMSE) of 0.109 m in tracking an "8"-shaped trajectory and successful traversal through long narrow slits, reducing mission duration under certain conditions.
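
For reference, the PID law used in such flight controllers is a few lines in discrete time; in the morphing setting the gains would be re-tuned (or gain-scheduled) as arm geometry and inertia change:

```python
class PID:
    """Discrete Proportional-Integral-Derivative controller (one per axis)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt            # accumulated (integral) error
        deriv = (err - self.prev_err) / self.dt   # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. altitude loop at 100 Hz with illustrative gains:
altitude_pid = PID(kp=2.0, ki=0.1, kd=0.8, dt=0.01)
```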
17 pages, 680 KB  
Article
Stochastic SO(3) Lie Method for Correlation Flow
by Yasemen Ucan and Melike Bildirici
Symmetry 2025, 17(10), 1778; https://doi.org/10.3390/sym17101778 - 21 Oct 2025
Abstract
Creating mathematical models of real-world problems and proposing new methods for their solution are of central importance. Today, symmetry groups and algebras are widely used to solve mathematical models, in mathematical physics as well as in many fields from engineering to economics. This paper introduces a novel methodological framework based on the SO(3) Lie method to estimate time-dependent correlation matrices (correlation flows) among three variables exhibiting chaotic, entropic, and fractal characteristics, covering 11 April 2011 to 31 December 2024 for daily data, 10 April 2011 to 29 December 2024 for weekly data, and April 2011 to December 2024 for monthly data. In doing so, it extends the stochastic SO(2) Lie method to SO(3) in order to obtain the correlation flow for three such variables. The results were obtained in three stages. First, we applied entropy measures (Shannon, Rényi, Tsallis, Higuchi), Kolmogorov–Sinai complexity (KSC), Hurst exponents, rescaled range tests, and Lyapunov exponent methods (Wolf, Rosenstein, and Kantz); together these found evidence of chaos, entropy, and complexity. Second, we set out the stochastic differential equations on S² (the SO(3) Lie group) and its Lie algebra used to obtain the correlation flows; the resulting equation was solved numerically, and the correlation flows were obtained via the defined covariance-flow transformation. Finally, we ran a robustness check, which showed that the SO(3) Lie method produced more effective results than the standard and Spearman correlation and covariance matrices, with lower RMSE and MAPE values, greater stability, and better forecast accuracy. The Lie method obtained RMSE = 0.63, MAE = 0.43, and MAPE = 5.04 for daily data; RMSE = 0.78, MAE = 0.56, and MAPE = 70.28 for weekly data; and RMSE = 0.081, MAE = 0.06, and MAPE = 7.39 for monthly data. These findings indicate that the SO(3) framework provides greater robustness, lower errors, and improved forecasting performance, as well as higher sensitivity to nonlinear transitions, compared to standard correlation measures. By embedding the time-dependent correlation matrix into a physics-inspired Lie group framework, the paper highlights the deep structural parallels between financial markets and complex physical systems.
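
The group-valued core of such a method can be sketched as follows: a 3-vector is mapped into the Lie algebra so(3), exponentiated to a rotation, and used to propagate a base covariance that is then renormalized to a correlation matrix. The stochastic drift and diffusion terms of the paper's SDE are not reproduced; the increments w_t are left as inputs:

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix in the Lie algebra so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def correlation_flow(sigma0, omegas):
    """Propagate a 3x3 base covariance through accumulated SO(3) elements
    R_t = R_{t-1} @ expm(hat(w_t)), renormalizing each step to a correlation
    matrix. This is a deterministic sketch of the group-valued update only."""
    R, flow = np.eye(3), []
    for w in omegas:
        R = R @ expm(hat(w))          # exponential map: algebra -> group
        S = R @ sigma0 @ R.T
        d = np.sqrt(np.diag(S))
        flow.append(S / np.outer(d, d))  # covariance -> correlation
    return flow
```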
19 pages, 8646 KB  
Article
Impact of Diagnostic Confidence, Perceived Difficulty, and Clinical Experience in Facial Melanoma Detection: Results from a European Multicentric Teledermoscopic Study
by Alessandra Cartocci, Alessio Luschi, Sofia Lo Conte, Elisa Cinotti, Francesca Farnetani, Aimilios Lallas, John Paoli, Caterina Longo, Elvira Moscarella, Danica Tiodorovic, Ignazio Stanganelli, Mariano Suppa, Emi Dika, Iris Zalaudek, Maria Antonietta Pizzichetta, Jean Luc Perrot, Imma Savarese, Magdalena Żychowska, Giovanni Rubegni, Mario Fruschelli, Ernesto Iadanza, Gabriele Cevenini and Linda Tognetti
Cancers 2025, 17(20), 3388; https://doi.org/10.3390/cancers17203388 - 21 Oct 2025
Abstract
Background: Diagnosing facial melanoma, specifically lentigo maligna (LM) and lentigo maligna melanoma (LMM), is a daily clinical challenge, particularly for small or traumatized lesions. LM and LMM are part of the broader group of atypical pigmented facial lesions (aPFLs), which also includes benign look-alikes such as solar lentigo (SL), atypical nevi (AN), seborrheic keratosis (SK), and seborrheic-lichenoid keratosis (SLK), as well as pigmented actinic keratosis (PAK), a potentially premalignant keratinocytic lesion. Standard dermoscopy with handheld devices is the most widely used diagnostic tool in dermatology, but its accuracy depends heavily on the clinician's experience and the perceived difficulty of the case. As a result, many benign aPFLs are excised for histological analysis, often leading to aesthetic concerns. Reflectance confocal microscopy (RCM) can reduce the need for biopsies, but it is limited to specialized centers and requires skilled operators. Aims: This study aimed to assess the impact of personal skill, diagnostic confidence, and perceived difficulty on diagnostic accuracy and management in the differential dermoscopic diagnosis of aPFLs. Methods: A total of 1197 aPFL dermoscopic images were examined on a teledermoscopic web platform by 155 dermatologists and residents across four skill levels (<1, 1–4, 5–8, >8 years). They were asked to give a diagnosis, to estimate their confidence and rate the difficulty of the case, and to choose a management strategy: "follow-up", "RCM", or "biopsy". Diagnostic accuracy was examined according to personal skill level, confidence level, and difficulty rating in three settings: (I) all seven diagnoses, (II) LM vs. PAK vs. fully benign aPFLs, (III) malignant vs. benign aPFLs. The same analyses were performed for management decisions. Results: Diagnostic confidence had a measurable impact on diagnostic accuracy, both in the multi-class diagnosis of aPFLs (setting I) and in benign vs. malignant (setting III) and benign vs. malignant/premalignant discrimination (setting II). Perceived difficulty influenced the management of benign lesions, with "easy" ratings predominantly matching the "follow-up" decision in benign cases, but not the assignment of malignant lesions to "biopsy". Experience level affected the perception of how many cases were genuinely easy but had little to no impact on average diagnostic accuracy for aPFLs. It did, however, affect management strategy, specifically in terms of error reduction: rates of missed malignant cases were lowest after 8 years of experience, and rates of inappropriate biopsies of benign lesions were lowest after 1 year of experience. Conclusions: The noninvasive diagnosis and management of aPFLs remain a daily challenge. Highlighting which specific subgroups of lesions need attention and second-level examination (RCM) or biopsy can help detect early malignant cases and, in parallel, reduce the rate of unnecessary removal of benign lesions.
(This article belongs to the Special Issue Advances in Skin Cancer: Diagnosis, Treatment and Prognosis)
28 pages, 1103 KB  
Article
An Efficient and Effective Model for Preserving Privacy Data in Location-Based Graphs
by Surapon Riyana and Nattapon Harnsamut
Symmetry 2025, 17(10), 1772; https://doi.org/10.3390/sym17101772 - 21 Oct 2025
Abstract
Location-based services (LBSs), which are used for navigation, tracking, and mapping across digital devices and social platforms, establish a user's position and deliver tailored experiences. Collecting and sharing such trajectory datasets with analysts for business purposes raises critical privacy concerns, as both symmetry in recurring mobility patterns and asymmetry in irregular movements through sensitive locations collectively expose highly identifiable information, resulting in re-identification risks, trajectory disclosure, and location inference. In response, several privacy preservation models have been proposed, including k-anonymity, l-diversity, t-closeness, LKC-privacy, differential privacy, and location-based approaches. However, these models still exhibit privacy issues, including sensitive location inference (e.g., hospitals, pawnshops, prisons, safe houses), disclosure from duplicate trajectories revealing sensitive places, and the re-identification of unique locations such as homes, condominiums, and offices. Efforts to address these issues often lead to utility loss and computational complexity. To overcome these limitations, we propose a new (ξ, ϵ)-privacy model that combines data generalization and suppression with sliding windows and R-Tree structures: sliding windows partition large trajectory graphs into simplified subgraphs, R-Trees provide hierarchical indexing for spatial generalization, and suppression removes highly identifiable locations. The model addresses both symmetry and asymmetry in mobility patterns by balancing generalization and suppression to protect privacy while maintaining data utility. Its symmetry-driven mechanisms enhance resistance to inference attacks and support data confidentiality, integrity, and availability, which are core requirements of cryptography and information security. An experimental evaluation on the City80k and Metro100k datasets confirms that the (ξ, ϵ)-privacy model addresses these privacy issues with reduced utility loss and efficient scalability, and validates its robustness via relative error across query types in diverse analytical scenarios. The findings provide evidence of the model's practicality for large-scale location data, confirming its relevance to secure computation, data protection, and information security applications.
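
A generalize-then-suppress pass over trajectories can be sketched in a few lines: coordinates are coarsened to a grid (generalization), each trajectory is partitioned with a sliding window, and windows containing rare (hence highly identifying) cells are suppressed. The R-Tree index and the formal (ξ, ϵ) guarantees of the paper are not reproduced, and all thresholds are illustrative:

```python
from collections import Counter

def anonymize(trajectories, window=4, min_count=5, cell=0.01):
    """trajectories: lists of (lat, lon) points. Returns released windows with
    rare, identifying cells suppressed."""
    snap = lambda p: (round(p[0] / cell) * cell, round(p[1] / cell) * cell)
    coarse = [[snap(p) for p in t] for t in trajectories]      # grid generalization
    counts = Counter(c for t in coarse for c in t)             # global cell frequencies
    released = []
    for t in coarse:
        windows = [t[i:i + window] for i in range(0, len(t), window)]
        released.append([w for w in windows                    # suppress rare cells
                         if all(counts[c] >= min_count for c in w)])
    return released
```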
16 pages, 325 KB  
Article
Electricity Demand Forecasting and Risk Assessment for Campus Energy Management
by Yon-Hon Tsai and Ming-Tang Tsai
Energies 2025, 18(20), 5521; https://doi.org/10.3390/en18205521 - 20 Oct 2025
Abstract
This paper employs the Grey–Markov Model (GMM) to predict users' electricity demand and introduces the Enhanced Monte Carlo (EMC) method to assess the reliability of the prediction results. The GMM integrates the advantages of the Grey Model (GM) and the Markov Chain to enhance prediction accuracy, while the EMC combines the Monte Carlo simulation with a dual-variable approach to conduct a comprehensive risk assessment. This framework helps decision-makers better understand electricity demand patterns and effectively manage associated risks. A university campus in southern Taiwan is selected as the case study. Historical data of monthly maximum electricity demand, including peak, semi-peak, Saturday semi-peak, and off-peak periods, were collected and organized into a database using Excel. The GMM was applied to predict the monthly maximum electricity demand for the target year, and its prediction results were compared with those obtained from the GM and Grey Differential Equation (GDE) models. The results show that the average Mean Absolute Percentage Error (MAPE) values for the GM, GDE, and GMM are 10.96341%, 9.333164%, and 6.56026%, respectively. Among the three models, the GMM exhibits the lowest average MAPE, indicating superior prediction performance. The proposed GMM demonstrates robust predictive capability and significant practical value, offering a more effective forecasting tool than the GM and GDE models. Furthermore, the EMC method is utilized to evaluate the reliability of the risk assessment. The findings of this study provide decision-makers with a reliable reference for electricity demand forecasting and risk management, thereby supporting more effective contract capacity planning.
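
The GM(1,1) core that the Grey–Markov Model builds on (GMM adds a Markov-chain correction of the residual states, not shown here) is compact enough to sketch directly, together with the MAPE metric used above:

```python
import numpy as np

def gm11_forecast(x, horizon):
    """Classic GM(1,1) grey model: fit on series x, forecast `horizon` steps."""
    x1 = np.cumsum(x)                                   # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z, np.ones(z.size)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]     # developing coeff., grey input
    k = np.arange(x.size + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    return np.diff(x1_hat)[x.size - 1:]                 # inverse accumulation -> forecasts

def mape(actual, predicted):
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)
```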
19 pages, 3205 KB  
Article
Physics-Aware Informer: A Hybrid Framework for Accurate Pavement IRI Prediction in Diverse Climates
by Xintao Cao, Zhiping Zeng and Fan Yi
Infrastructures 2025, 10(10), 278; https://doi.org/10.3390/infrastructures10100278 - 18 Oct 2025
Abstract
Accurate prediction of the International Roughness Index (IRI) is critical for road safety and maintenance decisions. In this study, we propose a novel Physics-Aware Informer (PA-Informer) model that integrates the efficiency of the Informer architecture with physics constraints derived from partial differential equations (PDEs). The model addresses two key challenges: (1) performance degradation in short-sequence scenarios, and (2) the lack of physics constraints in conventional data-driven models. By embedding residual PDEs that link IRI to influencing factors such as temperature, precipitation, and joint displacement, and by introducing a dynamic weighting strategy that balances data-driven and physics-informed losses, the PA-Informer achieves robust and accurate predictions. Experimental results based on data from four climatic regions in China demonstrate its superior performance: the model achieves a Mean Squared Error (MSE) of 0.0165 and an R² of 0.962 with an input window length of 30 weeks, and an MSE of 0.0152 with an input window length of 120 weeks. Its accuracy is superior to that of the other models compared, and its stability under changes in input window length is far better. Sensitivity analysis highlights joint displacement and internal stress as the most influential features, with stable sensitivity coefficients (Sp ≈ 0.89 and Sp ≈ 0.81). These findings validate the PA-Informer as a reliable and scalable tool for predicting pavement performance under diverse conditions, offering significant improvements over other IRI prediction models.
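
The dynamic weighting of data-driven and physics-informed losses can be sketched as a per-step training rule. The scheme below (a warm-up ramp plus magnitude balancing) is an assumption for illustration, not the paper's exact strategy:

```python
import torch

def dynamic_weighted_loss(data_loss, phys_loss, step, warmup=1000):
    """Combine a data-fit loss with a PDE-residual loss. The physics term is
    ramped in during warm-up, then rescaled so neither term dominates by raw
    magnitude (an assumed rule, not the PA-Informer's published one)."""
    ramp = min(1.0, step / warmup)                     # warm-up schedule
    balance = (data_loss.detach() /
               (phys_loss.detach() + 1e-8)).clamp(0.1, 10.0)
    return data_loss + ramp * balance * phys_loss
```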