Search Results (1,258)

Search Parameters:
Keywords = error guarantees

14 pages, 654 KiB  
Article
A Conceptual Framework for User Trust in AI Biosensors: Integrating Cognition, Context, and Contrast
by Andrew Prahl
Sensors 2025, 25(15), 4766; https://doi.org/10.3390/s25154766 - 2 Aug 2025
Abstract
Artificial intelligence (AI) techniques have propelled biomedical sensors beyond measuring physiological markers to interpreting subjective states such as stress, pain, or emotions. Despite these technological advances, user trust is not guaranteed and is inadequately addressed in extant research. This review proposes the Cognition–Context–Contrast (CCC) conceptual framework to explain the trust and acceptance of AI-enabled sensors. First, we map cognition, comprising the expectations and stereotypes that humans hold about machines. Second, we integrate task context by situating sensor applications along an intellective-to-judgmental continuum and showing how demonstrability predicts tolerance for sensor uncertainty and errors. Third, we analyze contrast effects that arise when automated sensing displaces familiar human routines, heightening scrutiny and accelerating rejection if roll-out is abrupt. We then derive practical implications such as enhancing interpretability, tailoring data presentations to task demonstrability, and implementing transitional introduction phases. The framework offers researchers, engineers, and clinicians a structured basis for designing and implementing the next generation of AI biosensors.
(This article belongs to the Special Issue AI in Sensor-Based E-Health, Wearables and Assisted Technologies)

16 pages, 3001 KiB  
Article
Tractor Path Tracking Control Method Based on Prescribed Performance and Sliding Mode Control
by Liwei Zhu, Weiming Sun, Qian Zhang, En Lu, Jialin Xue and Guohui Sha
Agriculture 2025, 15(15), 1663; https://doi.org/10.3390/agriculture15151663 - 1 Aug 2025
Abstract
To address the challenges of low path tracking accuracy and poor robustness during tractor autonomous operation, this paper proposes a path tracking control method for tractors that integrates prescribed performance with sliding mode control (SMC). A key feature of this control method is its inherent immunity to system parameter perturbations and external disturbances, while ensuring path tracking errors are constrained within a predefined range. First, the tractor is simplified into a two-wheeled vehicle model, and a path tracking error model is established based on the reference operation trajectory. By defining a prescribed performance function, the constrained tracking control problem is transformed into an unconstrained stability control problem, guaranteeing the boundedness of tracking errors. Then, by incorporating SMC theory, a prescribed performance sliding mode path tracking controller is designed to achieve robust path tracking and error constraint for the tractor. Finally, both simulation and field experiments are conducted to validate the method. The results demonstrate that, compared with the traditional SMC method, the proposed method effectively mitigates the impact of complex farmland conditions, reducing path tracking errors while enforcing strict error constraints. Field experiment data show the proposed method achieves an average absolute error of 0.02435 m and a standard deviation of 0.02795 m, confirming its effectiveness and superiority. This research lays a foundation for the intelligent development of agricultural machinery.
(This article belongs to the Section Agricultural Technology)
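The prescribed performance idea in the abstract above can be illustrated with a toy sketch: a decaying funnel ρ(t) bounds the tracking error, and an inverse-tanh transform maps the constrained error into an unconstrained variable that a standard controller can stabilize. The funnel parameters (rho0, rho_inf, lam) below are illustrative assumptions, not values from the paper.

```python
import math

def perf_funnel(t, rho0=0.5, rho_inf=0.05, lam=1.0):
    """Exponentially decaying performance bound rho(t): the tracking
    error must satisfy |e(t)| < rho(t) at all times."""
    return (rho0 - rho_inf) * math.exp(-lam * t) + rho_inf

def transform_error(e, rho):
    """Map the constrained error |e| < rho to an unconstrained variable
    via atanh(e / rho); stabilizing the transformed error then enforces
    the original constraint."""
    z = e / rho                                  # must stay in (-1, 1)
    return 0.5 * math.log((1 + z) / (1 - z))     # atanh(z)

# The transformed error blows up as |e| approaches the funnel boundary,
# which is what prevents the constraint from ever being violated.
eps_small = transform_error(0.1, perf_funnel(0.0))
eps_large = transform_error(0.45, perf_funnel(0.0))
```

This is why the abstract can speak of converting a constrained tracking problem into an unconstrained stability problem.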

27 pages, 10397 KiB  
Article
Methods for Measuring and Computing the Reference Temperature in Newton’s Law of Cooling for External Flows
by James Peck, Tom I-P. Shih, K. Mark Bryden and John M. Crane
Energies 2025, 18(15), 4074; https://doi.org/10.3390/en18154074 - 31 Jul 2025
Abstract
Newton’s law of cooling requires a reference temperature (T_ref) to define the heat-transfer coefficient (h). For external flows with multiple temperatures in the freestream, obtaining T_ref is a challenge. One widely used method, referred to as the adiabatic-wall (AW) method, obtains T_ref by requiring the surface of the solid exposed to convective heat transfer to be adiabatic. Another widely used method, referred to as the linear-extrapolation (LE) method, obtains T_ref by measuring/computing the heat flux (q_s) on the solid surface at two different surface temperatures (T_s) and then linearly extrapolating to q_s = 0. A third, recently developed method, referred to as the state-space (SS) method, obtains T_ref by probing the temperature space between the highest and lowest in the flow to account for the effects of T_s or q_s on T_ref. This study examines the foundation and accuracy of these methods via a test problem involving film cooling of a flat plate where q_s switches signs on the plate’s surface. Results obtained show that only the SS method could guarantee a unique and physically meaningful T_ref where T_s = T_ref on a nonadiabatic surface (q_s ≠ 0). The AW and LE methods both assume T_ref to be independent of T_s, which the SS method shows to be incorrect. Though this study also showed the adiabatic-wall temperature, T_AW, to be a good approximation of T_ref (<10% relative error), huge errors can occur in h about the solid surface where |T_s − T_AW| is near zero, because where T_s = T_AW, q_s ≠ 0.
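The LE method described above amounts to fitting a straight line through two (T_s, q_s) measurements and reading off where it crosses q_s = 0. A minimal sketch, assuming the sign convention q_s = h (T_s − T_ref):

```python
def le_reference_temperature(Ts1, qs1, Ts2, qs2):
    """Linear-extrapolation (LE) method: assume the surface heat flux
    varies linearly with surface temperature, qs = h * (Ts - Tref), and
    extrapolate the line through two measurements to qs = 0, where the
    surface temperature equals Tref."""
    h = (qs2 - qs1) / (Ts2 - Ts1)   # slope = heat-transfer coefficient
    Tref = Ts1 - qs1 / h            # intercept with the qs = 0 axis
    return Tref, h

# Synthetic check: data fabricated from h = 50, Tref = 320 (arbitrary units).
Tref, h = le_reference_temperature(300.0, 50.0 * (300.0 - 320.0),
                                   340.0, 50.0 * (340.0 - 320.0))
```

The paper's point is that this linearity in T_s is itself an assumption, which the SS method relaxes.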

10 pages, 1357 KiB  
Article
Design of Balanced Wide Gap No-Hit Zone Sequences with Optimal Auto-Correlation
by Duehee Lee, Seho Lee and Jin-Ho Chung
Mathematics 2025, 13(15), 2454; https://doi.org/10.3390/math13152454 - 30 Jul 2025
Abstract
Frequency-hopping multiple access is widely adopted to blunt narrow-band jamming and limit spectral disclosure in cyber–physical systems, yet its practical resilience depends on three sequence-level properties. First, balancedness guarantees that every carrier is occupied equally often, removing spectral peaks that a jammer or energy detector could exploit. Second, a wide gap between successive hops forces any interferer to re-tune after corrupting at most one symbol, thereby containing error bursts. Third, a no-hit zone (NHZ) window with zero pairwise Hamming correlation eliminates user collisions and self-interference when chip-level timing offsets fall inside the window. This work introduces an algebraic construction that meets the full set of requirements in a single framework. By threading a permutation over an integer ring and partitioning the period into congruent sub-blocks tied to the desired NHZ width, we generate balanced wide gap no-hit zone frequency-hopping (WG-NHZ FH) sequence sets. Analytical proofs show that (i) each sequence achieves the Lempel–Greenberger bound for auto-correlation, (ii) the family and zone sizes satisfy the Ye–Fan bound with equality, (iii) the hop-to-hop distance satisfies a provable WG condition, and (iv) balancedness holds exactly for every carrier frequency.
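The three sequence-level properties can be checked mechanically. The sketch below, using a toy sequence over Z_5 rather than the paper's construction, computes the periodic Hamming correlation, the minimum cyclic hop gap, and balancedness:

```python
def hamming_correlation(x, y, tau):
    """Periodic Hamming cross-correlation of two FH sequences at shift tau:
    the number of positions where the hops coincide (a 'hit')."""
    N = len(x)
    return sum(1 for t in range(N) if x[t] == y[(t + tau) % N])

def min_hop_gap(x, q):
    """Smallest cyclic distance between consecutive hop frequencies over
    the alphabet Z_q (the 'wide gap' figure of merit)."""
    N = len(x)
    return min(min((x[(t + 1) % N] - x[t]) % q, (x[t] - x[(t + 1) % N]) % q)
               for t in range(N))

def is_balanced(x, q):
    """Every carrier frequency occupied equally often."""
    return len({x.count(f) for f in range(q)}) == 1

# Toy sequence over Z_5 (illustrative only, not the paper's construction):
seq = [0, 2, 4, 1, 3]
```

For this toy sequence every hop jumps a cyclic distance of 2 and each carrier appears exactly once, so all three checks pass.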

37 pages, 5345 KiB  
Article
Synthesis of Sources of Common Randomness Based on Keystream Generators with Shared Secret Keys
by Dejan Cizelj, Milan Milosavljević, Jelica Radomirović, Nikola Latinović, Tomislav Unkašević and Miljan Vučetić
Mathematics 2025, 13(15), 2443; https://doi.org/10.3390/math13152443 - 29 Jul 2025
Abstract
Secure autonomous secret key distillation (SKD) systems traditionally depend on external common randomness (CR) sources, which often suffer from instability and limited reliability over long-term operation. In this work, we propose a novel SKD architecture that synthesizes CR by combining the keystream of a shared-key keystream generator (KSG) with locally generated binary Bernoulli noise. This construction emulates the statistical properties of the classical Maurer satellite scenario while enabling deterministic control over key parameters such as bit error rate, entropy, and leakage rate (LR). We derive a closed-form lower bound on the equivocation of the shared secret key K_G from the viewpoint of an adversary with access to public reconciliation data. This allows us to define an admissible operational region in which the system guarantees long-term secrecy through periodic key refreshes, without relying on advantage distillation. We integrate the Winnow protocol as the information reconciliation mechanism, optimized for short block lengths (N = 8), and analyze its performance in terms of efficiency, LR, and final key disagreement rate (KDR). The proposed system operates in two modes: an ideal secrecy mode, achieving secret key rates up to 22% under stringent constraints (KDR < 10^−5, LR < 10^−10), and a perfect secrecy mode, which approximately halves the key rate. Notably, these security guarantees are achieved autonomously, without reliance on advantage distillation or external CR sources. Theoretical findings are further supported by experimental verification demonstrating the practical viability of the proposed system under realistic conditions. This study introduces, for the first time, an autonomous CR-based SKD system with provable security performance independent of communication channels or external randomness, thus enhancing the practical viability of secure key distribution schemes.
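The CR-synthesis step can be emulated in a few lines: two parties XOR the same keystream with independent local Bernoulli(p) noise, yielding correlated sequences that disagree with probability 2p(1 − p), so the bit error rate is set deterministically by p. The parameters below are illustrative, not the paper's.

```python
import random

def synthesize_cr(keystream, p, rng):
    """One party's synthetic common randomness: XOR the shared keystream
    with local Bernoulli(p) noise, emulating a noisy reception of the
    'satellite' broadcast in Maurer's model."""
    return [k ^ (1 if rng.random() < p else 0) for k in keystream]

rng = random.Random(42)
n, p = 100_000, 0.05
keystream = [rng.randrange(2) for _ in range(n)]
alice = synthesize_cr(keystream, p, rng)
bob = synthesize_cr(keystream, p, rng)
# The two sequences disagree only where exactly one party's noise bit is 1,
# so the empirical BER should be close to 2 * p * (1 - p) = 0.095 here.
ber = sum(a != b for a, b in zip(alice, bob)) / n
```

Information reconciliation (Winnow, in the paper) then removes these controlled disagreements at a known leakage cost.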

22 pages, 825 KiB  
Article
Conformal Segmentation in Industrial Surface Defect Detection with Statistical Guarantees
by Cheng Shen and Yuewei Liu
Mathematics 2025, 13(15), 2430; https://doi.org/10.3390/math13152430 - 28 Jul 2025
Abstract
Detection of surface defects can significantly extend mechanical service life and mitigate potential risks during safety management. Traditional defect detection methods predominantly rely on manual inspection, which suffers from low efficiency and high costs. Some machine learning algorithms and artificial intelligence models for defect detection, such as Convolutional Neural Networks (CNNs), deliver outstanding performance, but they are often data-dependent and cannot provide guarantees for new test samples. To this end, we construct a detection model by combining Mask R-CNN, selected for its strong baseline performance in pixel-level segmentation, with Conformal Risk Control. The former estimates the probability distribution that discriminates defects from all samples. The detection model is improved by retraining with calibration data that is assumed to be independent and identically distributed (i.i.d.) with the test data. The latter constructs a prediction set on which a given guarantee for detection will be obtained. First, we define a loss function for each calibration sample to quantify detection error rates. Subsequently, we derive a statistically rigorous threshold by optimizing error rates against a given guarantee significance as the risk level. With this threshold, defective pixels with high probability in test images are extracted to construct prediction sets. This methodology ensures that the expected error rate on the test set remains strictly bounded by the predefined risk level. Furthermore, our model shows robust and efficient control over the expected test set error rate when calibration-to-test partitioning ratios vary.
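The threshold-calibration step described above follows the generic conformal risk control recipe: scan candidate thresholds and keep the least conservative one whose adjusted empirical risk on the calibration set stays below the target risk level. A minimal sketch, in which the loss functions and threshold grid are placeholders rather than the paper's defect losses:

```python
def crc_threshold(loss_fns, alpha, B=1.0, grid=None):
    """Conformal risk control: return the largest threshold lam whose
    adjusted empirical risk on the calibration set stays below alpha.
    loss_fns: one callable per calibration sample, loss_i(lam) in [0, B],
    nondecreasing in lam (a larger lam means a smaller prediction set,
    hence more missed defect pixels)."""
    n = len(loss_fns)
    if grid is None:
        grid = [i / 100 for i in range(101)]
    best = 0.0
    for lam in grid:
        risk = sum(f(lam) for f in loss_fns) / n   # empirical risk at lam
        if (n * risk + B) / (n + 1) <= alpha:      # CRC finite-sample bound
            best = max(best, lam)
    return best

# Toy calibration set whose loss grows linearly with the threshold:
lam_hat = crc_threshold([lambda lam: lam] * 100, alpha=0.1)
```

The `(n * risk + B) / (n + 1)` correction is what upgrades an empirical average into a guarantee on the expected test risk.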

19 pages, 3658 KiB  
Article
Optimal Design of Linear Quadratic Regulator for Vehicle Suspension System Based on Bacterial Memetic Algorithm
by Bala Abdullahi Magaji, Aminu Babangida, Abdullahi Bala Kunya and Péter Tamás Szemes
Mathematics 2025, 13(15), 2418; https://doi.org/10.3390/math13152418 - 27 Jul 2025
Abstract
The automotive suspension must perform competently to support comfort and safety when driving. Traditionally, car suspension control tuning is performed through trial and error or with classical techniques that cannot guarantee optimal performance under varying road conditions. This study designs a Linear Quadratic Regulator tuned by a Bacterial Memetic Algorithm (LQR-BMA) for automobile suspension systems. BMA combines the bacterial foraging optimization algorithm (BFOA) and the memetic algorithm (MA) to enhance the effectiveness of its search process. An LQR control system adjusts the suspension’s behavior by determining the optimal feedback gains using BMA. The control objective is to significantly reduce the random vibration and oscillation of both the vehicle and the suspension system while driving, thereby making the ride smoother and enhancing road handling. The BMA adopts control parameters that support biological attraction, reproduction, and elimination-dispersal processes to accelerate the search and enhance the program’s stability. The algorithm explores several regions of the search space and iteratively improves candidate solutions to determine the optimal control gains. MATLAB 2024b software is used to run simulations with a randomly generated road profile whose power spectral density (PSD) is obtained using the Fast Fourier Transform (FFT) method. The results of the LQR-BMA are compared with those of LQR optimized by a genetic algorithm (LQR-GA) and by a Virus Evolutionary Genetic Algorithm (LQR-VEGA) to substantiate the potency of the proposed model. The outcomes reveal that the LQR-BMA achieves efficient and highly stable control performance compared to the LQR-GA and LQR-VEGA methods. The BMA-optimized model achieves reductions of 77.78%, 60.96%, 70.37%, and 73.81% in the sprung mass displacement, unsprung mass displacement, sprung mass velocity, and unsprung mass velocity responses, respectively, compared to the GA-optimized model. Moreover, the BMA-optimized model achieves −59.57%, 38.76%, 94.67%, and 95.49% reductions in the same responses, respectively, compared to the VEGA-optimized model.
(This article belongs to the Special Issue Advanced Control Systems and Engineering Cybernetics)
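Independent of how the weights are tuned, the LQR gain itself follows from a Riccati equation. The sketch below builds a standard quarter-car model with illustrative parameters (not the paper's), discretizes it, and solves the discrete Riccati equation by fixed-point iteration; a metaheuristic such as BMA would wrap this computation in a search over the Q and R weights.

```python
import numpy as np

# Quarter-car parameters (illustrative values, not the paper's):
ms, mu, ks, ku, cs = 300.0, 40.0, 15000.0, 150000.0, 1000.0

# States: [sprung disp, sprung vel, unsprung disp, unsprung vel];
# input: active actuator force between the two masses.
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [-ks / ms, -cs / ms, ks / ms, cs / ms],
    [0.0, 0.0, 0.0, 1.0],
    [ks / mu, cs / mu, -(ks + ku) / mu, -cs / mu],
])
B = np.array([[0.0], [1.0 / ms], [0.0], [-1.0 / mu]])

# Euler discretization, then iterate the discrete Riccati recursion.
dt = 0.001
Ad, Bd = np.eye(4) + dt * A, dt * B
Q, R = np.diag([1e4, 1e2, 1e3, 1e1]), np.array([[1e-4]])  # tunable weights

P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    P = Q + Ad.T @ P @ (Ad - Bd @ K)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)  # feedback gain u = -K x
```

BMA, GA, and VEGA differ only in how they propose the Q and R entries; the Riccati solve above scores each candidate.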

17 pages, 3368 KiB  
Article
A Heave Motion Prediction Approach Based on Sparse Bayesian Learning Incorporated with Empirical Mode Decomposition for an Underwater Towed System
by Zhu-Fei Lu, Heng-Chang Yan and Jin-Bang Xu
J. Mar. Sci. Eng. 2025, 13(8), 1427; https://doi.org/10.3390/jmse13081427 - 27 Jul 2025
Abstract
Underwater towed systems (UTSs) are widely used in underwater exploration and oceanographic data acquisition. However, the heave motion information of the towing ship is usually affected by measurement transmission delay, sensor noise, and surface waves, which results in uncontrolled depth variation of the towed vehicle and adversely affects the monitoring performance and mechanical robustness of the UTS. To resolve this problem, a heave motion prediction approach based on sparse Bayesian learning (SBL) incorporated with empirical mode decomposition (EMD) is proposed for the UTS in this paper. With the proposed approach, a heave motion model of the towing ship in random waves is first developed based on strip theory. Meanwhile, EMD is employed to eliminate the high-frequency noise of the measurement data and restore the low-frequency towing ship motion. Then, SBL is utilized to train the weight parameters of the built model to predict the heave motion, which not only reconstructs the heave motion from non-stationary, noisy sensor signals but also prevents overfitting. Furthermore, depth compensation of the towed vehicle is performed using the predicted heave motion. Finally, experimental results demonstrate that the proposed EMD-SBL method significantly improves both prediction accuracy and model adaptability under various sea conditions, and it guarantees that the maximum prediction depth error of the heave motion does not exceed 1 cm.
(This article belongs to the Section Ocean Engineering)
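The SBL training step can be sketched with the classic ARD re-estimation loop: each basis weight gets its own prior precision, and precisions of irrelevant bases grow until those weights are pruned, which is what guards against overfitting. The autoregressive features and the clean sinusoidal "heave" signal below are illustrative assumptions, not the paper's strip-theory model.

```python
import numpy as np

def sbl_fit(Phi, y, n_iter=100, beta=100.0):
    """Minimal sparse Bayesian learning (ARD) sketch: per-weight prior
    precisions alpha_j are re-estimated by the MacKay update; precisions
    of irrelevant basis functions grow large and prune the weights.
    The noise precision beta is held fixed here for simplicity."""
    m = Phi.shape[1]
    alpha, mu = np.ones(m), np.zeros(m)
    for _ in range(n_iter):
        S = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior cov
        mu = beta * S @ Phi.T @ y                               # posterior mean
        gamma = 1.0 - alpha * np.diag(S)    # effective degrees of freedom
        alpha = np.clip(gamma / (mu**2 + 1e-12), 1e-6, 1e6)
    return mu

# Toy demo: one-step-ahead prediction of a clean sinusoidal "heave" signal
# from three lagged samples (illustrative, not ship-motion data).
t = np.arange(200) * 0.1
h = np.sin(0.8 * t)
Phi = np.column_stack([h[0:-3], h[1:-2], h[2:-1]])  # lags t-3, t-2, t-1
y = h[3:]
w = sbl_fit(Phi, y)
max_err = np.max(np.abs(Phi @ w - y))
```

A sinusoid obeys an exact two-term linear recurrence, so the ARD prior can prune the redundant lag while still fitting the signal.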

26 pages, 3625 KiB  
Article
Deep-CNN-Based Layout-to-SEM Image Reconstruction with Conformal Uncertainty Calibration for Nanoimprint Lithography in Semiconductor Manufacturing
by Jean Chien and Eric Lee
Electronics 2025, 14(15), 2973; https://doi.org/10.3390/electronics14152973 - 25 Jul 2025
Abstract
Nanoimprint lithography (NIL) has emerged as a promising low-cost technique for sub-10 nm patterning; yet robust process control remains difficult because physics-based simulators are time-consuming and labeled SEM data are scarce. We propose a data-efficient, two-stage deep-learning framework that directly reconstructs post-imprint SEM images from binary design layouts and simultaneously delivers calibrated pixel-by-pixel uncertainty. First, a shallow U-Net is trained with conformalized quantile regression (CQR) to output 90% prediction intervals with statistically guaranteed coverage. Per-level errors on a small calibration dataset then drive an outlier-weighted, encoder-frozen transfer fine-tuning phase that refines only the decoder, with its capacity explicitly focused on regions of spatial uncertainty. On independent test layouts, the fine-tuned model reduces the mean absolute error (MAE) from 0.0365 to 0.0255 and raises coverage from 0.904 to 0.926, while cutting labeled data and GPU time by 80% and 72%, respectively. The resulting uncertainty maps highlight spatial regions associated with error hotspots and support defect-aware optical proximity correction (OPC) with fewer guard-band iterations. Beyond OPC, the model-agnostic and modular design of the pipeline allows flexible integration into other critical stages of the semiconductor manufacturing workflow, such as imprinting, etching, and inspection, where such predictions are critical for achieving higher precision, efficiency, and overall process robustness, which is the ultimate motivation of this study.
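The CQR calibration above has a simple generic core: compute a conformity score measuring how far each calibration target falls outside the raw quantile band, then widen both edges by the conformal quantile of those scores. A sketch with a deliberately too-narrow band on synthetic data (the Gaussian toy data is an assumption for illustration):

```python
import numpy as np

def cqr_adjust(lo_cal, hi_cal, y_cal, alpha=0.1):
    """Conformalized quantile regression: the score is how far a target
    falls outside the raw band [lo, hi] (negative if inside); widening
    both edges by the conformal quantile q guarantees >= 1 - alpha
    coverage on exchangeable test points."""
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = len(y_cal)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)   # conformal rank
    return np.sort(scores)[k - 1]

# Toy check: a fixed band [-1, 1] under-covers N(0, 1) data (~68%), so the
# calibrated adjustment q should widen it to roughly [-1.645, 1.645].
rng = np.random.default_rng(0)
y_cal = rng.normal(0.0, 1.0, 1000)
q = cqr_adjust(np.full(1000, -1.0), np.full(1000, 1.0), y_cal, alpha=0.1)
y_test = rng.normal(0.0, 1.0, 20000)
coverage = np.mean((y_test >= -1.0 - q) & (y_test <= 1.0 + q))
```

In the paper the band edges come from the U-Net's quantile heads per pixel; the calibration arithmetic is the same.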

24 pages, 988 KiB  
Article
Consistency-Oriented SLAM Approach: Theoretical Proof and Numerical Validation
by Zhan Wang, Alain Lambert, Yuwei Meng, Rongdong Yu, Jin Wang and Wei Wang
Electronics 2025, 14(15), 2966; https://doi.org/10.3390/electronics14152966 - 24 Jul 2025
Abstract
Simultaneous Localization and Mapping (SLAM) has long been a fundamental and challenging task in robotics, where safety and reliability are critical for successful autonomous applications. Classically, the SLAM problem is tackled via probabilistic or optimization methods (such as EKF-SLAM, FastSLAM, and Graph-SLAM). Despite their strong performance in real-world scenarios, these methods may exhibit inconsistency caused by model linearization or the Gaussian noise assumption. In this paper, we propose an alternative monocular SLAM algorithm based on interval analysis (iMonoSLAM) that pursues guaranteed rather than probabilistically defined solutions. We consistently model and initialize the SLAM problem with a bounded-error parametric model. The state estimation process is then cast into an Interval Constraint Satisfaction Problem (ICSP) and resolved through interval constraint propagation, without any linearization or Gaussian noise assumption. Furthermore, we theoretically prove the consistency of the obtained solutions and propose a versatile method for numerical validation. To the best of our knowledge, this is the first time such a proof has been proposed. Extensive numerical experiments validate the consistency, and a preliminary comparison with classical EKF-SLAM under different noise conditions is also presented. The proposed iMonoSLAM shows outstanding performance in obtaining reliable solutions, highlighting its application prospects in safety-critical scenarios for mobile robots.
(This article belongs to the Special Issue Simultaneous Localization and Mapping (SLAM) of Mobile Robots)
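Interval constraint propagation, the machinery behind iMonoSLAM, can be illustrated with a single forward-backward contractor for the constraint x + y = z; the SLAM constraints (projection, motion) are richer, but each is contracted the same way until a fixed point is reached.

```python
def contract_sum(x, y, z):
    """Forward-backward contractor for x + y = z, with each variable an
    interval (lo, hi). Returns tightened intervals that still contain
    every consistent solution; an empty interval would reveal an
    inconsistent constraint system."""
    xl, xu = x; yl, yu = y; zl, zu = z
    # Forward: z must lie inside x + y.
    zl, zu = max(zl, xl + yl), min(zu, xu + yu)
    # Backward: x must lie inside z - y, then y inside z - x.
    xl, xu = max(xl, zl - yu), min(xu, zu - yl)
    yl, yu = max(yl, zl - xu), min(yu, zu - xl)
    return (xl, xu), (yl, yu), (zl, zu)

# Example: x and y start in [0, 10], but z is measured to lie in [15, 25];
# propagation tightens all three domains without ever discarding a solution.
x, y, z = contract_sum((0, 10), (0, 10), (15, 25))
```

Because contraction never discards a feasible point, the enclosure property (consistency) is preserved by construction, which is the root of the paper's guarantee.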

14 pages, 1346 KiB  
Article
Composite Continuous High-Order Nonsingular Terminal Sliding Mode Control for Flying Wing UAVs with Disturbances and Actuator Faults
by Hao Wang and Zhenhua Zhao
Mathematics 2025, 13(15), 2375; https://doi.org/10.3390/math13152375 - 24 Jul 2025
Abstract
Flying wing UAVs are widely used in both civil and military areas, and their unique aerodynamic configuration makes them vulnerable to multi-source disturbances and actuator faults. This paper proposes composite continuous high-order nonsingular terminal sliding mode controllers for the longitudinal command tracking control of flying wing UAVs. The proposed method guarantees not only the finite-time convergence of command tracking errors but also the continuity of control actions. Simulation results validate the effectiveness of the proposed method.
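As a rough illustration of the sliding-mode robustness mechanism the abstract relies on, the sketch below simulates a plain first-order SMC with smoothed switching on a double integrator under a matched disturbance. This is not the paper's continuous high-order nonsingular terminal design, and all gains are assumed.

```python
import math

# Plant: e'' = u + d, with unknown matched disturbance d(t).
lam, k, phi, dt = 2.0, 5.0, 0.05, 1e-3   # surface slope, gain, boundary layer
e, edot, t = 1.0, 0.0, 0.0               # start with a large tracking error
for _ in range(5000):                    # 5 s of simulated time
    d = 0.5 * math.sin(t)                # disturbance, |d| <= 0.5 < k
    s = edot + lam * e                   # sliding surface
    u = -lam * edot - k * math.tanh(s / phi)  # smoothed switching law
    edot += (u + d) * dt
    e += edot * dt
    t += dt
```

Because the switching gain dominates the disturbance bound, the state is driven to a thin boundary layer around s = 0 and the error decays despite d being unknown; the terminal-sliding-mode refinement in the paper sharpens this to finite-time convergence while keeping u continuous.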

17 pages, 3321 KiB  
Article
Multi-Objective Automated Machine Learning for Inversion of Mesoscopic Parameters in Discrete Element Contact Models
by Xu Ao, Shengpeng Hao, Yuyu Zhang and Wenyu Xu
Appl. Sci. 2025, 15(15), 8181; https://doi.org/10.3390/app15158181 - 23 Jul 2025
Abstract
Accurate calibration of mesoscopic contact model parameters is essential for ensuring the reliability of Particle Flow Code in Three Dimensions (PFC3D) simulations in geotechnical engineering. Trial-and-error approaches are often used to determine the parameters of the contact model, but they are time-consuming and labor-intensive, and they offer no guarantee of parameter validity or simulation credibility. Although conventional machine learning techniques have been applied to invert contact model parameters, they are hampered by the difficulty of selecting optimal hyperparameters and, in some cases, by insufficient data, which limits both predictive accuracy and robustness. In this study, a total of 361 PFC3D uniaxial compression simulations using a linear parallel bond model with varied mesoscopic parameters were generated to capture a wide range of rock and geotechnical material behaviors. From each stress–strain curve, eight characteristic points were extracted as inputs to a multi-objective Automated Machine Learning (AutoML) model designed to invert three key mesoscopic parameters: the elastic modulus (E), stiffness ratio (ks/kn), and degraded elastic modulus (Ed). The developed AutoML model, comprising two hidden layers of 256 and 32 neurons with the ReLU activation function, achieved coefficients of determination (R2) of 0.992, 0.710, and 0.521 for E, ks/kn, and Ed, respectively, demonstrating acceptable predictive accuracy and generalizability. The multi-objective AutoML model was also applied to invert the parameters from three independent uniaxial compression tests on rock-like materials to validate its practical performance. The close match between the experimental and numerically simulated stress–strain curves confirmed the model’s reliability for mesoscopic parameter inversion in PFC3D.
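The feature-extraction step can be sketched as follows. Since the abstract does not list which eight characteristic points are used, the sketch simply takes the peak of a synthetic stress–strain curve plus evenly spaced samples; the curve itself is fabricated for illustration.

```python
import numpy as np

def characteristic_points(strain, stress, n_points=8):
    """Extract simple characteristic points from a stress-strain curve as
    features for parameter inversion: the peak (strength) plus evenly
    spaced samples along the curve. The paper's exact eight points are
    not specified here; this is a placeholder choice."""
    i_peak = int(np.argmax(stress))
    peak = [(strain[i_peak], stress[i_peak])]
    idx = np.linspace(0, len(strain) - 1, n_points - 1).astype(int)
    rest = [(strain[i], stress[i]) for i in idx]
    return np.array(peak + rest)            # shape (n_points, 2)

# Synthetic curve: linear loading to a peak at 0.6% strain, then softening.
strain = np.linspace(0.0, 0.01, 101)
stress = np.where(strain <= 0.006, 5000.0 * strain,
                  30.0 - 4000.0 * (strain - 0.006))
pts = characteristic_points(strain, stress)
```

A fixed-length feature vector like `pts` is what lets a small network map curves to the three target parameters.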

17 pages, 382 KiB  
Review
Physics-Informed Neural Networks: A Review of Methodological Evolution, Theoretical Foundations, and Interdisciplinary Frontiers Toward Next-Generation Scientific Computing
by Zhiyuan Ren, Shijie Zhou, Dong Liu and Qihe Liu
Appl. Sci. 2025, 15(14), 8092; https://doi.org/10.3390/app15148092 - 21 Jul 2025
Abstract
Physics-informed neural networks (PINNs) have emerged as a transformative methodology integrating deep learning with scientific computing. This review establishes a three-dimensional analytical framework to systematically decode the development of PINNs through methodological innovation, theoretical breakthroughs, and cross-disciplinary convergence. The contributions are threefold. First, we identify the co-evolutionary path of algorithmic architectures, from adaptive optimization (neural tangent kernel-guided weighting achieving 230% convergence acceleration in Navier–Stokes solutions) to hybrid numerical–deep learning integration (5× speedup via domain decomposition). Second, we construct bidirectional theory-application mappings in which convergence analysis (operator approximation theory) and generalization guarantees (Bayesian-physical hybrid frameworks) directly inform engineering implementations, as validated by a 72% cost reduction compared to FEM in high-dimensional spaces (p < 0.01, n = 15 benchmarks). Third, we pioneer cross-domain knowledge transfer through application-specific architectures: TFE-PINN for turbulent flows (5.12 ± 0.87% error in NASA hypersonic tests), ReconPINN for medical imaging (SSIM = +0.18 ± 0.04 on multi-institutional MRI), and SeisPINN for seismic systems (0.52 ± 0.18 km localization accuracy). We further present a technological roadmap highlighting three critical directions for PINN 2.0: neuro-symbolic integration, federated physics learning, and quantum-accelerated optimization. This work provides methodological guidelines and theoretical foundations for next-generation scientific machine learning systems.
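The core PINN idea, minimizing a physics residual plus boundary terms instead of a data loss, can be shown in its simplest linear form: fit polynomial coefficients so that u'' + π² sin(πx) ≈ 0 with u(0) = u(1) = 0, whose exact solution is u = sin(πx). A real PINN replaces the polynomial basis with a neural network and automatic differentiation; the degree, grid, and boundary weight below are assumptions.

```python
import numpy as np

N = 12                                   # polynomial degree (assumed)
xs = np.linspace(0.0, 1.0, 50)           # collocation points
# Rows of the "physics" system: u''(x) contributed by each basis term x^k.
A_pde = np.array([[k * (k - 1) * x**(k - 2) if k >= 2 else 0.0
                   for k in range(N + 1)] for x in xs])
b_pde = -np.pi**2 * np.sin(np.pi * xs)   # target: u'' = -pi^2 sin(pi x)
w = 100.0                                # boundary-condition weight (assumed)
A_bc = w * np.array([[x**k for k in range(N + 1)] for x in (0.0, 1.0)])
b_bc = np.zeros(2)
# Least squares over physics residual + weighted boundary residual:
A_full = np.vstack([A_pde, A_bc])
b_full = np.concatenate([b_pde, b_bc])
c, *_ = np.linalg.lstsq(A_full, b_full, rcond=None)
u = lambda x: sum(c[k] * x**k for k in range(N + 1))
u_mid, u_left = u(0.5), u(0.0)
```

The stacked system is exactly the PINN loss (PDE residual at collocation points plus boundary penalty) restricted to a linear model, which makes it solvable in closed form.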

34 pages, 3579 KiB  
Review
A Comprehensive Review of Mathematical Error Characterization and Mitigation Strategies in Terrestrial Laser Scanning
by Mansoor Sabzali and Lloyd Pilgrim
Remote Sens. 2025, 17(14), 2528; https://doi.org/10.3390/rs17142528 - 20 Jul 2025
Abstract
In recent years, there has been an increasing transition from 1D point-based to 3D point-cloud-based data acquisition for monitoring applications and deformation analysis tasks. Previously, many studies relied on point-to-point measurements using total stations to assess structural deformation. The introduction of terrestrial laser scanning (TLS), however, has ushered in a new era of data capture with a high level of efficiency and flexibility in data collection and post-processing. A robust understanding of both data acquisition and processing techniques is therefore required to guarantee high-quality deliverables and to geometrically separate measurement uncertainty from true movements. TLS excels at capturing detailed 3D point coordinates of a scene at either short or long range. Although various studies have examined scanner misalignments under controlled conditions at short observation ranges (scanner calibration), there remains a knowledge gap in understanding and characterizing errors related to long-range scanning (scanning calibration). Furthermore, the limited information on manufacturer-oriented calibration tests motivates the design of user-oriented calibration tests. This research investigates four primary sources of error in the generic error model of TLS, categorized into four groups: instrumental imperfections of the scanner itself, atmospheric effects on the laser beam, scanning geometry concerning the setup and varying incidence angles, and object and surface characteristics affecting overall data accuracy. This study presents previous findings on TLS calibration relevant to these four error sources and their mitigation strategies, and identifies current challenges that can serve as potential research directions.

25 pages, 1507 KiB  
Article
DARN: Distributed Adaptive Regularized Optimization with Consensus for Non-Convex Non-Smooth Composite Problems
by Cunlin Li and Yinpu Ma
Symmetry 2025, 17(7), 1159; https://doi.org/10.3390/sym17071159 - 20 Jul 2025
Abstract
This paper proposes a Distributed Adaptive Regularization Algorithm (DARN) for solving composite non-convex and non-smooth optimization problems in multi-agent systems. The algorithm employs a three-phase iterative framework to achieve efficient collaborative optimization: (1) a local regularized optimization step, which utilizes proximal mappings to enforce strong convexity of weakly convex objectives and ensure subproblem well-posedness; (2) a consensus update based on doubly stochastic matrices, guaranteeing asymptotic convergence of agent states to a global consensus point; and (3) an adaptive regularization mechanism that dynamically adjusts regularization strength using local function value variations to balance stability and convergence speed. Theoretical analysis demonstrates that the algorithm maintains strict monotonic descent under non-convex and non-smooth conditions by constructing a mixed time-scale Lyapunov function, achieving a sublinear convergence rate. Notably, we prove that the projection-based update rule for regularization parameters preserves lower-bound constraints, while spectral decay properties of consensus errors and perturbations from local updates are globally governed by the Lyapunov function. Numerical experiments validate the algorithm’s superiority in sparse principal component analysis and robust matrix completion tasks, showing a 6.6% improvement in convergence speed and a 51.7% reduction in consensus error compared to fixed-regularization methods. This work provides theoretical guarantees and an efficient framework for distributed non-convex optimization in heterogeneous networks.
(This article belongs to the Section Mathematics)
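The consensus-plus-local-proximal structure can be sketched on a convex stand-in, distributed L1-regularized least squares, rather than the paper's non-convex setting; the mixing matrix, step size, regularization weight, and synthetic data below are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of the L1 norm (the non-smooth part)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n_agents, dim = 4, 5
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])       # sparse ground truth
A = [rng.normal(size=(20, dim)) for _ in range(n_agents)]  # local data
b = [Ai @ x_true for Ai in A]
W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic mixing

x = np.zeros((n_agents, dim))
step, reg = 0.02, 0.01
for _ in range(1000):
    x = W @ x                                       # consensus averaging
    for i in range(n_agents):                       # local prox-gradient step
        grad = A[i].T @ (A[i] @ x[i] - b[i]) / 20.0
        x[i] = soft_threshold(x[i] - step * grad, step * reg)
consensus_err = np.max(np.abs(x - x.mean(axis=0)))
sol_err = np.max(np.abs(x.mean(axis=0) - x_true))
```

DARN's adaptive mechanism would additionally adjust the regularization strength per agent from local function-value variations; the fixed `reg` above corresponds to the fixed-regularization baseline the paper compares against.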
