Search Results (16)

Search Parameters:
Keywords = sharp feature recovery

33 pages, 947 KB  
Article
Global Dynamics for a Distributed Delay SVEIR Model for Measles Transmission with Imperfect Vaccination: A Threshold Analysis
by Mohammed H. Alharbi and Ali Rashash Alzahrani
Mathematics 2026, 14(7), 1219; https://doi.org/10.3390/math14071219 - 5 Apr 2026
Viewed by 324
Abstract
Measles remains a significant public health threat despite widespread vaccination, with recent resurgences driven by vaccine hesitancy and coverage gaps. Existing mathematical models often fail to capture the substantial temporal heterogeneity in incubation periods, vaccine-induced protection, and recovery processes that characterize measles transmission. We develop and analyze an SVEIR epidemic model incorporating four independent distributed time delays with exponential survival factors, capturing the realistic variability in these epidemiological processes. The model features compartment-specific mortality rates, disease-induced mortality, and imperfect vaccination with failure probability θ. Using next-generation matrix methods adapted for delay kernels, we derive the delay-dependent reproduction number R0^d and prove, via systematic construction of Volterra-type Lyapunov functionals, that it constitutes a sharp threshold: the disease-free equilibrium is globally asymptotically stable when R0^d ≤ 1, while a unique endemic equilibrium emerges and is globally stable when R0^d > 1. Normalized forward sensitivity analysis reveals that the transmission rate β and recruitment rate Λ exhibit maximal positive elasticity, while the vaccination rate p, vaccine failure probability θ, and incubation delay τ3 possess the largest negative elasticities. Critically, τ3 exerts exponential influence via e^(−n3τ3), making interventions that delay infectiousness—such as post-exposure prophylaxis—unusually potent. We derive an explicit expression for the critical delay τ3^cr at which R0^d = 1, demonstrating that prolonging the effective incubation period sufficiently can shift the system from endemic persistence to extinction. Numerical simulations using Dirac delta kernels confirm all theoretical predictions.
These findings provide three actionable insights for public health: (1) maintaining high vaccination coverage among new birth cohorts remains paramount; (2) improving vaccine quality (reducing θ) yields substantial returns; and (3) the incubation delay represents a quantifiable, measurable target for evaluating the population-level impact of time-sensitive interventions. The framework is broadly applicable to infectious diseases characterized by significant temporal heterogeneity. Full article
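For readers unfamiliar with the elasticity language used in this abstract, the normalized forward sensitivity index is the standard construction; for a generic parameter q it reads (a textbook definition, not quoted from the paper itself):

```latex
\Upsilon^{\mathcal{R}_0^d}_{q}
  \;=\;
  \frac{\partial \mathcal{R}_0^d}{\partial q}\cdot\frac{q}{\mathcal{R}_0^d}
```

An elasticity of +1 means a 10% increase in q produces roughly a 10% increase in the reproduction number; the negative elasticities reported for the vaccination rate, vaccine failure probability, and incubation delay mean that increasing those parameters lowers it.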
(This article belongs to the Special Issue Advances in Epidemiological and Biological Systems Modeling)

24 pages, 6704 KB  
Article
Strong Longitudinal and Latitudinal Differences of Ionospheric Responses in North American and European Sectors During the 10–11 October 2024 Geomagnetic Storm
by Xinyue Luo, Ercha Aa, Xin Wang and Bingxian Luo
Remote Sens. 2026, 18(2), 256; https://doi.org/10.3390/rs18020256 - 13 Jan 2026
Cited by 2 | Viewed by 613
Abstract
This study examines the spatiotemporal evolution of midlatitude ionospheric disturbances during the intense geomagnetic storm on 10–11 October 2024, focusing on the North American and European sectors. It utilizes multi-instrument datasets from ground-based observations, including Global Navigation Satellite System (GNSS) receivers and ionosondes, supplemented by measurements from the Swarm, DMSP, and GUVI/TIMED satellites. The results reveal significant longitudinal and latitudinal variations in regional ionospheric responses, specifically related to Storm Enhanced Density (SED) and the midlatitude trough. Key findings include: (a) During the main phase of the storm, the North American midlatitude ionosphere exhibited a pronounced longitudinal contrast: a positive SED-driven phase in the west versus a negative trough-dominated phase in the east. In the early recovery phase, the western sector transitioned to a trough-induced negative phase, while the eastern sector showed a positive phase related to auroral particle precipitation during substorms. (b) The North American SED featured a strong northwest-extending plume with a westward shift velocity of 200–300 m/s at 45°N, and a sharp density gradient of 60–65 TECU on its northeastern side, in contrast to the trough. (c) The European sector displayed a “sandwich-like” latitudinal pattern, with “positive–negative–positive” variations during the storm. (d) The European sector’s storm-time trough expanded rapidly equatorward, reaching a minimum of ~35° magnetic latitude (MLAT), while broadening latitudinally to a width of 18–20°. These density gradient structures, along with the longitudinal/latitudinal differences, highlight the dynamic processes occurring in the magnetosphere–ionosphere–thermosphere system during intense storms and contribute to the understanding of storm-response mechanisms across different sectors. Full article

19 pages, 2140 KB  
Article
AI-Driven Adaptive Segmentation of Timed Up and Go Test Phases Using a Smartphone
by Muntazir Rashid, Arshad Sher, Federico Villagra Povina and Otar Akanyeti
Electronics 2025, 14(23), 4650; https://doi.org/10.3390/electronics14234650 - 26 Nov 2025
Viewed by 998
Abstract
The Timed Up and Go (TUG) test is a widely used clinical tool for assessing mobility and fall risk in older adults and individuals with neurological or musculoskeletal conditions. While it provides a quick measure of functional independence, traditional stopwatch-based timing offers only a single completion time and fails to reveal which movement phases contribute to impairment. This study presents a smartphone-based system that automatically segments the TUG test into distinct phases, delivering objective and low-cost biomarkers of lower-limb performance. This approach enables clinicians to identify phase-specific impairments in populations such as individuals with Parkinson’s disease and older adults, supporting precise diagnosis, personalized rehabilitation, and continuous monitoring of mobility decline and neuroplastic recovery. Our method combines adaptive preprocessing of accelerometer and gyroscope signals with supervised learning models (Random Forest, Support Vector Machine (SVM), and XGBoost) using statistical features to achieve continuous phase detection and maintain robustness against slow or irregular gait, accommodating individual variability. A threshold-based turn detection strategy captures both sharp and gradual rotations. Validation against video ground truth using group K-fold cross-validation demonstrated strong and consistent performance: start and end points were detected in 100% of trials. The mean absolute error for total time was 0.42 s (95% CI: 0.36–0.48 s). The average error across phases (stand, walk, turn) was less than 0.35 s, and macro F1 scores exceeded 0.85 for all models, with the SVM achieving the highest score of 0.882. Combining accelerometer and gyroscope features improved macro F1 by up to 12%. Statistical tests (McNemar, Bowker) confirmed significant differences between models, and calibration metrics indicated reliable probabilistic outputs (ROC-AUC > 0.96, Brier score < 0.08).
These findings show that a single smartphone can deliver accurate, interpretable, and phase-aware TUG analysis without complex multi-sensor setups, enabling practical and scalable mobility assessment for clinical use. Full article
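The "threshold-based turn detection strategy" is described only at a high level; a minimal sketch of how such a detector might look on a yaw-rate channel follows. The signal name, threshold, and durations here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_turns(gyro_z, fs=100.0, threshold=0.5, min_duration=0.5):
    """Return (start, end) sample indices of candidate turns: runs where
    the yaw rate |gyro_z| (rad/s) stays above `threshold` for at least
    `min_duration` seconds."""
    active = np.abs(np.asarray(gyro_z)) > threshold
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:                       # run already active at sample 0
        starts = np.r_[0, starts]
    if active[-1]:                      # run still active at the end
        ends = np.r_[ends, active.size]
    min_samples = int(min_duration * fs)
    return [(int(s), int(e)) for s, e in zip(starts, ends)
            if e - s >= min_samples]
```

The minimum-duration filter is what lets such a rule keep both sharp and gradual rotations while rejecting brief jitter; the paper's actual decision rule may differ.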

16 pages, 2193 KB  
Article
Microscopic Mechanism of Moisture Affecting Methane Adsorption and Desorption in Coal by Low-Field NMR Relaxation
by Qi Li, Lingyun Zhang, Jiaqing Cui, Guorui Feng, Zhiwei Zhai and Zhen Li
Processes 2025, 13(10), 3113; https://doi.org/10.3390/pr13103113 - 28 Sep 2025
Viewed by 862
Abstract
Moisture in coal seams significantly impacts methane adsorption/desorption, yet its microscopic mechanism in intact coal remains poorly characterized due to methodological limitations. This study introduces a novel approach that integrates low-field nuclear magnetic resonance (LF-NMR) with volumetric analysis to quantify, in real-time, the effect of moisture on methane dynamics in intact coal samples. The results quantitatively demonstrate that micropores (relative specific surface area > 700 m2/cm3) are the primary adsorption sites, accounting for over 95% of the stored gas. Moisture drastically reduces the adsorption capacity (by ~72% at 0.29 MPa and ~57% at 1.83 MPa) and inhibits the desorption process, evidenced by a strong linear decrease in desorption ratio (DR) (R2 = 0.906) and a sharp exponential drop in the initial desorption rate (R2 = 0.999) with increasing moisture content. The findings provide a mechanistic understanding that is crucial for optimizing coalbed methane (CBM) recovery and enhancing strategies for outburst prevention and methane emission mitigation. The results reveal distinct adsorption and desorption features of intact coal compared with coal powder, which can be useful in total methane utilization and mining safety enhancement. Full article

15 pages, 6254 KB  
Article
Influence of Alpha/Gamma-Stabilizing Elements on the Hot Deformation Behaviour of Ferritic Stainless Steel
by Andrés Núñez, Irene Collado, Marta Muratori, Andrés Ruiz, Juan F. Almagro and David L. Sales
J. Manuf. Mater. Process. 2025, 9(8), 265; https://doi.org/10.3390/jmmp9080265 - 6 Aug 2025
Viewed by 1138
Abstract
This study investigates the hot deformation behaviour and microstructural evolution of two AISI 430 ferritic stainless steel variants: 0A (basic) and 1C (modified). These variants primarily differ in chemical composition, with 0A containing higher austenite-stabilizing elements (C, N) compared to 1C, which features lower interstitial content and slightly higher Si and Cr. This research aimed to optimize hot rolling conditions for enhanced forming properties. Uniaxial hot compression tests were conducted using a Gleeble thermo-mechanical system between 850 and 990 °C at a strain rate of 3.3 s−1, simulating industrial finishing mill conditions. Analysis of flow curves, coupled with detailed microstructural characterization using electron backscatter diffraction, revealed distinct dynamic restoration mechanisms influencing each material’s response. Thermodynamic simulations confirmed significant austenite formation in both materials within the tested temperature range, notably affecting their deformation behaviour despite their initial ferritic state. Material 0A consistently exhibited a strong tendency towards dynamic recrystallization (DRX) across a wider temperature range, particularly at 850 °C. DRX led to a microstructure with a high concentration of low-angle grain boundaries and sharp deformation textures, actively reorienting grains towards energetically favourable configurations. However, under this condition, DRX did not fully complete the recrystallization process. In contrast, material 1C showed greater activity of both dynamic recovery and DRX, leading to a much more advanced state of grain refinement and recrystallization compared to 0A. This indicates that the composition of 1C helps mitigate the strong influence of the deformation temperature on the crystallographic texture, leading to a weaker texture overall than 0A. Full article

15 pages, 4373 KB  
Article
Deep Supervised Attention Network for Dynamic Scene Deblurring
by Seok-Woo Jang, Limin Yan and Gye-Young Kim
Sensors 2025, 25(6), 1896; https://doi.org/10.3390/s25061896 - 18 Mar 2025
Cited by 8 | Viewed by 1644
Abstract
In this study, we propose a dynamic scene deblurring approach using a deep supervised attention network. While existing deep learning-based deblurring methods have significantly outperformed traditional techniques, several challenges remain: (1) Invariant weights: Small convolutional neural network (CNN) models struggle to address the spatially variant nature of dynamic scene deblurring, making it difficult to capture the necessary information. A more effective architecture is needed to better extract valuable features. (2) Limitations of standard datasets: Current datasets often suffer from low data volume, unclear ground truth (GT) images, and a single blur scale, which hinders performance. To address these challenges, we propose a multi-scale, end-to-end recurrent network that utilizes supervised attention to recover sharp images. The supervised attention mechanism focuses the model on features most relevant to ambiguous information as data are passed between networks at different scales. Additionally, we introduce new loss functions to overcome the limitations of the peak signal-to-noise ratio (PSNR) estimation metric. By incorporating a fast Fourier transform (FFT), our method maps features into frequency space, aiding in the recovery of lost high-frequency details. Experimental results demonstrate that our model outperforms previous methods in both quantitative and qualitative evaluations, producing higher-quality deblurring results. Full article
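The FFT-based idea of comparing images in frequency space can be sketched as a generic frequency-domain loss. This is an illustrative formulation only; the paper's actual loss functions are not specified in the abstract:

```python
import numpy as np

def fft_l1_loss(pred, target):
    """Mean L1 distance between the 2D Fourier transforms of a predicted
    and a ground-truth image; lost high-frequency detail (over-smoothed
    edges) shows up directly as a mismatch in this term."""
    return float(np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target))))
```

Adding such a term to a pixel-space loss penalizes blurry reconstructions that a plain PSNR-oriented loss tolerates.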
(This article belongs to the Section Sensor Networks)

28 pages, 11667 KB  
Article
Investigation of the Ionospheric Response on Mother’s Day 2024 Geomagnetic Superstorm over the European Sector
by Krishnendu Sekhar Paul, Haris Haralambous, Mefe Moses, Christina Oikonomou, Stelios M. Potirakis, Nicolas Bergeot and Jean-Marie Chevalier
Atmosphere 2025, 16(2), 180; https://doi.org/10.3390/atmos16020180 - 5 Feb 2025
Cited by 20 | Viewed by 3393
Abstract
The present study examines the negative ionospheric response over Europe during two geomagnetic storms on 10–13 May 2024, known as the Mother’s Day geomagnetic superstorm. The first storm, with a peak SYM-H value of −436 nT, occurred in the interval 10–11 May, while the second, less intense storm (SYM-H~−103 nT), followed in the interval 12–13 May. Using data from four European locations, temporal and spatial variations in ionospheric parameters (TEC, foF2, and hmF2) were analyzed to investigate the morphology of the strong negative response. Sharp electron density (Ne) depletion is associated with the equatorward displacement of the Midlatitude Ionospheric Trough (MIT), confirmed by Swarm satellite data. A key finding was the absence of foF2 and hmF2 values over all ionosonde stations during the recovery phase of the storms, likely due to the coupling between the Equatorial Ionization Anomaly (EIA) crests and the auroral ionosphere influenced by the intense uplift of the F layer. Relevant distinct features such as Large-scale Travelling Ionospheric Disturbance (LSTID) signatures and Spread F were also noted, particularly during the initial and main phase of the first storm over high midlatitude regions. Regional effects varied, with high European midlatitudes exhibiting different features compared to lower European latitude areas. Full article
(This article belongs to the Special Issue Feature Papers in Upper Atmosphere (2nd Edition))

23 pages, 1890 KB  
Article
Physics-Informed Neural Networks for Modal Wave Field Predictions in 3D Room Acoustics
by Stefan Schoder
Appl. Sci. 2025, 15(2), 939; https://doi.org/10.3390/app15020939 - 18 Jan 2025
Cited by 3 | Viewed by 4875
Abstract
The generalization of Physics-Informed Neural Networks (PINNs) used to solve the inhomogeneous Helmholtz equation in a simplified three-dimensional room is investigated. PINNs are appealing since they can efficiently integrate a partial differential equation and experimental data by minimizing a loss function. However, a previous study experienced limitations in acoustics regarding the source term. A challenging but realistic excitation case is a confined (e.g., single-point) excitation area, yielding a smooth spatial wave field periodically with the wavelength. Compared to studies using smooth (unrealistic) sound excitation, the network’s generalization capabilities regarding a realistic sound excitation are addressed. Different methods like hyperparameter optimization, adaptive refinement, Fourier feature engineering, and locally adaptive activation functions with slope recovery are tested to tailor the PINN’s accuracy to an experimentally validated finite element analysis reference solution computed with openCFS. The hyperparameter study and optimization are conducted regarding the network depth and width, the learning rate, the used activation functions, and the deep learning backends (PyTorch 2.5.1, TensorFlow 2.18.0 1, TensorFlow 2.18.0 2, JAX 0.4.39). A modified (feature-engineered) PINN architecture was designed using input feature engineering to include the dispersion relation of the wave in the neural network. For smoothly (unrealistic) distributed sources, it was shown that the standard PINNs and the feature-engineered PINN converge to the analytic solution, with a relative error of 0.28% and 2×10⁻⁴%, respectively. The locally adaptive activation functions with slope recovery lead to a relative error of 0.086% with a source sharpness of s=1 m. Similar relative errors were obtained for the case s=0.2 m using adaptive refinement. The feature-engineered PINN significantly outperformed the results of previous studies regarding accuracy.
Furthermore, the trainable parameters were reduced to a fraction (around 5%) by Bayesian hyperparameter optimization, and likewise the training time (around 3%) was reduced compared to the standard PINN formulation. By narrowing this excitation towards a single point, the convergence rate and the minimum errors obtained by all presented network architectures increased. The feature-engineered architecture yielded an error one order of magnitude higher (0.20%) than the 0.019% of the standard PINN formulation with a source sharpness of s=1 m. It outperformed the finite element analysis and the standard PINN in terms of time needed to obtain the solution: the FEM required 15 min and 30 s on an AMD Ryzen 7 Pro 8840HS CPU (AMD, Santa Clara, CA, USA), compared to about 20 min for the standard PINN and just under a minute for the feature-engineered PINN, both trained on a Tesla T4 GPU (NVIDIA, Santa Clara, CA, USA). Full article
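The "input feature engineering" that encodes the dispersion relation can be illustrated with sinusoidal input features at the acoustic wavenumber k = 2πf/c. This is an assumed construction for illustration; the paper's exact feature map may differ:

```python
import numpy as np

def fourier_features(xyz, k):
    """Map N x 3 room coordinates to [sin(k*x), sin(k*y), sin(k*z),
    cos(k*x), cos(k*y), cos(k*z)] so the network's inputs already
    oscillate at the acoustic wavenumber k = 2*pi*f/c."""
    xyz = np.atleast_2d(xyz)
    return np.concatenate([np.sin(k * xyz), np.cos(k * xyz)], axis=1)
```

Feeding such features to a small network biases it towards solutions satisfying the Helmholtz dispersion relation, which is the usual motivation for Fourier feature engineering in wave problems.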
(This article belongs to the Special Issue Artificial Intelligence in Acoustic Simulation and Design)

17 pages, 6219 KB  
Article
DGGNets: Deep Gradient-Guidance Networks for Speckle Noise Reduction
by Li Wang, Jinkai Li, Yi-Fei Pu, Hao Yin and Paul Liu
Fractal Fract. 2024, 8(11), 666; https://doi.org/10.3390/fractalfract8110666 - 15 Nov 2024
Cited by 3 | Viewed by 2313
Abstract
Speckle noise is a granular interference that degrades image quality in coherent imaging systems, including underwater sonar, Synthetic Aperture Radar (SAR), and medical ultrasound. This study aims to enhance speckle noise reduction through advanced deep learning techniques. We introduce the Deep Gradient-Guidance Network (DGGNet), which features an architecture comprising one encoder and two decoders—one dedicated to image recovery and the other to gradient preservation. Our approach integrates a gradient map and fractional-order total variation into the loss function to guide training. The gradient map provides structural guidance for edge preservation and directs the denoising branch to focus on sharp regions, thereby preventing over-smoothing. The fractional-order total variation mitigates detail ambiguity and excessive smoothing, ensuring rich textures and detailed information are retained. Extensive experiments yield an average Peak Signal-to-Noise Ratio (PSNR) of 31.52 dB and a Structural Similarity Index (SSIM) of 0.863 across various benchmark datasets, including McMaster, Kodak24, BSD68, Set12, and Urban100. DGGNet outperforms existing methods, such as RIDNet, which achieved a PSNR of 31.42 dB and an SSIM of 0.853, thereby establishing new benchmarks in speckle noise reduction. Full article

13 pages, 4701 KB  
Article
Variations in the Thermomechanical and Structural Properties during the Cooling of Shape-Memory R-PETG
by Ștefan-Dumitru Sava, Bogdan Pricop, Radu-Ioachim Comăneci, Nicanor Cimpoeșu, Mihai Popa, Nicoleta-Monica Lohan and Leandru-Gheorghe Bujoreanu
Polymers 2024, 16(14), 1965; https://doi.org/10.3390/polym16141965 - 9 Jul 2024
Cited by 3 | Viewed by 1672
Abstract
One of the useful features of 3D-printed specimens of recycled polyethylene terephthalate glycol (R-PETG) is the ability to repetitively develop free recovery as well as the work-generating, shape-memory effect. This behavior is enabled by the R-PETG’s capacity to stiffen during cooling, thus allowing for a new temporary shape to be induced. Aiming to devise an explanation for the polymer’s stiffening, in this study, the variation in some of the R-PETG’s parameters during cooling is emphasized and discussed. The evolution of an R-PETG filament’s shape was monitored during room-temperature-bending heating–cooling cycles. Straight-shape recovery and the complete loss of stiffness were observed at the start and the end of heating, respectively, followed by the forced straightening of the filament, performed by the operator, around 40 °C, during cooling. The tests performed by dynamic mechanical analysis disclosed the rise of the storage modulus (E’) after 100 °C heating followed by either liquid-nitrogen- or air-cooling to room temperature, in such a way that E’ was always larger after cooling than initially. Static tests emphasized a peculiar stress variation during a heating–cooling cycle applied in air, within the heating chamber of the tensile testing machine. Tensile-failure tests were performed at −10 °C at a rate of 100 mm/min, with specimens printed at various deposition directions between 10 and 40° to the transversal direction. The specimens printed at 40°, which had the largest ultimate strains, were broken with tensile rates between 100 and 500 mm/min. Increasing the deformation rate favored the shift from crazing to delamination failure modes. The correlation between the structural changes, the sharp E’ increase on heating, and the stiffening induced by cooling represents a novel approach that enables the use of 3D-printed R-PETG for the fabrication of the active parts of low-priced lightweight resettable actuators. Full article
(This article belongs to the Special Issue Recycling of Plastic and Rubber Wastes, 2nd Edition)

14 pages, 2556 KB  
Article
Mathematical Models for Forecasting Unstable Economic Processes in the Eurozone
by Askar Akaev, Alexander Zvyagintsev, Tessaleno Devezas, Askar Sarygulov and Andrea Tick
Mathematics 2023, 11(21), 4544; https://doi.org/10.3390/math11214544 - 3 Nov 2023
Cited by 1 | Viewed by 3816
Abstract
In an unstable economic climate, what all market participants want to know is when the recession will be overcome and what measures and means to use for economic recovery. In this regard, the process through which the Eurozone economy has gained momentum since the summer of 2022 has been a volatile one. This was reflected in a sharp rise in the price level, followed by a sharp rise in the ECB interest rates. The purpose of this paper is to provide short-term forecasts of the main parameters of monetary and fiscal policy by the euro area monetary authorities, based on a model developed by the authors. The distinctive feature of the proposed model lies in the particularly careful selection of parameter values based on actual statistical data. The statistics used for the proposed model cover the period from 2015 to December 2022. The simulation results show that the European Central Bank (ECB) needs to maintain a policy of high interest rates for a period of 12 to 14 months, which will help to bring inflation down to 2–3 percent in the future and move to a phase of sustainable economic growth. Full article
(This article belongs to the Special Issue Quantitative Methods for Economic Policy and Public Economics)

27 pages, 1244 KB  
Article
Generalized Penalized Constrained Regression: Sharp Guarantees in High Dimensions with Noisy Features
by Ayed M. Alrashdi, Meshari Alazmi and Masad A. Alrasheedi
Mathematics 2023, 11(17), 3706; https://doi.org/10.3390/math11173706 - 28 Aug 2023
Cited by 1 | Viewed by 2008
Abstract
The generalized penalized constrained regression (G-PCR) is a penalized model for high-dimensional linear inverse problems with structured features. This paper presents a sharp error performance analysis of the G-PCR in the over-parameterized high-dimensional setting. The analysis is carried out under the assumption of a noisy or erroneous Gaussian features matrix. To assess the performance of the G-PCR problem, the study employs multiple metrics such as prediction risk, cosine similarity, and the probabilities of misdetection and false alarm. These metrics offer valuable insights into the accuracy and reliability of the G-PCR model under different circumstances. Furthermore, the derived results are specialized and applied to well-known instances of G-PCR, including l1-norm penalized regression for sparse signal recovery and l2-norm (ridge) penalization. These specific instances are widely utilized in regression analysis for purposes such as feature selection and model regularization. To validate the obtained results, the paper provides numerical simulations conducted on both real-world and synthetic datasets. Using extensive simulations, we show the universality and robustness of the results of this work to the assumed Gaussian distribution of the features matrix. We empirically investigate the so-called double descent phenomenon and show how optimal selection of the hyper-parameters of the G-PCR can help mitigate this phenomenon. The derived expressions and insights from this study can be utilized to optimally select the hyper-parameters of the G-PCR. By leveraging these findings, one can make well-informed decisions regarding the configuration and fine-tuning of the G-PCR model, taking into consideration the specific problem at hand as well as the presence of noisy features in the high-dimensional setting. Full article
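As a concrete anchor for the l2 (ridge) special case mentioned above, here is a minimal closed-form ridge estimator. This is standard regression, not the paper's G-PCR estimator:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimate beta = (X^T X + lam * I)^{-1} X^T y.
    lam = 0 recovers ordinary least squares; large lam shrinks the
    coefficients towards zero (the regularization effect)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```

Sweeping lam in such an estimator is exactly the kind of hyper-parameter selection the abstract says can mitigate the double descent phenomenon.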
(This article belongs to the Section E2: Control Theory and Mechanics)

19 pages, 16162 KB  
Article
Evaluation of Preprocessing Methods on Independent Medical Hyperspectral Databases to Improve Analysis
by Beatriz Martinez-Vega, Mariia Tkachenko, Marianne Matkabi, Samuel Ortega, Himar Fabelo, Francisco Balea-Fernandez, Marco La Salvia, Emanuele Torti, Francesco Leporati, Gustavo M. Callico and Claire Chalopin
Sensors 2022, 22(22), 8917; https://doi.org/10.3390/s22228917 - 18 Nov 2022
Cited by 21 | Viewed by 4968
Abstract
Currently, one of the most common causes of death worldwide is cancer. The development of innovative methods to support the early and accurate detection of cancers is required to increase the recovery rate of patients. Several studies have shown that medical Hyperspectral Imaging (HSI) combined with artificial intelligence algorithms is a powerful tool for cancer detection. Various preprocessing methods are commonly applied to hyperspectral data to improve the performance of the algorithms. However, there is currently no standard for these methods, and no studies have compared them so far in the medical field. In this work, we evaluated different combinations of preprocessing steps, including spatial and spectral smoothing, Min-Max scaling, Standard Normal Variate normalization, and a median spatial smoothing technique, with the goal of improving tumor detection in three different HSI databases concerning colorectal, esophagogastric, and brain cancers. Two machine learning and deep learning models were used to perform the pixel-wise classification. The results showed that the choice of preprocessing method affects the performance of tumor identification. The method that showed slightly better results with respect to identifying colorectal tumors was Median Filter preprocessing (0.94 of area under the curve). On the other hand, esophagogastric and brain tumors were more accurately identified using Min-Max scaling preprocessing (0.93 and 0.92 of area under the curve, respectively). However, it is observed that the Median Filter method smooths sharp spectral features, resulting in high variability in the classification performance. Therefore, based on these results, obtained with different databases acquired by different HSI instrumentation, the most relevant preprocessing technique identified in this work is Min-Max scaling. Full article

23 pages, 8946 KB  
Article
LIMOFilling: Local Information Guide Hole-Filling and Sharp Feature Recovery for Manifold Meshes
by Guohua Gou, Haigang Sui, Dajun Li, Zhe Peng, Bingxuan Guo, Wei Yang and Duo Huang
Remote Sens. 2022, 14(2), 289; https://doi.org/10.3390/rs14020289 - 9 Jan 2022
Cited by 10 | Viewed by 3799
Abstract
A manifold mesh, a triangular network for representing 3D objects, is widely used to reconstruct accurate 3D models of an object's structure. The complexity of these objects and self-occlusion, however, can cause cameras to miss some areas, creating holes in the model. Existing hole-filling methods cannot detect holes at the model boundaries, leave overlaps between the newly generated triangles, and lack the ability to recover missing sharp features in the hole region. To solve these problems, we propose LIMOFilling, a new method for filling holes in 3D manifold meshes and recovering sharp features. The proposed method detects boundary holes robustly by constructing local overlap judgments, enables sharp feature recovery using local structure information, and reduces the cost of maintaining manifold meshes, thus enhancing their utility. The novel method has been tested against existing methods on different types of holes in four scenes. Experimental results demonstrate the visual quality of the proposed method and of the generated meshes relative to existing methods. The proposed hole-detection algorithm found almost all of the holes in the different scenes, and qualitatively, the subsequent repairs are difficult to see with the naked eye.
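Hole detection in a manifold mesh typically exploits the property that every interior edge is shared by exactly two triangles, so edges belonging to only one triangle lie on a hole boundary. A hedged sketch of this idea (a generic baseline, not the LIMOFilling algorithm, which additionally applies local overlap judgments for boundary holes):

```python
from collections import defaultdict

def find_hole_loops(triangles):
    """Find boundary loops (hole candidates) in a manifold triangle mesh.
    triangles: iterable of (a, b, c) vertex-index tuples."""
    # Count how many triangles use each undirected edge.
    edge_count = defaultdict(int)
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    # Boundary edges belong to exactly one triangle.
    boundary = [e for e, n in edge_count.items() if n == 1]
    # Chain boundary edges into closed loops.
    adj = defaultdict(list)
    for a, b in boundary:
        adj[a].append(b)
        adj[b].append(a)
    loops, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        loop, cur, prev = [start], start, None
        while True:
            seen.add(cur)
            nxt = [v for v in adj[cur] if v != prev]
            if not nxt or nxt[0] == start:
                break
            prev, cur = cur, nxt[0]
            loop.append(cur)
        loops.append(loop)
    return loops
```

For a quad split into two triangles, the four outer edges form one boundary loop while the shared diagonal is counted twice and ignored; each returned loop is a hole candidate that a filling method would then triangulate.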

20 pages, 5464 KB  
Article
A Lightweight Localization Strategy for LiDAR-Guided Autonomous Robots with Artificial Landmarks
by Sen Wang, Xiaohe Chen, Guanyu Ding, Yongyao Li, Wenchang Xu, Qinglei Zhao, Yan Gong and Qi Song
Sensors 2021, 21(13), 4479; https://doi.org/10.3390/s21134479 - 30 Jun 2021
Cited by 20 | Viewed by 7728
Abstract
This paper proposes and implements a lightweight, "real-time" localization system (SORLA) with artificial landmarks (reflectors), which only uses LiDAR data for laser odometer compensation in cases of high-speed movement or sharp turning. Theoretically, due to the feature-matching mechanism of the LiDAR, the locations of multiple reflectors and the reflector layout are not limited by geometrical relations. A series of algorithms is implemented to find and track the features of the environment, including the reflector localization method, the motion compensation technique, and the reflector matching optimization algorithm. The reflector extraction algorithm identifies reflector candidates and estimates the precise center locations of the reflectors from 2D LiDAR data. The motion compensation algorithm predicts the potential velocity, location, and angle of the robot without odometer errors. Finally, the matching optimization algorithm searches the reflector combinations for the best matching score, which ensures that the correct reflector combination can be found during high-speed movement and fast turning. All these mechanisms guarantee the algorithm's precision and robustness at high speeds and against noisy backgrounds. Our experimental results show that the SORLA algorithm has an average localization error of 6.45 mm at a speed of 0.4 m/s and 9.87 mm at 4.2 m/s, and still works well at an angular velocity of 1.4 rad/s in a sharp turn. The recovery mechanism in the algorithm can handle failure cases of reflector occlusion, and a long-term stability test of 72 h firmly proves the algorithm's robustness. This work shows that the strategy used in the SORLA algorithm is feasible for industry-level navigation with high precision and a promising alternative solution to SLAM.
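The reflector extraction step, identifying candidates and estimating their center locations from a 2D LiDAR scan, can be illustrated by clustering consecutive high-intensity beams and taking each cluster's Cartesian centroid. This is a simplified sketch under our own assumptions (the `threshold` and `gap` parameters are illustrative, not the paper's values):

```python
import math

def extract_reflector_centers(ranges, angles, intensities,
                              threshold=0.8, gap=1):
    """Estimate reflector centers from one 2D LiDAR scan.
    Consecutive beams whose intensity exceeds `threshold` are grouped
    into one candidate; the centroid of each group's Cartesian points
    approximates the reflector center."""
    points = [(r * math.cos(a), r * math.sin(a), i)
              for r, a, i in zip(ranges, angles, intensities)]
    clusters, current, last_idx = [], [], None
    for idx, (x, y, i) in enumerate(points):
        if i >= threshold:
            # A large index gap means a new reflector starts here.
            if last_idx is not None and idx - last_idx > gap and current:
                clusters.append(current)
                current = []
            current.append((x, y))
            last_idx = idx
    if current:
        clusters.append(current)
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters]
```

A full system would refine the centroid with a circle fit against the known reflector radius and feed the centers into the matching optimization stage; the sketch only shows the candidate-grouping idea.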
(This article belongs to the Special Issue Recent Advances in Automated Measuring Systems)
