Search Results (717)

Search Parameters:
Keywords = fixed-time consistency

20 pages, 1209 KB  
Article
Consensus Control of Robot Fractional-Order MAS Based on FOILC with Time Delay
by Zhida Huang, Shuaishuai Lv, Kunpeng Shen, Xiao Jiang and Haibin Yu
Fractal Fract. 2026, 10(2), 93; https://doi.org/10.3390/fractalfract10020093 (registering DOI) - 28 Jan 2026
Abstract
In this paper, we investigate the finite-time consensus problem of a fractional-order multi-agent system with repetitive motion. The system under consideration consists of robotic agents with a leader and a fixed communication topology. A distributed open-closed-loop PDα fractional-order iterative learning control (FOILC) algorithm is proposed. The finite-time uniform convergence of the proposed algorithm is analyzed, and sufficient convergence conditions are derived. The theoretical analysis demonstrates that, as the number of iterations increases, each agent can achieve complete tracking within a finite time by appropriately selecting the gain matrices. Simulation results are presented to verify the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Analysis and Modeling of Fractional-Order Dynamical Networks)
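As a rough illustration of the iteration-domain update behind this abstract, the sketch below runs a classical (integer-order) PD-type ILC for leader-following consensus on three single-integrator agents; the graph, gains, reference trajectory, and agent dynamics are all assumptions chosen for the example, and the fractional PD^α operator and the paper's convergence conditions are not reproduced.

# Toy stand-in for the distributed PD-type ILC update: u_{k+1} = u_k + Gp*xi_k + Gd*d(xi_k)/dt,
# where xi is the distributed (neighbourhood) tracking error. All numbers are illustrative.
import numpy as np

T, dt, iters = 200, 0.01, 30
t = np.arange(T) * dt
xd = np.sin(2 * np.pi * t)                    # leader (desired) trajectory, xd(0) = 0

A = np.array([[0., 1., 0.],                   # follower adjacency
              [1., 0., 1.],
              [0., 1., 0.]])
D = np.diag([1., 0., 1.])                     # which followers see the leader
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
Gp, Gd = 0.1, 0.5                             # learning gains (chosen for contraction)

u = np.zeros((3, T))
for k in range(iters):
    x = np.zeros((3, T))                      # same initial state every iteration
    for n in range(T - 1):                    # simulate dx_i/dt = u_i
        x[:, n + 1] = x[:, n] + dt * u[:, n]
    e = xd - x                                # per-agent tracking error
    xi = (L + D) @ e                          # distributed (neighbourhood) error
    u = u + Gp * xi + Gd * np.gradient(xi, dt, axis=1)
    if k in (0, iters - 1):
        print(f"iteration {k:2d}: max tracking error = {np.abs(e).max():.4f}")
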
15 pages, 1881 KB  
Article
Finite-Range Scalar–Tensor Gravity: Constraints from Cosmology and Galaxy Dynamics
by Elie Almurr and Jean Claude Assaf
Galaxies 2026, 14(1), 7; https://doi.org/10.3390/galaxies14010007 - 27 Jan 2026
Abstract
Objective: We examine whether a finite-range scalar–tensor modification of gravity can be simultaneously compatible with cosmological background data, galaxy rotation curves, and local/astrophysical consistency tests, while satisfying the luminal gravitational-wave propagation constraint (cT = 1) implied by GW170817 at low redshifts. Methods: We formulate the model at the level of an explicit covariant action and derive the corresponding field equations; for cosmological inferences, we adopt an effective background closure in which the late-time dark-energy density is modulated by a smooth activation function characterized by a length scale λ and amplitude ϵ. We constrain this background model using Pantheon+, DESI Gaussian Baryon Acoustic Oscillations (BAOs), and a Planck acoustic-scale prior, including an explicit ΛCDM comparison. We then propagate the inferred characteristic length by fixing λ in the weak-field Yukawa kernel used to model 175 SPARC galaxy rotation curves with standard baryonic components and a controlled spherical approximation for the scalar response. Results: The joint background fit yields Ωm = 0.293 ± 0.007, λ = 7.69 (+1.85/−1.71) Mpc, and H0 = 72.33 ± 0.50 km s⁻¹ Mpc⁻¹. With λ fixed, the baryons + scalar model describes the SPARC sample with a median reduced chi-square of χν² = 1.07; for a 14-galaxy subset, this model is moderately preferred over the standard baryons + NFW halo description in the finite-sample information criteria, with a mean ΔAICc outcome in favor of the baryons + scalar model (≈2.8). A Vainshtein-type screening completion with Λ = 1.3×108 eV satisfies Cassini, Lunar Laser Ranging, and binary pulsar bounds while keeping the kpc scales effectively unscreened. For linear growth observables, we adopt a conservative General Relativity-like baseline (μ0 = 0) and show that current fσ8 data are consistent with μ0 = 0 for our best-fit background; the model predicts S8 = 0.791, consistent with representative cosmic-shear constraints. Conclusions: Within the present scope (action-level weak-field dynamics for galaxy modeling plus an explicitly stated effective closure for background inference), the results support a mutually compatible characteristic length at the Mpc scale; however, a full perturbation-level implementation of the covariant theory remains an issue for future work, and the role of cold dark matter beyond galaxy scales is not ruled out. Full article
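As a hedged illustration of the weak-field Yukawa kernel mentioned above, the sketch below evaluates the point-mass circular-velocity formula v²(r) = GM/r · [1 + α(1 + r/λ) e^(−r/λ)]; the coupling α and the baryonic mass are placeholder values, only λ follows the abstract's ≈7.7 Mpc scale, and the paper's extended-mass kernel and SPARC fitting pipeline are not reproduced.

# Minimal sketch (not the paper's pipeline): circular velocity from a point-mass
# Yukawa-corrected potential  Phi(r) = -G M / r * (1 + alpha * exp(-r/lambda)),
# giving  v^2(r) = G M / r * (1 + alpha * (1 + r/lambda) * exp(-r/lambda)).
import numpy as np

G = 4.30091e-6                 # kpc (km/s)^2 / Msun
lam = 7.69e3                   # Yukawa range in kpc (7.69 Mpc, from the abstract)
alpha = 0.3                    # illustrative scalar coupling strength (assumed)
M_bar = 5e10                   # illustrative enclosed baryonic mass [Msun] (assumed)

r = np.linspace(1.0, 30.0, 60)                      # galactocentric radius [kpc]
v_newton = np.sqrt(G * M_bar / r)                   # Newtonian (baryons only)
yuk = alpha * (1.0 + r / lam) * np.exp(-r / lam)    # Yukawa correction factor
v_scalar = np.sqrt(G * M_bar / r * (1.0 + yuk))     # baryons + scalar

for ri, vn, vs in zip(r[::10], v_newton[::10], v_scalar[::10]):
    print(f"r = {ri:5.1f} kpc   v_N = {vn:6.1f} km/s   v_Yukawa = {vs:6.1f} km/s")
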
31 pages, 2800 KB  
Article
Intelligent Fusion: A Resilient Anomaly Detection Framework for IoMT Health Devices
by Flavio Pastore, Raja Waseem Anwar, Nafaa Hadi Jabeur and Saqib Ali
Information 2026, 17(2), 117; https://doi.org/10.3390/info17020117 - 26 Jan 2026
Viewed by 35
Abstract
Modern healthcare systems increasingly depend on wearable Internet of Medical Things (IoMT) devices for the continuous monitoring of patients’ physiological parameters. It remains challenging to differentiate between genuine physiological anomalies, sensor faults, and malicious cyber interference. In this work, we propose a hybrid fusion framework designed to attribute the most plausible source of an anomaly, thereby supporting more reliable clinical decisions. The proposed framework is developed and evaluated using two complementary datasets: CICIoMT2024 for modelling security threats and a large-scale intensive care cohort from MIMIC-IV for analysing key vital signs and bedside interventions. The core of the system combines a supervised XGBoost classifier for attack detection with an unsupervised LSTM autoencoder for identifying physiological and technical deviations. To improve clinical realism and avoid artefacts introduced by quantised or placeholder measurements, the physiological module incorporates quality-aware preprocessing and missingness indicators. The fusion decision policy is calibrated under prudent, safety-oriented constraints to limit false escalation. Rather than relying on fixed fusion weights, we train a lightweight fusion classifier that combines complementary evidence from the security and clinical modules, and we select class-specific probability thresholds on a dedicated calibration split. The security module achieves high cross-validated performance, while the clinical model captures abnormal physiological patterns at scale, including deviations consistent with both acute deterioration and data-quality faults. Explainability is provided through SHAP analysis for the security module and reconstruction-error attribution for physiological anomalies. The integrated fusion framework achieves a final accuracy of 99.76% under prudent calibration and a Matthews Correlation Coefficient (MCC) of 0.995, with an average end-to-end inference latency of 84.69 ms (p95 upper bound of 107.30 ms), supporting near real-time execution in edge-oriented settings. While performance is strong, clinical severity labels are operationalised through rule-based proxies, and cross-domain fusion relies on harmonised alignment assumptions. These aspects should be further evaluated using realistic fault traces and prospective IoMT data. Despite these limitations, the proposed framework offers a practical and explainable approach for IoMT-based patient monitoring. Full article
(This article belongs to the Special Issue Intrusion Detection Systems in IoT Networks)
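A minimal sketch of the fusion step described above, assuming synthetic module scores: a lightweight logistic-regression fusion over a security-module probability and a clinical reconstruction error, with an escalation threshold picked on a calibration split under a false-positive cap. Names, data, and thresholds are invented for illustration.

# Hedged sketch of the fusion idea only; all data and thresholds are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
attack_score = rng.beta(2, 5, n)                    # stand-in for XGBoost attack probability
recon_error = rng.gamma(2.0, 1.0, n)                # stand-in for LSTM-AE reconstruction error
y = ((attack_score > 0.5) | (recon_error > 4.0)).astype(int)   # toy anomaly label

X = np.column_stack([attack_score, recon_error])
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

fusion = LogisticRegression().fit(X_tr, y_tr)       # lightweight fusion classifier

# Pick a threshold on the calibration split under a "prudent" constraint:
# keep the false-escalation rate (FPR) below 5%.
p_cal = fusion.predict_proba(X_cal)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)

def fpr(th):
    pred = p_cal >= th
    neg = y_cal == 0
    return pred[neg].mean()

prudent_th = min((th for th in thresholds if fpr(th) <= 0.05), default=0.95)
print("calibrated escalation threshold:", prudent_th)
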
17 pages, 1129 KB  
Article
Kinematic and Kinetic Adaptations to Step Cadence Modulation During Walking in Healthy Adults
by Joan Lluch Fruns, Maria Cristina Manzanares-Céspedes, Laura Pérez-Palma and Carles Vergés Salas
J. Funct. Morphol. Kinesiol. 2026, 11(1), 53; https://doi.org/10.3390/jfmk11010053 - 26 Jan 2026
Viewed by 35
Abstract
Background: Walking cadence is commonly adjusted in sport and rehabilitation, yet its effects on spatiotemporal gait parameters and regional plantar pressure distribution under controlled speed conditions remain incompletely characterized. Therefore, this study aimed to determine whether imposed cadence increases at a constant walking speed would (i) systematically reduce temporal gait parameters while preserving inter-limb symmetry and (ii) be associated with region-specific increases in forefoot plantar loading, representing the primary novel contribution of this work. Methods: Fifty-two adults walked at three imposed cadences (110, 120, 130 steps·min−1) while maintaining a fixed treadmill speed of 1.39 m·s−1 via auditory biofeedback. Spatiotemporal parameters were recorded with an OptoGait system, and plantar pressure distribution was measured using in-shoe pressure insoles. Normally distributed variables were analyzed using repeated-measures ANOVA, whereas plantar pressure metrics were assessed using the Friedman test, followed by Wilcoxon signed-rank post-hoc comparisons with false discovery rate (FDR) correction. Associations between temporal parameters and plantar loading metrics (peak pressure, pressure–time integral) were examined using Spearman’s rank correlation with FDR correction (α = 0.05). Results: Increasing cadence produced progressive reductions in gait cycle duration (~8–10%), contact time (~7–8%), and step time (all p < 0.01), while inter-limb symmetry indices remained below 2% across conditions. Peak plantar pressure increased significantly in several forefoot regions with increasing cadence (all p_FDR < 0.05), whereas changes in the first ray were less consistent across conditions. Regional forefoot pressure–time integral also increased modestly with higher cadence (p_FDR < 0.01). Spearman’s correlations revealed moderate negative associations between temporal gait parameters and global plantar loading metrics (ρ = −0.38 to −0.46, all p_FDR < 0.05). Conclusions: At a constant walking speed, increasing cadence systematically shortens temporal gait components and is associated with small but consistent region-specific increases in forefoot plantar loading. These findings highlight cadence as a key temporal constraint shaping plantar loading patterns during steady-state walking and support the existence of concurrent temporal–mechanical adaptations. Full article
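The nonparametric pipeline named in the Methods (Friedman test, pairwise Wilcoxon post-hoc tests, Benjamini–Hochberg FDR correction) can be sketched with SciPy and statsmodels on synthetic peak-pressure data for the three cadence conditions; all values below are invented for illustration.

# Sketch of the Friedman + Wilcoxon + FDR pipeline on synthetic repeated-measures data.
import numpy as np
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n = 52                                        # participants, as in the study
base = rng.normal(250, 40, n)                 # forefoot peak pressure at 110 steps/min [kPa]
cond = {110: base,
        120: base + rng.normal(8, 10, n),     # small simulated increase
        130: base + rng.normal(15, 10, n)}

stat, p = friedmanchisquare(cond[110], cond[120], cond[130])
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

pairs = list(combinations(sorted(cond), 2))
p_raw = [wilcoxon(cond[a], cond[b]).pvalue for a, b in pairs]
reject, p_fdr, _, _ = multipletests(p_raw, alpha=0.05, method="fdr_bh")
for (a, b), pr, pf, rj in zip(pairs, p_raw, p_fdr, reject):
    print(f"{a} vs {b}: p = {pr:.4f}, p_FDR = {pf:.4f}, significant = {rj}")
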
29 pages, 6199 KB  
Article
Multi-Objective Optimization and Load-Flow Analysis in Complex Power Distribution Networks
by Tariq Ali, Muhammad Ayaz, Husam S. Samkari, Mohammad Hijji, Mohammed F. Allehyani and El-Hadi M. Aggoune
Fractal Fract. 2026, 10(2), 82; https://doi.org/10.3390/fractalfract10020082 - 25 Jan 2026
Viewed by 56
Abstract
Modern power distribution networks are increasingly challenged with nonlinear operating conditions, the high penetration of distributed energy resources, and conflicting operational objectives such as loss minimization and voltage regulation. Existing load-flow optimization approaches often suffer from slow convergence, premature stagnation in non-convex search spaces, and limited robustness when handling conflicting multi-objective performance criteria under fixed network constraints. To address these challenges, this paper proposes a Fractional Multi-Objective Load Flow Optimizer (FMOLFO), which integrates a fractional-order numerical regularization mechanism with an adaptive Pareto-based Differential Evolution framework. The fractional-order formulation employed in FMOLFO operates over an auxiliary iteration domain and serves as a numerical regularization strategy to improve the sensitivity conditioning and convergence stability of the load-flow solution, rather than modeling the physical time dynamics or memory effects of the power system. The optimization framework simultaneously minimizes physically consistent active power loss and voltage deviation within existing network operating constraints. Extensive simulations on IEEE 33-bus and 69-bus benchmark distribution systems demonstrate that FMOLFO achieves an up to 27% reduction in active power loss, improved voltage profile uniformity, and faster convergence compared with classical Newton–Raphson and metaheuristic baselines evaluated under identical conditions. The proposed framework is intended as a numerically enhanced, optimization-driven load-flow analysis tool, rather than a control- or dispatch-oriented optimal power flow formulation. Full article
(This article belongs to the Special Issue Fractional Dynamics and Control in Multi-Agent Systems and Networks)
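One building block of any Pareto-based Differential Evolution scheme like FMOLFO is the non-dominance test over the two objectives (active power loss, voltage deviation). A minimal sketch, with random placeholder candidates standing in for load-flow evaluations:

# Illustrative helper only: extract the non-dominated (Pareto) set for two objectives.
# The paper's FMOLFO embeds this kind of dominance test inside an adaptive DE loop
# with a fractional-order regularized load-flow solver, which is not reproduced here.
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objs):
    """Return indices of non-dominated rows of an (n, 2) objective array."""
    keep = []
    for i, oi in enumerate(objs):
        if not any(dominates(objs[j], oi) for j in range(len(objs)) if j != i):
            keep.append(i)
    return keep

rng = np.random.default_rng(3)
pop = np.column_stack([rng.uniform(80, 220, 40),      # active power loss [kW] (placeholder)
                       rng.uniform(0.01, 0.08, 40)])  # total voltage deviation [p.u.] (placeholder)
front = pareto_front(pop)
print(f"{len(front)} non-dominated candidates out of {len(pop)}")
for i in front:
    print(f"loss = {pop[i, 0]:6.1f} kW, voltage deviation = {pop[i, 1]:.3f} p.u.")
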
23 pages, 2628 KB  
Article
Scattering-Based Self-Supervised Learning for Label-Efficient Cardiac Image Segmentation
by Serdar Alasu and Muhammed Fatih Talu
Electronics 2026, 15(3), 506; https://doi.org/10.3390/electronics15030506 - 24 Jan 2026
Viewed by 201
Abstract
Deep learning models based on supervised learning rely heavily on large annotated datasets; particularly in the context of medical image segmentation, the requirement for pixel-level annotations makes the labeling process labor-intensive, time-consuming, and expensive. To overcome these limitations, self-supervised learning (SSL) has emerged as a promising alternative that learns generalizable representations from unlabeled data; however, existing SSL frameworks often employ highly parameterized encoders that are computationally expensive and may lack robustness in label-scarce settings. In this work, we propose a scattering-based SSL framework that integrates Wavelet Scattering Networks (WSNs) and Parametric Scattering Networks (PSNs) into a Bootstrap Your Own Latent (BYOL) pretraining pipeline. By replacing the initial stages of the BYOL encoder with fixed or learnable scattering-based front-ends, the proposed method reduces the number of learnable parameters while embedding translation-invariant and small deformation-stable representations into the SSL pipeline. The pretrained encoders are transferred to a U-Net and fine-tuned for cardiac image segmentation on two datasets with different imaging modalities, namely, cardiac cine MRI (ACDC) and cardiac CT (CHD), under varying amounts of labeled data. Experimental results show that scattering-based SSL pretraining consistently improves segmentation performance over random initialization and ImageNet pretraining in low-label regimes, with particularly pronounced gains when only a few labeled patients are available. Notably, the PSN variant achieves improvements of 4.66% and 2.11% in average Dice score over standard BYOL with only 5 and 10 labeled patients, respectively, on the ACDC dataset. These results demonstrate that integrating mathematically grounded scattering representations into SSL pipelines provides a robust and data-efficient initialization strategy for cardiac image segmentation, particularly under limited annotation and domain shift. Full article
(This article belongs to the Section Artificial Intelligence)
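One concrete ingredient of the BYOL pretraining pipeline referenced above is the exponential-moving-average update of the target encoder. A minimal PyTorch sketch, with a tiny frozen front-end standing in for the fixed scattering stage (architecture, dimensions, and momentum are assumptions):

# Hedged sketch of the BYOL target-network EMA update; the encoder is a stand-in,
# not the paper's WSN/PSN + U-Net architecture.
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.front = nn.Conv2d(1, 8, 3, stride=2, padding=1)     # stand-in front-end
        for p in self.front.parameters():                        # kept fixed, as a
            p.requires_grad = False                              # scattering stage would be
        self.head = nn.Sequential(nn.Conv2d(8, 16, 3, stride=2, padding=1),
                                  nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1),
                                  nn.Flatten(),
                                  nn.Linear(16, 32))             # projection dim (assumed)

    def forward(self, x):
        return self.head(self.front(x))

@torch.no_grad()
def ema_update(target, online, tau=0.996):
    """BYOL target update: theta_target <- tau*theta_target + (1-tau)*theta_online."""
    for pt, po in zip(target.parameters(), online.parameters()):
        pt.mul_(tau).add_(po, alpha=1.0 - tau)

online = TinyEncoder()
target = copy.deepcopy(online)       # target starts as a copy and is never backpropagated
z = online(torch.randn(4, 1, 32, 32))
ema_update(target, online)
print("projection shape:", tuple(z.shape))
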
20 pages, 3656 KB  
Article
Efficient Model for Detecting Steel Surface Defects Utilizing Dual-Branch Feature Enhancement and Downsampling
by Quan Lu, Minsheng Gong and Linfei Yin
Appl. Sci. 2026, 16(3), 1181; https://doi.org/10.3390/app16031181 - 23 Jan 2026
Viewed by 62
Abstract
Surface defect evaluation in steel production demands both high inference speed and accuracy for efficient production. However, existing methods face two critical challenges: (1) the diverse dimensions and irregular morphologies of surface defects reduce detection accuracy, and (2) computationally intensive feature extraction slows inference. In response to these challenges, this study proposes an innovative network based on dual-branch feature enhancement and downsampling (DFED-Net). First, an atrous convolution and multi-scale dilated attention fusion module (AMFM) is developed, incorporating local–global feature representation. By emphasizing local details and global semantics, the module suppresses noise interference and enhances the capability of the model to separate small-object features from complex backgrounds. Additionally, a dual-branch downsampling module (DBDM) is developed to preserve the fine details related to scale that are typically lost during downsampling. The DBDM efficiently fuses semantic and detailed information, improving consistency across feature maps at different scales. A lightweight dynamic upsampling (DySample) is introduced to supplant traditional fixed methods with a learnable, adaptive approach, which retains critical feature information more flexibly while reducing redundant computation. Experimental evaluation shows a mean average precision (mAP) of 81.5% on the Northeastern University surface defect detection (NEU-DET) dataset, a 5.2% increase compared to the baseline, while maintaining a real-time inference speed of 120 FPS compared to the 118 FPS of the baseline. The proposed DFED-Net provides strong support for the development of automated visual inspection systems for detecting defects on steel surfaces. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
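A minimal sketch of the dual-branch downsampling idea (DBDM) described above: a strided-convolution branch and a pooling branch concatenated and fused at half resolution. The exact layer layout of DFED-Net is not specified here; this module structure is an assumption for illustration.

# Illustrative dual-branch downsampling block: semantic (strided conv) + detail (pooling)
# branches, concatenated and fused so both survive the 2x spatial reduction.
import torch
import torch.nn as nn

class DualBranchDownsample(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_branch = nn.Sequential(                        # semantic branch
            nn.Conv2d(in_ch, out_ch // 2, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))
        self.pool_branch = nn.Sequential(                        # detail branch
            nn.MaxPool2d(2),
            nn.Conv2d(in_ch, out_ch // 2, 1),
            nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(out_ch, out_ch, 1)                 # merge the two branches

    def forward(self, x):
        return self.fuse(torch.cat([self.conv_branch(x), self.pool_branch(x)], dim=1))

x = torch.randn(1, 64, 80, 80)
print(DualBranchDownsample(64, 128)(x).shape)    # -> torch.Size([1, 128, 40, 40])
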
17 pages, 1183 KB  
Article
Blind Channel Estimation Based on K-Means Clustering with Resource Grouping in Fading Channel
by Yumin Kim, Jonghyun Bang and Taehyoung Kim
Mathematics 2026, 14(3), 400; https://doi.org/10.3390/math14030400 - 23 Jan 2026
Viewed by 120
Abstract
This paper proposes a novel blind channel estimation method based on a K-means clustering algorithm with efficient time–frequency resource grouping. Existing K-means-based blind channel estimation techniques assume that received symbols within the coherence time and coherence bandwidth experience the same channel response, which is not valid under fading channels with severe time variation or frequency selectivity. To overcome this limitation, this paper proposes an efficient time–frequency resource grouping pattern selection algorithm. The proposed method introduces the concept of an effective number of data symbols, which eliminates patterns that are computationally expensive yet performance-irrelevant, thereby reducing the search space compared to exhaustive search. Two strategies are applied: Time-main, which prioritizes grouping in the time domain, and Freq-main, which prioritizes grouping in the frequency domain. Simulation results demonstrate that the proposed method consistently outperforms conventional and fixed-pattern approaches across various channel conditions. Full article
(This article belongs to the Special Issue Computational Methods in Wireless Communications with Applications)
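The core K-means step can be sketched as follows: cluster the received QPSK symbols of one time–frequency group (assumed to share a single channel coefficient) and read the channel gain off the cluster centroids, with phase recoverable only up to the π/2 QPSK ambiguity. The SNR, group size, and channel value are illustrative, not from the paper.

# Minimal sketch of K-means-based blind channel estimation over one resource group.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n = 400                                              # symbols in one time-frequency group
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))  # unit-energy QPSK
h = 0.8 + 0.5j                                       # flat channel coefficient for the group
snr_db = 15
noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
y = h * qpsk + noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

feats = np.column_stack([y.real, y.imag])
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(feats)
centroids = km.cluster_centers_[:, 0] + 1j * km.cluster_centers_[:, 1]

h_mag_est = np.mean(np.abs(centroids))               # |h| * |s| with |s| = 1
# Phase up to the pi/2 ambiguity: s^4 = -1 for this QPSK mapping, so -mean(c^4) ~ h^4.
h_phase_est = np.angle(-np.mean(centroids ** 4)) / 4.0
print(f"|h| true = {abs(h):.3f}, |h| estimated = {h_mag_est:.3f}")
print(f"arg(h) mod pi/2: true = {np.angle(h) % (np.pi / 2):.3f}, est = {h_phase_est % (np.pi / 2):.3f}")
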
21 pages, 9102 KB  
Article
A Lightweight Edge AI Framework for Adaptive Traffic Signal Control in Mid-Sized Philippine Cities
by Alex L. Maureal, Franch Maverick A. Lorilla and Ginno L. Andres
Sustainability 2026, 18(3), 1147; https://doi.org/10.3390/su18031147 - 23 Jan 2026
Viewed by 150
Abstract
Mid-sized Philippine cities commonly rely on fixed-time traffic signal plans that cannot respond to short-term, demand-driven surges, resulting in measurable idle time at stop lines, increased delay, and unnecessary emissions. While adaptive signal control has demonstrated performance benefits, many existing solutions depend on centralized infrastructure and high-bandwidth connectivity, limiting their applicability for resource-constrained local government units (LGUs). This study reports a field deployment of TrafficEZ, a lightweight edge AI signal controller that reallocates green splits locally using traffic-density approximations derived from cabinet-mounted cameras. The controller follows a macroscopic, cycle-level control abstraction consistent with Transportation System Models (TSMs) and does not rely on stationary flow–density–speed (fundamental diagram) assumptions. The system estimates queued demand and discharge efficiency on-device and updates green time each cycle without altering cycle length, intergreen intervals, or pedestrian safety timings. A quasi-experimental pre–post evaluation was conducted at three signalized intersections in El Salvador City using an existing 125 s, three-phase fixed-time plan as the baseline. Observed field results show average per-vehicle delay reductions of 18–32%, with reclaimed effective green translating into approximately 50–200 additional vehicles per hour served at the busiest approaches. Box-occupancy durations shortened, indicating reduced spillback risk, while conservative idle-time estimates imply corresponding CO2 savings during peak periods. Because all decisions run locally within the signal cabinet, operation remained robust during backhaul interruptions and supported incremental, intersection-by-intersection deployment; per-cycle actions were logged to support auditability and governance reporting. These findings demonstrate that density-driven edge AI can deliver practical mobility, reliability, and sustainability gains for LGUs while supporting evidence-based governance and performance reporting. Full article
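The cycle-level control abstraction described above can be sketched as a green-split reallocation rule: distribute the usable green time in proportion to per-approach demand while holding the cycle length, lost time, and minimum greens fixed. The demand values and minimum-green figure below are placeholders; only the 125 s, three-phase structure follows the abstract.

# Sketch of cycle-level green reallocation; not the TrafficEZ controller itself.
def reallocate_green(demand, cycle=125.0, lost_time=15.0, min_green=12.0):
    """Return per-phase green times [s] proportional to demand, respecting minimums."""
    usable = cycle - lost_time                      # green time available to distribute
    total = sum(demand)
    green = [max(min_green, usable * d / total) for d in demand]
    # Rescale the flexible part so the greens still sum exactly to the usable time.
    excess = sum(green) - usable
    flexible = [g - min_green for g in green]
    flex_total = sum(flexible)
    if excess > 0 and flex_total > 0:
        green = [min_green + f * (1 - excess / flex_total) for f in flexible]
    return [round(g, 1) for g in green]

# Example: phase 2 sees a demand surge this cycle (camera-derived density proxies).
print(reallocate_green(demand=[0.35, 0.90, 0.40]))   # -> [23.3, 60.0, 26.7]
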
28 pages, 2192 KB  
Article
AptEVS: Adaptive Edge-and-Vehicle Scheduling for Hierarchical Federated Learning over Vehicular Networks
by Yu Tian, Nina Wang, Zongshuai Zhang, Wenhao Zou, Liangjie Zhao, Shiyao Liu and Lin Tian
Electronics 2026, 15(2), 479; https://doi.org/10.3390/electronics15020479 - 22 Jan 2026
Viewed by 35
Abstract
Hierarchical federated learning (HFL) has emerged as a promising paradigm for distributed machine learning over vehicular networks. Despite recent advances in vehicle selection and resource allocation, most existing schemes still adopt a fixed Edge-and-Vehicle Scheduling (EVS) configuration that keeps the number of participating edge nodes and vehicles per node constant across training rounds. However, given the diverse training tasks and dynamic vehicular environments, our experiments confirm that such static configurations struggle to efficiently meet the task-specific requirements across model accuracy, time delay, and energy consumption. To address this, we first formulate a unified, long-term training cost metric that balances these conflicting objectives. We then propose AptEVS, an adaptive scheduling framework based on deep reinforcement learning (DRL), designed to minimize this cost. The core of AptEVS is its phase-aware design, which adapts the scheduling strategy by first identifying the current training phase and then switching to specialized strategies accordingly. Extensive simulations demonstrate that AptEVS learns an effective scheduling policy online from scratch, consistently outperforming baselines and reducing the long-term training cost by up to 66.0%. Our findings demonstrate that phase-aware DRL is both feasible and highly effective for resource scheduling over complex vehicular networks. Full article
(This article belongs to the Special Issue Technology of Mobile Ad Hoc Networks)
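As a loose sketch of the unified long-term cost idea above, the function below combines accuracy shortfall, per-round delay, and energy into one normalized scalar; the weights, reference values, and round data are assumptions, and the paper's exact metric and DRL scheduler are not reproduced.

# Hedged sketch of a "unified training cost" of the kind the abstract describes.
def round_cost(accuracy, delay_s, energy_j,
               target_acc=0.95, delay_ref=30.0, energy_ref=500.0,
               w_acc=0.5, w_delay=0.3, w_energy=0.2):
    """Cost of one HFL round; lower is better. All reference values are assumed."""
    acc_gap = max(0.0, target_acc - accuracy) / target_acc
    return (w_acc * acc_gap
            + w_delay * delay_s / delay_ref
            + w_energy * energy_j / energy_ref)

# Long-term cost of a training run = sum (or discounted sum) over rounds.
rounds = [(0.62, 24.0, 410.0), (0.78, 28.0, 455.0), (0.91, 31.0, 520.0)]
total = sum(round_cost(a, d, e) for a, d, e in rounds)
print(f"long-term training cost: {total:.3f}")
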
26 pages, 942 KB  
Article
Institutional Quality, ESG Performance, and Aggressive Tax Planning in Developing Countries
by Marwan Mansour and Mohammed Alomair
Sustainability 2026, 18(2), 1126; https://doi.org/10.3390/su18021126 - 22 Jan 2026
Viewed by 97
Abstract
Aggressive corporate tax avoidance represents a significant fiscal and governance challenge in developing economies, where public revenues are critical for sustainable development and enforcement capacity is often uneven. This study examines whether environmental, social, and governance (ESG) performance constrains corporate tax avoidance and whether this relationship is conditioned by national institutional quality. Using a multi-country panel of 2464 publicly listed non-financial firms from 14 developing economies over the period 2015–2023, the analysis employs fixed-effects estimation, dynamic System GMM, and instrumental-variable (2SLS) techniques to address unobserved heterogeneity and endogeneity concerns. The results indicate that stronger ESG performance is associated with significantly lower levels of tax avoidance; however, this effect is highly contingent on institutional quality. ESG exerts a substantive disciplining role primarily in governance-strong environments characterized by effective regulation and credible enforcement. Heterogeneity analyses further reveal that the ESG–tax avoidance relationship is driven mainly by the governance and environmental pillars, is more pronounced among large firms, varies across regions, and strengthens over time as ESG frameworks mature. In contrast, the social ESG dimension and smaller firms exhibit weaker or insignificant effects, consistent with symbolic compliance in low-enforcement settings. By integrating stakeholder, legitimacy, agency, and institutional theories, this study advances a context-sensitive understanding of ESG effectiveness and helps reconcile mixed findings in the existing literature. The findings offer policy-relevant insights for regulators and tax authorities seeking to strengthen fiscal discipline and development financing in developing economies. Full article
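The entity fixed-effects ("within") estimator underlying one of the baseline specifications can be sketched by demeaning each variable within firm and running OLS on the demeaned panel; the synthetic panel and variable names below are placeholders, and the System GMM and 2SLS estimators are not shown.

# Sketch of the within (firm fixed-effects) estimator on a synthetic panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
firms, years = 200, 9
idx = pd.MultiIndex.from_product([range(firms), range(2015, 2015 + years)],
                                 names=["firm", "year"])
df = pd.DataFrame(index=idx)
firm_effect = np.repeat(rng.normal(0, 1, firms), years)   # unobserved firm heterogeneity
df["esg"] = rng.uniform(20, 90, len(df))
df["size"] = rng.normal(8, 1, len(df))
# Synthetic outcome with a true ESG slope of -0.01 (illustrative, not the paper's estimate).
df["tax_avoidance"] = (1.0 - 0.01 * df["esg"] + 0.05 * df["size"]
                       + firm_effect + rng.normal(0, 0.2, len(df)))

demeaned = df.groupby(level="firm").transform(lambda x: x - x.mean())  # within transformation
X = demeaned[["esg", "size"]].to_numpy()
y = demeaned["tax_avoidance"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["esg", "size"], beta.round(4))))           # within-estimator coefficients
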
22 pages, 7096 KB  
Article
An Improved ORB-KNN-Ratio Test Algorithm for Robust Underwater Image Stitching on Low-Cost Robotic Platforms
by Guanhua Yi, Tianxiang Zhang, Yunfei Chen and Dapeng Yu
J. Mar. Sci. Eng. 2026, 14(2), 218; https://doi.org/10.3390/jmse14020218 - 21 Jan 2026
Viewed by 77
Abstract
Underwater optical images often exhibit severe color distortion, weak texture, and uneven illumination due to light absorption and scattering in water. These issues result in unstable feature detection and inaccurate image registration. To address these challenges, this paper proposes an underwater image stitching method that integrates ORB (Oriented FAST and Rotated BRIEF) feature extraction with a fixed-ratio constraint matching strategy. First, lightweight color and contrast enhancement techniques are employed to restore color balance and improve local texture visibility. Then, ORB descriptors are extracted and matched via a KNN (K-Nearest Neighbors) nearest-neighbor search, and Lowe’s ratio test is applied to eliminate false matches caused by weak texture similarity. Finally, the geometric transformation between image frames is estimated by incorporating robust optimization, ensuring stable homography computation. Experimental results on real underwater datasets show that the proposed method significantly improves stitching continuity and structural consistency, achieving 40–120% improvements in SSIM (Structural Similarity Index) and PSNR (peak signal-to-noise ratio) over conventional Harris–ORB + KNN, SIFT (scale-invariant feature transform) + BF (brute force), SIFT + KNN, and AKAZE (accelerated KAZE) + BF methods while maintaining processing times within one second. These results indicate that the proposed method is well-suited for real-time underwater environment perception and panoramic mapping on low-cost, micro-sized underwater robotic platforms. Full article
(This article belongs to the Section Ocean Engineering)
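The matching chain named in the abstract (ORB keypoints, KNN descriptor matching, Lowe's ratio test, RANSAC homography) maps directly onto OpenCV; the sketch below assumes two grayscale frames on disk and a 0.75 ratio threshold, and omits the paper's underwater color restoration and blending stages.

# Compact sketch of ORB + KNN + Lowe ratio test + RANSAC homography with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)       # assumed input frames
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # light contrast enhancement
img1, img2 = clahe.apply(img1), clahe.apply(img2)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)                    # Hamming distance for ORB
knn_matches = matcher.knnMatch(des1, des2, k=2)

good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)      # robust homography estimate
print(f"{len(good)} ratio-test matches, {int(mask.sum())} RANSAC inliers")
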
34 pages, 3055 KB  
Article
The Impact of ESG Factors on Corporate Credit Risk: An Empirical Analysis of European Firms Using the Altman Z-Score
by Cinzia Baldan, Francesco Zen and Margherita Targhetta
Account. Audit. 2026, 2(1), 2; https://doi.org/10.3390/accountaudit2010002 - 21 Jan 2026
Viewed by 179
Abstract
Background: The increasing integration of Environmental, Social, and Governance (ESG) factors into financial decision-making has prompted debate over their impact on corporate credit risk. While many studies suggest that ESG performance may enhance firms’ resilience, empirical evidence remains mixed due to data inconsistency and methodological heterogeneity and differences in time horizons over which ESG effects materialise. Methods: The study investigates the relationship between ESG performance and credit risk using a panel of European firms from 2020 to 2024, a phase highly characterised by substantial macroeconomic shocks. The Altman Z-score serves as a proxy for default risk, while ESG data are sourced from Refinitiv Eikon. Four fixed-effects panel regressions are estimated: a baseline model using aggregate ESG scores, an extended model with financial controls, and disaggregated and sector-specific models. Results: The findings indicate that ESG scores—either aggregated or by pillar—show limited statistical significance in explaining variations in the Z-score. In contrast, financial variables such as solvency, liquidity, and cash flow ratios display strong, positive, and significant effects on credit stability. Some heterogeneous sectoral effects emerge: social factors are positive in technology, while governance has a negative impact in basic materials. Conclusions: ESG initiatives may not yield immediate improvements in default risk metrics, particularly over short and crisis-dominated periods, but could enhance financial resilience over time. Combining ESG information with traditional financial ratios remains essential; the results underscore the importance of consistent and high-quality ESG disclosure to reduce measurement error and enhance comparability across firms. Full article
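For reference, the textbook (1968) Altman Z-score used here as the credit-risk proxy is Z = 1.2·X1 + 1.4·X2 + 3.3·X3 + 0.6·X4 + 1.0·X5; the study may use a variant of these coefficients, and the input figures below are purely illustrative.

# Worked example of the classic Altman Z-score (public manufacturing formulation).
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_equity=800, total_liabilities=500, sales=1100, total_assets=1000)
zone = "safe" if z > 2.99 else "grey" if z >= 1.81 else "distress"
print(f"Z = {z:.2f} ({zone} zone)")   # Z = 2.92 (grey zone) for these figures
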
22 pages, 11111 KB  
Article
DeePC Sensitivity for Pressure Control with Pressure-Reducing Valves (PRVs) in Water Networks
by Jason Davda and Avi Ostfeld
Water 2026, 18(2), 253; https://doi.org/10.3390/w18020253 - 17 Jan 2026
Viewed by 200
Abstract
This study provides a practice-oriented sensitivity analysis of DeePC for pressure management in water distribution systems. Two public benchmark systems were used, Fossolo (simpler) and Modena (more complex). Each run fixed a monitored node and pressure reference, applied the same randomized identification phase followed by closed-loop control, and quantified performance by the mean absolute error (MAE) of the node pressure relative to the reference value. To better characterize closed-loop behavior beyond MAE, we additionally report (i) the maximum deviation from the reference over the control window and (ii) a valve actuation effort metric, normalized to enable fair comparison across different numbers of valves and, where relevant, different control update rates. Motivated by the need for practical guidance on how hydraulic boundary conditions and algorithmic choices shape DeePC performance in complex water networks, we examined four factors: (1) placement of an additional internal PRV, supplementing the reservoir-outlet PRVs; (2) the control time step (Δt); (3) a uniform reservoir-head offset (Δh); and (4) DeePC regularization weights (λg, λu, λy). Results show strong location sensitivity: in Fossolo, topologically closer placements tended to lower MAE, with exceptions. The baseline MAE with only the inlet PRV was 3.35 [m], where the baseline is defined as a DeePC run with no additions, no extra valve, and no changes to reservoir head, time step, or regularization weights. Several added-valve locations improved the MAE (i.e., reduced it) below this level, whereas poor choices increased the error up to ~8.5 [m]. In Modena, 54 candidate pipes were tested; the baseline MAE was 2.19 [m], and the best candidate (Pipe 312) achieved 2.02 [m], while pipes adjacent to the monitored node did not outperform the baseline. Decreasing Δt across nine tested values consistently reduced MAE, with an approximately linear trend over the tested range; the maximum deviation was unchanged (7.8 [m]) across all Δt cases, and actuation effort decreased with shorter steps after normalization. Changing reservoir head had a pronounced effect: positive offsets improved tracking toward a floor of ≈0.49 [m] around Δh ≈ +30 [m], whereas negative offsets (below the reference) degraded performance. Tuning of regularization weights produced a modest spread (≈0.1 [m]) relative to other factors, and the best tested combination (λy, λg, λu) = (10², 10⁻³, 10⁻²) yielded MAE ≈ 2.11 [m], while actuation effort was more sensitive to the regularization choice than MAE/max deviation. We conclude that baseline system calibration, especially reservoir heads, is essential before running DeePC to avoid biased or artificially bounded outcomes, and that for large systems an external optimization (e.g., a genetic-algorithm search) is advisable to identify beneficial PRV locations. Full article
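The three closed-loop metrics reported above (MAE, maximum deviation, normalized actuation effort) can be sketched directly; the pressure trace and valve settings below are synthetic stand-ins rather than outputs of an EPANET/DeePC run, and the normalization is one reasonable reading of the abstract's description.

# Sketch of the evaluation metrics on synthetic closed-loop data.
import numpy as np

rng = np.random.default_rng(5)
steps, n_valves = 288, 3                      # e.g. one day at 5-min control steps (assumed)
p_ref = 30.0                                  # pressure reference at the monitored node [m]
p = p_ref + rng.normal(0, 2.0, steps)         # monitored node pressure over the window [m]
u = 20 + np.cumsum(rng.normal(0, 0.3, (steps, n_valves)), axis=0)   # PRV settings [m]

mae = np.mean(np.abs(p - p_ref))
max_dev = np.max(np.abs(p - p_ref))
# Actuation effort: total absolute change of valve settings per valve per control step,
# so that runs with different numbers of valves or update rates remain comparable.
effort = np.sum(np.abs(np.diff(u, axis=0))) / (n_valves * (steps - 1))

print(f"MAE = {mae:.2f} m, max deviation = {max_dev:.2f} m, effort = {effort:.3f} m/step")
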
18 pages, 1521 KB  
Systematic Review
Neuroprotective Potential of SGLT2 Inhibitors in Animal Models of Alzheimer’s Disease and Type 2 Diabetes Mellitus: A Systematic Review
by Azim Haikal Md Roslan, Tengku Marsya Hadaina Tengku Muhazan Shah, Shamin Mohd Saffian, Lisha Jenny John, Muhammad Danial Che Ramli, Che Mohd Nasril Che Mohd Nassir, Mohd Kaisan Mahadi and Zaw Myo Hein
Pharmaceuticals 2026, 19(1), 166; https://doi.org/10.3390/ph19010166 - 16 Jan 2026
Viewed by 272
Abstract
Background: Alzheimer’s disease (AD) features progressive cognitive decline and amyloid-beta (Aβ) accumulation. Insulin resistance in type 2 diabetes mellitus (T2DM) is increasingly recognised as a mechanistic link between metabolic dysfunction and neurodegeneration. Although sodium–glucose cotransporter-2 inhibitors (SGLT2is) have established glycaemic and cardioprotective benefits, their neuroprotective role remains less well defined. Objectives: This systematic review examines animal studies on the neuroprotective effects of SGLT2i in T2DM and AD models. Methods: A literature search was conducted across the Web of Science, Scopus, and PubMed databases, covering January 2014 to November 2024. Heterogeneity was assessed with I2, and data were pooled using fixed-effects models, reported as standardised mean differences with 95% confidence intervals. We focus on spatial memory performance as measured by the Morris Water Maze (MWM) test, including escape latency and time spent in the target quadrant, as the primary endpoints. The secondary endpoints of Aβ accumulation, oxidative stress, and inflammatory markers were also analysed and summarised. Results: Twelve studies met the inclusion criteria for this review. A meta-analysis showed that SGLT2i treatment significantly improved spatial memory by reducing the escape latency in both T2DM and AD models. In addition, SGLT2i yielded a significant improvement in spatial memory, as indicated by an increased target quadrant time for both T2DM and AD. Furthermore, SGLT2i reduced Aβ accumulation in the hippocampus and cortex, which met the secondary endpoint; the treatment also lessened oxidative stress and inflammatory markers in animal brains. Conclusions: Our findings indicate that SGLT2is confer consistent neuroprotective benefits in experimental T2DM and AD models. Full article
(This article belongs to the Special Issue Novel Therapeutic Strategies for Alzheimer’s Disease Treatment)
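The fixed-effects, inverse-variance pooling and I² heterogeneity statistic named in the Methods can be sketched in a few lines; the standardised mean differences and variances below are invented example numbers, not the review's extracted data.

# Sketch of fixed-effect (inverse-variance) pooling of SMDs with Cochran's Q and I^2.
import numpy as np

smd = np.array([-1.2, -0.8, -1.5, -0.6, -1.0])    # per-study effect (e.g. escape latency), invented
var = np.array([0.20, 0.15, 0.30, 0.10, 0.25])    # per-study variance of the SMD, invented

w = 1.0 / var                                     # inverse-variance weights
pooled = np.sum(w * smd) / np.sum(w)              # fixed-effect pooled SMD
se = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

Q = np.sum(w * (smd - pooled) ** 2)               # Cochran's Q
df = len(smd) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0               # heterogeneity as a percentage

print(f"pooled SMD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), I^2 = {I2:.1f}%")
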