Search Results (55,409)

Search Parameters:
Keywords = captures

20 pages, 1596 KB  
Article
Amino Acid-Derived Metabolic Signature Across Stages of Systolic Dysfunction: Derivation and Internal Evaluation of the HASI-40 Index (Heart Failure Amino Acid-Derived Systolic Index)
by Beata Krasińska, Ievgen Spasenenko, Dagmara Pietkiewicz, Szymon Plewa, Krzysztof J. Filipiak, Katarzyna Pawlaczyk-Gabriel, Jarosław Bartkowski, Andrzej Tykarski, Zbigniew Krasiński, Jan Matysiak and Tomasz Urbanowicz
Int. J. Mol. Sci. 2026, 27(10), 4459; https://doi.org/10.3390/ijms27104459 - 15 May 2026
Abstract
Heart failure with reduced ejection fraction (HFrEF) is increasingly recognized as a systemic metabolic disorder. The aim of this study was to characterize amino acid-related metabolic differences between heart failure with moderately reduced ejection fraction (HFmrEF) (LVEF 40–49%) and HFrEF (LVEF < 40%) and to derive a biologically interpretable composite metabolomic index capable of discriminating between these two stages of systolic dysfunction. We conducted a cross-sectional metabolomic analysis of 42 patients stratified by left ventricular ejection fraction (LVEF < 40% vs. 40–49%). The reference group comprised patients with mildly reduced ejection fraction (LVEF 40–49%), without inclusion of individuals with preserved or normal cardiac function. Targeted amino acid profiling was performed using liquid chromatography-tandem mass spectrometry (LC–MS/MS). Metabolites were standardized and analyzed individually and in combination. A composite index (Heart Failure Amino Acid-Derived Systolic Index: HASI-40), integrating markers of proteolysis and metabolic resilience, was derived to distinguish patients with HFrEF from those with HFmrEF. Discrimination was assessed using receiver operator curve (ROC) analysis with internal validation and multivariable adjustment. Patients with LVEF < 40% exhibited a coordinated metabolic phenotype characterized by reduced methionine, sarcosine, serine, and taurine. While individual metabolites did not retain significance after multiple-testing correction, the composite HASI-40 index remained strongly associated with HFrEF (OR 5.56, 95% CI: 1.70–18.14; p = 0.004), although the wide confidence interval indicates limited precision due to sample size. The index demonstrated good discrimination with an area under the curve (AUC) of 0.862, which improved when combined with age (AUC 0.932). The index represents a standardized composite measure and does not define a diagnostic cutoff for individual patients. 
These findings suggest that HFmrEF and HFrEF exhibit partially distinct metabolic phenotypes despite overlapping clinical characteristics, and that HASI-40 captures metabolic differences between patients with HFmrEF (LVEF 40–49%) and those with HFrEF (LVEF < 40%), reflecting progression toward more advanced systolic dysfunction. However, given the absence of a control group with preserved ejection fraction, the small sample size, and the lack of external validation, the index should be considered exploratory and hypothesis-generating rather than clinically applicable.
27 pages, 7263 KB  
Article
LEViM-Net: A Lightweight EfficientViM Network for Earthquake Building Damage Assessment
by Qing Ma, Dongpu Wu, Yichen Zhang, Jiquan Zhang, Jinyuan Xu and Yechi Yao
Remote Sens. 2026, 18(10), 1592; https://doi.org/10.3390/rs18101592 - 15 May 2026
Abstract
Building damage and collapse are the main sources of serious casualties and financial losses during earthquakes, which are among the most destructive natural disasters that endanger human life and property. Therefore, quick and precise post-earthquake building damage assessment is essential for risk assessment and emergency action. Convolutional neural networks (CNNs) primarily concentrate on local features and frequently ignore global contextual information within and across buildings, despite the fact that deep learning-based techniques allow automated damage identification. Transformer-based approaches, on the other hand, are good at capturing global dependencies, but their large memory and processing costs restrict their usefulness. As a result, existing networks still struggle to achieve an effective balance between accuracy and efficiency. To address this issue, this study proposes a lightweight and efficient network for post-earthquake building damage assessment. Specifically, we develop a two-stage method based on EfficientViM with an encoder–decoder architecture. In the encoder, Mamba is introduced to extract multi-scale change features with long-range dependencies, leveraging the state space model to preserve global modeling capability while significantly reducing computational complexity. In the decoder, two lightweight modules are designed to further enhance discriminative capability and computational efficiency. The network finally outputs building localization and pixel-level building damage, respectively. Experiments were conducted on four earthquake events from the BRIGHT dataset using a three-for-training and one-for-testing cross-event rotation evaluation strategy. The results demonstrate that LEViM-Net requires only 30.94 M parameters and 27.10 G FLOPs. In addition, for the Türkiye earthquake event, the proposed method achieves an F1 score of 80.49%, an overall accuracy (OA) of 88.17%, and a mean intersection over union (mIoU) of 49.73%. 
The proposed model enables efficient remote-sensing-based mapping of macroscopic and image-visible building damage, providing timely support for early-stage emergency response.
(This article belongs to the Special Issue Advances in AI-Driven Remote Sensing for Geohazard Perception)
15 pages, 897 KB  
Article
Advanced Mathematical Platform for the Control and Manipulation of Magnetized Living Cells
by Vitaly Goranov, Tatiana Shelyakova, Jaroslav Koštál, Alexander Makhaniok, Gianluca Giavaresi and Valentin Alek Dediu
Bioengineering 2026, 13(5), 560; https://doi.org/10.3390/bioengineering13050560 - 15 May 2026
Abstract
Magnetizing living cells with superparamagnetic iron oxide nanoparticles (SPIONs) enables their remote manipulation using external magnetic field. This lays the foundation for magnetically assembling tissue precursors within cell-friendly, proliferation-permissive environments and holds considerable promise for biomedical applications, particularly in the development of complex single- and multicellular tissue constructs for bone and organ reconstruction. However, progress in this field is limited by the lack of robust mathematical tools for accurate control of ensembles of magnetic nano- and micro-objects. In practical printing scenarios, collective behavior and unavoidable statistical heterogeneity—such as variations in SPION size and shape or deviations in cell magnetization—render traditional equation-based modeling inadequate. We developed a hybrid modeling framework integrating conventional physics-based simulations with artificial intelligence-driven image analysis. Dynamic parameters were extracted from video recordings of magnetized cells moving within model microfluidic devices exposed to well-defined magnetic fields and gradients. The AI-based analysis enabled quantitative characterization of ensemble behavior under heterogeneous conditions. The proposed framework successfully captured the collective dynamics of magnetized cell ensembles and enabled accurate control of their spatial organization under external magnetic actuation. The integration of simulation and data-driven analysis provided robust parameter identification despite statistical heterogeneity within the system. This integrated modeling approach provides a practical and effective tool for controlling the three-dimensional magnetic assembly of living cells, with strong potential for applications in tissue engineering. Full article
19 pages, 732 KB  
Systematic Review
From the Digital Divide to Algorithmic Vulnerability: A Systematic Review of Social Stratification in the AI Era (2015–2025)
by Manuel José Mera Cedeño, Gertrudis Amarilis Laínez Quinde, Wilson Alexander Zambrano Vélez and César Ernesto Roldán Martínez
Soc. Sci. 2026, 15(5), 326; https://doi.org/10.3390/socsci15050326 - 15 May 2026
Abstract
The present study seeks to synthesize the scientific evidence from the last decade (2015–2025) regarding the transition from inequality in technological access toward social stratification mediated by automated decision-making systems. Following PRISMA 2020 guidelines and the SPIDER model, a corpus of 74 high-impact records from Scopus, Web of Science, ProQuest, and PsycINFO was examined. The results reveal an exponential growth in scientific production since 2018, marking a shift from infrastructure-based inequality toward a systemic stratification mediated by algorithmic opacity. Three critical sectors of exclusion are categorized: the socio-health nexus, the labor market, and the educational ecosystem. Methodologically, quantitative algorithmic auditing predominates (58%), although mixed sociotechnical approaches have increased by 25% since 2021 to capture experiences of intersectional vulnerability. The study concludes that AI acts as an active agent of social reproduction, necessitating a transition toward “Algorithmic Justice” and “Human-Centric Governance.” Finally, a “Reinstating AI” framework is proposed to democratize technological development and mitigate systemic biases, offering a roadmap for researchers and policymakers in the pursuit of technological sovereignty. Full article
27 pages, 4553 KB  
Article
Explicit Water Balance Constraints for Trustworthy Graph Neural Network Flood Forecasting
by Yuqi Chen, Ruixi Huang, Yue Tang, Hao Wang, Tong Zhou, Junlin Fan, Yin Long and Tehseen Zia
Appl. Sci. 2026, 16(10), 4963; https://doi.org/10.3390/app16104963 - 15 May 2026
Abstract
Although Graph Neural Networks (GNNs) are widely regarded as an ideal tool for capturing spatial dependencies in river basins, their effectiveness in hydrological forecasting is severely challenged by a topology paradox: under a purely data-driven paradigm, GNNs fail to spontaneously learn physical laws, generating predictions that lack physical interpretability and frequently violate mass conservation. To address this fundamental problem, this paper proposes a physics-informed graph learning framework integrated with an explicit, differentiable water balance constraint (WB-GNN). By reconstructing the continuity equation into a differentiable loss function, we directly embed physical conservation as a strong inductive bias into the neural network’s training objective. We comprehensively evaluated the model on two large-sample datasets (LamaH-CE and CAMELS) against state-of-the-art baselines, including EA-LSTM and unconstrained Pure-GNN. Quantitative results demonstrate that the proposed physical constraint successfully awakens the potential of river network topology. On the LamaH-CE dataset, WB-GNN achieved a Nash-Sutcliffe Efficiency (NSE) of 0.86 and a Root Mean Square Error (RMSE) of 9.2 m3/s, outperforming both the domain-specific EA-LSTM (NSE: 0.83) and the unconstrained Pure-GNN (NSE: 0.74). Crucially, the introduction of the differentiable constraint reduced the Physical Inconsistency Ratio (PIR) by an order of magnitude-from 39.8% in the unconstrained model to just 4.3%. Similar robust improvements were validated across the highly heterogeneous CAMELS dataset. These quantifiable results confirm that the proposed method not only achieves superior forecasting accuracy but also fundamentally guarantees physical trustworthiness, making it highly robust for critical decision-making in extreme flood events. Full article
47 pages, 1590 KB  
Article
A Hybrid PoS–PoW Blockchain Framework for Secure Cyber Threat Intelligence Sharing: Design, Implementation, and Evaluation
by Ahmed El-Kosairy and Heba Kamal Aslan
Big Data Cogn. Comput. 2026, 10(5), 158; https://doi.org/10.3390/bdcc10050158 - 15 May 2026
Abstract
Many blockchain-based cyber threat intelligence (CTI) sharing systems emphasize immutability and auditability, but often treat CTI submissions as ordinary blockchain transactions without explicitly separating content validation from publication anchoring. This paper presents CTIB, a proof-of-concept hybrid Proof-of-Stake (PoS) and Proof-of-Work (PoW) framework for CTI publication. CTIB uses a sequential workflow in which a PoS committee first evaluates CTI submissions, and an accepted feed hash is then anchored through a PoW step to provide verifiable temporal binding. The prototype is evaluated in a controlled local Hardhat environment; therefore, the results should be interpreted as prototype-level feasibility evidence rather than production-scale deployment results. CTI content is represented using STIX 2.1, canonicalized, and hashed using SHA-256; only integrity-critical evidence is stored on-chain, while full CTI content remains off-chain. Experimental results demonstrate prototype-level feasibility, with measured throughput, latency, and success rate metrics under different PoW difficulty profiles. Across ten independent local runs, CTIB achieved an average throughput between 141.13 and 166.14 feeds/min, average p50 latency between 326.18 and 403.09 ms, and average p95 latency between 553.22 and 700.82 ms under the tested difficulty profiles. Security analysis uses analytical modeling, committee capture probability, and Monte Carlo simulation to evaluate majority-attack feasibility under stated assumptions. The results indicate that sequential compromise of both validation and anchoring layers increases the cost of coordinated manipulation. Full article
38 pages, 624 KB  
Review
From Biosignals to Bedside: A Review of Real-Time Edge Machine Learning for Wearable Health Monitoring
by Mustapha Oloko-Oba, Ebenezer Esenogho and Kehinde Aruleba
Bioengineering 2026, 13(5), 559; https://doi.org/10.3390/bioengineering13050559 - 15 May 2026
Abstract
Wearable devices increasingly capture biosignals such as electrocardiograms, photoplethysmograms, inertial signals, and electrodermal activity during daily life, enabling earlier detection and continuous monitoring outside the clinic. Real-time edge machine learning can convert these streams into timely, privacy-preserving inference by placing computation on a wearable (device-only) or a paired phone, with intermittent cloud assist used selectively for dashboards, summarisation, and lifecycle management. Clinical adoption remains uneven because free-living data are noisy, labels are often delayed, and device ecosystems evolve over time. This narrative review organises the literature as an end-to-end deployment pathway: sensing and artefact management, streaming windowing and multimodal alignment, and model families suited to on-device inference. We compare classical feature-based pipelines with learned representations, including compact CNN/TCN and recurrent and efficient attention-based models, and discuss when self-supervised pretraining and distillation are most useful in low-label settings. We then synthesise deployment engineering levers (quantisation, pruning, and distillation) and benchmarking requirements, emphasising runtime constraints that determine feasibility: latency per update, peak RAM, energy per inference, duty cycle, and thermal behaviour. Applications are grouped across cardiovascular monitoring, blood pressure and haemodynamics, sleep and respiration, and movement and stress, with explicit attention to false-alert burden, adherence, and workflow integration. To support translation, we provide a validation ladder and a reliability toolkit covering calibration, uncertainty-aware thresholds and deferral, drift monitoring triggers, and safe update governance. 
The novelty of this review is a deployment-oriented synthesis that ties modelling choices to edge tiers and resource budgets and provides reusable reporting templates, including an edge-cost card and comparative tables spanning modalities, models, deployment levers, applications, and reliability requirements.
14 pages, 16317 KB  
Article
Cross-Purification Mask Network: A Mask Refinement Method for Single-Channel Speech Separation
by Fuwen Zhu, Kaihao Yao and Keping Wang
Mathematics 2026, 14(10), 1709; https://doi.org/10.3390/math14101709 - 15 May 2026
Abstract
Accurate target speech mask estimation is the key to single-channel speech separation. Masks generated by conventional mask networks are easily corrupted by interfering speech and background noise, which degrades separation performance. To solve this problem, this paper proposes a Cross-Purification Mask Network (CPMN), which consists of three core modules: the Dynamic Context-Aware Mechanism (DCAM), Feature Cross-Complementation Mechanism (FCCM), and Adaptive Purification Mask Mechanism (APMM). The DCAM aggregates dynamic sliding window and long-term temporal features to capture long-range temporal dependencies of masks and enhance the localization accuracy of target speech. The FCCM fuses weighted mask features of interfering speakers to dynamically supplement missing information in target speech masks. The APMM combines adaptive filters and residual networks to output high-precision refined masks. The CPMN is embedded into three mainstream speech separation frameworks including Conv-TasNet, DPTNet, and TDANet, and extensive experiments are conducted on Libri2Mix, WHAM!, and WSJ0-2Mix datasets. The results show that the CPMN brings stable performance gains. After integration, TDANet achieves SI-SNRi of 17.4 dB (+0.5 dB) on Libri2Mix and 15.2 dB (+0.4 dB) on WHAM!. Meanwhile, Conv-TasNet and DPTNet obtain SI-SNR improvements of 0.3 dB (15.6 dB) and 0.4 dB (20.8 dB) on WSJ0-2Mix, respectively. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control, 4th Edition)
24 pages, 5438 KB  
Article
An Improved DeepLabV3+-Based Method for Crop Row Segmentation and Navigation Line Extraction in Agricultural Fields
by Letian Wu, Yongzhi Cui, Huifeng Shi, Xiaoli Sun, Jiayan Yang, Xinwei Cao, Ping Zou and Ya Liu
Sensors 2026, 26(10), 3142; https://doi.org/10.3390/s26103142 - 15 May 2026
Abstract
Accurate crop row detection is identified as a critical prerequisite for autonomous agricultural navigation, yet it remains challenging in complex field environments. To achieve a balance between segmentation accuracy, robustness, and real-time performance, an improved crop row segmentation and navigation method based on the DeepLabV3+ framework was developed. MobileNetV2 was adopted as the backbone to minimize computational costs, while feature representation was enhanced through integrated attention mechanisms and multi-scale fusion. Specifically, split-attention convolution was integrated into the backbone, a DenseASPP + SP module was employed for multi-scale contextual capture, and a Convolutional Block Attention Module (CBAM) was added to refine feature responses. Experimental results demonstrated that the proposed method outperformed mainstream models, achieving a mean Intersection over Union (mIoU) of 93.42% and an f1-score of 96.8%. The model maintained a lightweight architecture with 8.35 M parameters and a real-time speed of 32 FPS. Furthermore, crop row anchor points were extracted and processed via DBSCAN clustering and RANSAC fitting to generate high-precision navigation lines. Validation showed that the middle crop row yielded the highest fitting accuracy with minimal angular and lateral errors. This study provides an efficient visual perception solution for intelligent field operations. Full article
(This article belongs to the Section Smart Agriculture)
27 pages, 6347 KB  
Article
Uncertainty-Calibrated Safety Gating for Vision–Language–Action Manipulation Under Domain Shift: Reliability Gains and Intervention–Efficiency Trade-Offs
by Atef M. Ghaleb, Ali S. Allahloh, Sobhi Mejjaouli, Mohammed A. H. Ali and Adel Al-Shayea
Sensors 2026, 26(10), 3140; https://doi.org/10.3390/s26103140 - 15 May 2026
Abstract
Vision–Language–Action (VLA) policies promise flexible long-horizon manipulation, but deployment under domain shift requires both reliable uncertainty estimates and a workable runtime-assurance policy. We study a model-agnostic uncertainty-calibrated safety-gating wrapper that estimates online failure risk and routes control among policy execution, pause-and-reobserve, and a fallback planner. Using a cleaned and consistently aggregated benchmark pipeline, we evaluate two long-horizon manipulation tasks in NVIDIA Isaac Sim 5.0 under lighting, texture, occlusion, sensor, and combined shifts. Relative to an ungated VLA baseline, calibrated gating improves mean shifted success from 57.5% to 77.2% and reduces aggregate expected calibration error from 0.303 to 0.100. The largest success gains occur under occlusion and combined shift, including improvements from 48.3% to 85.2% on the drawer task and from 59.4% to 87.8% on clutter sort. The results also expose a systems trade-off: an aggressive uncalibrated threshold baseline attains stronger raw success and collision metrics, but requires nearly twice as many interventions per shifted episode (21.6 vs. 11.5). The main contribution is, therefore, an empirical characterization of the reliability–intervention trade-off created by calibrated supervision, not a claim that the calibrated supervisor is universally the best terminal controller. We frame calibrated gating as a better-calibrated, lower-intervention supervisor that materially improves robustness relative to an ungated VLA while revealing the open problem of mapping calibrated risk into efficient intervention policies. Additional threshold-sensitivity, signal-diagnostic, overhead, and residual-failure analyses show that the selected operating point is meaningful but not universal: the calibrated risk threshold captures most shifted failures in retrospective logs, yet residual contacts still arise during pause and fallback states. 
These findings provide controlled simulation evidence for trustworthy VLA supervision under distribution shift and clarify the reliability–intervention frontier that future embodied-control systems must navigate.
29 pages, 7615 KB  
Article
Analyzing Economic and Social Inequalities in Housing: A Visual Storytelling Case Study in Portugal
by Afonso Crespo, José Barateiro and Elsa Cardoso
World 2026, 7(5), 84; https://doi.org/10.3390/world7050084 - 15 May 2026
Abstract
Housing inequalities remain a major challenge for contemporary urban governance, as they combine economic, social, spatial, and demographic dynamics that are difficult to capture through single indicators. This paper develops a data-driven assessment of housing inequalities in Portugal between 2015 and 2025, drawing on official national and European statistics and applying a Business Intelligence (BI) and urban analytics framework oriented towards policy monitoring. Official data from Statistics Portugal and Eurostat are integrated through an analytical pipeline including automated extraction via public APIs, data enrichment, and visual analytics. The workflow follows a CRISP-DM-inspired structure, creating a set of normalized indicators to capture different dimensions of housing conditions. The results point to a structurally polarized housing market. Housing valuations increased across all regions, but at uneven rates, reinforcing territorial disparities rather than convergence. Metropolitan and tourism-oriented regions experienced faster appreciation and indirect effects, while year-over-year growth in completed dwellings slowed after 2021–2022, indicating an uneven supply response. Beyond its empirical findings, the primary contribution of this study lies in demonstrating how BI and data science methodologies can be operationalized to monitor housing inequalities using official statistics. The proposed framework is replicable and can be adapted to other territorial and policy contexts. Full article
(This article belongs to the Section Health, Population, and Crisis Systems)
15 pages, 1074 KB  
Article
Risk Factors and Nonlinear Risk Patterns of Prolonged Air Leak After Robot-Assisted Lung Resection for Lung Cancer: A Retrospective Cohort Study
by Hao Xu, Han Zhang and Linyou Zhang
Cancers 2026, 18(10), 1612; https://doi.org/10.3390/cancers18101612 - 15 May 2026
Abstract
Background/Objectives: Prolonged air leak (PAL) remains a common complication after lung resection and may delay postoperative recovery and subsequent treatment. This study aimed to identify clinical factors associated with PAL after robot-assisted thoracic surgery (RATS) and to explore potential nonlinear relationships using restricted cubic spline (RCS) modeling. Methods: A retrospective cohort of 1185 patients who underwent RATS for primary lung cancer was analyzed. Multivariable Firth logistic regression was used to identify independent predictors of PAL (≥5 days). A nomogram was constructed based on the final model and internally validated using 1000 bootstrap resamples; its clinical utility was assessed using decision curve analysis. RCS analysis was performed to evaluate potential nonlinear associations. Results: A total of 98 patients (8.3%) developed PAL. Male sex was independently associated with increased PAL risk (OR 3.29, p < 0.001), whereas higher FEV1 was associated with reduced risk (OR 0.50 per 1-L increase, p < 0.001). BMI showed a modest protective effect (OR 0.91, p = 0.01). Age was not significant in the linear model (p = 0.86), but RCS analysis demonstrated a significant nonlinear association, with increased risk at older ages. The nomogram demonstrated moderate discrimination (apparent C-statistic 0.670, optimism-corrected 0.644) and good calibration, with decision curve analysis confirming net clinical benefit over treat-all and treat-none strategies. Conclusions: Male sex and impaired pulmonary function are key predictors of PAL after RATS. Nonlinear modeling revealed complex age-related risk patterns not captured by conventional approaches. The proposed nomogram may assist in preoperative risk stratification and perioperative decision-making. Full article
(This article belongs to the Special Issue Advances in Minimally Invasive Surgery in Thoracic Oncology)
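The restricted cubic spline modeling named in the abstract can be sketched concretely. Below is a minimal NumPy construction of an RCS basis in Harrell's truncated-power form; the knot placements and the `rcs_basis` helper are illustrative assumptions, not the study's actual specification.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis in Harrell's truncated-power form.

    Returns an (n, k-1) design matrix: the raw linear term plus k-2
    nonlinear terms constrained to be linear beyond the boundary knots
    (the property that prevents wild behavior in the tails).
    """
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    k = len(t)
    scale = (t[-1] - t[0]) ** 2  # keeps spline columns on a comparable scale

    def pos3(u):  # truncated cube: (u)_+^3
        return np.maximum(u, 0.0) ** 3

    cols = [x]
    for j in range(k - 2):
        term = (pos3(x - t[j])
                - pos3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
                + pos3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(term / scale)
    return np.column_stack(cols)

# Example: a 4-knot basis for age; the columns can be fed to any
# logistic regression to obtain a nonlinear but tail-linear age effect.
age = np.linspace(30, 90, 200)
B = rcs_basis(age, knots=[45, 58, 68, 80])
```

The nonlinear columns vanish below the first knot and become exactly linear above the last one, which is what lets an RCS term expose U-shaped or threshold-like age effects without extreme extrapolation.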
15 pages, 2566 KB  
Article
The Shifting Core: Antigenic Variability of the Influenza Virus Nucleoprotein Despite Evolutionary Conservation
by Alexandra Rak, Veronika Muzurova, Svetlana Donina, Polina Prokopenko, Irina Isakova-Sivak and Larisa Rudenko
Antibodies 2026, 15(3), 41; https://doi.org/10.3390/antib15030041 - 15 May 2026
Abstract
Background. The highly mutable influenza virus causes severe annual infections worldwide and results in substantial socioeconomic losses. The spread of infection could be effectively controlled by cross-protective vaccines and universal diagnostic test systems based on the nucleoprotein (NP), one of the most conserved viral antigens. However, NP also undergoes slow evolutionary change, and little is known about how these mutations influence its antigenicity and immunogenicity. Methods. We expressed the full-length recombinant 6xHis-tagged NPs of ten evolutionarily distant influenza A strains of different subtypes in E. coli BL21(DE3) cells and purified the proteins by immobilized metal affinity chromatography. The antigens were identified by mass spectrometry and serological methods. NPs served as immunogens for three immunizations of BALB/c mice (15 µg/animal at 14-day intervals) and as capture antigens in ELISA at 2 µg/mL, in order to study the effect of adaptive mutations on the antigenic and immunogenic properties of NP. Results. Immunization with the different NPs induced markedly cross-reactive anti-NP antibodies in mice. At the same time, we observed differences in the humoral immunogenicity of NP that track the accumulation of evolutionarily driven NP mutations. In general, antibody affinity to heterologous NPs was reduced, indicating differences in the specificity of anti-NP immunoglobulins, which may be caused by evolutionarily determined variability of immunogenic epitopes leading to the emergence of escape mutations. Conclusions. Overall, our results reflect the slowly evolving nature of the NP antigen, which shapes the specificity spectrum of anti-NP antibodies and should be considered a limitation for the development of NP-based cross-protective vaccines and test systems.
(This article belongs to the Section Humoral Immunity)
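ELISA comparisons like those described above are commonly quantified by fitting a four-parameter logistic (4PL) curve to optical density across a dilution series. The sketch below uses SciPy on synthetic data; the 4PL model is standard ELISA practice, but the `four_pl` helper and all numeric values are illustrative assumptions, not the paper's assay data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic: OD plateaus at `top` (strong binding at
    low dilution) and `bottom` (background), with midpoint `ec50` and
    slope `hill`."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Synthetic optical-density readings over a serial dilution
# (illustrative numbers only).
dilution = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
od = four_pl(dilution, 0.05, 2.0, 5.0, 1.2)

# Bounded fit recovers the curve parameters from the readings.
params, _ = curve_fit(
    four_pl, dilution, od,
    p0=[0.0, 2.5, 1.0, 1.0],
    bounds=([0.0, 0.0, 1e-3, 0.1], [1.0, 5.0, 100.0, 5.0]),
)
```

The fitted `ec50` is the usual single-number summary for comparing antibody binding to homologous versus heterologous antigens.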
28 pages, 2981 KB  
Article
Local Extrema Adaptive Pyramid Decomposition for Optical and SAR Image Fusion
by Zhiyang Huang, Qianwen Xiao and Qiao Liu
Electronics 2026, 15(10), 2129; https://doi.org/10.3390/electronics15102129 - 15 May 2026
Abstract
Optical and Synthetic Aperture Radar (SAR) sensors capture complementary and consistent information, and their fusion enhances remote sensing image quality. Existing pyramid decomposition-based methods suffer from insufficient texture–edge discrimination, and the manual setting of parameters during pyramid decomposition introduces uncertainty into the fusion results. To address these problems, we propose an optical and SAR image fusion framework based on local extrema adaptive pyramid decomposition (LEAPFusion), which enhances edge preservation and improves parameter adaptability. Specifically, leveraging the edge-preserving properties of local extrema, we introduce them into the image pyramid decomposition framework to construct complementary local extrema and Laplacian pyramids. Then, we introduce an explicit parameter adaptation strategy in which the decomposition levels and local extrema kernel sizes are automatically determined from the image size and pyramid scale, enabling consistent multi-scale representation and reducing parameter sensitivity compared to empirically tuned settings. Finally, exploiting the complementary properties of the two pyramids, we implement a multi-type fusion strategy: weighted averaging for low-frequency components and a parameter-adaptive pulse-coupled neural network (PAPCNN) for high-frequency details. Our decomposition framework seamlessly integrates three representative edge-preserving filters—a median filter, a guided filter, and a rolling guidance filter—demonstrating strong generalization across filtering paradigms. Extensive experiments on two benchmark datasets demonstrate that our method outperforms seven state-of-the-art algorithms, achieving the best results across diverse scenes with improvements of up to 13.38% in SF and 18.90% in SCD over the second-best methods.
(This article belongs to the Section Computer Science & Engineering)
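As a rough illustration of the pyramid machinery the abstract describes, the sketch below builds a minimal Laplacian pyramid with exact reconstruction, picks the decomposition depth from image size in the spirit of the parameter-adaptation strategy (the exact rule is an assumption), and fuses detail layers by max-absolute selection as a simple stand-in for the PAPCNN step.

```python
import numpy as np

def num_levels(h, w, min_size=16):
    """Size-driven choice of decomposition depth: a stand-in for the
    paper's parameter-adaptation rule (the formula is assumed)."""
    return max(1, int(np.floor(np.log2(min(h, w) / min_size))))

def down(img):  # 2x2 box average
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):  # nearest-neighbor expansion
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def build_laplacian(img, levels):
    """Detail layers plus a low-frequency residual; reconstruction is
    exact because each detail stores what up(down(.)) discards."""
    pyr, cur = [], img
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = up(cur) + detail
    return cur

def fuse(pyr_a, pyr_b):
    """Weighted average for the low-frequency base; max-absolute
    selection for detail layers (a simple proxy for PAPCNN)."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))
    return fused
```

Because the detail layers store exactly what downsampling discards, `reconstruct(build_laplacian(img, L))` returns the input bit-for-bit, which is the property that makes per-layer fusion rules safe to apply.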
21 pages, 1998 KB  
Article
Consistency-Regularized Hybrid Deep Learning with Entropy-Weighted Attention and Branch Dropout for Intrusion Detection in IoT Networks
by El Hariri Ayyoub, Mouiti Mohammed and Lazaar Mohamed
Future Internet 2026, 18(5), 262; https://doi.org/10.3390/fi18050262 - 15 May 2026
Abstract
Securing IoT networks presents fundamental challenges rooted in hardware constraints: firmware is often non-upgradeable, and every security boundary is fixed at manufacture. Machine learning-based intrusion detection offers a scalable response, yet nearly all published systems assume clean training data and clean inference conditions. Production IoT environments satisfy neither assumption: sensors degrade, packets drop, and adversaries deliberately corrupt telemetry streams to evade detection. The framework described here is built around that reality and is distinguished from prior work by four design decisions. First, three encoding branches (a residual DNN, a 1D-CNN, and a BiLSTM) are run in parallel and fused by concatenation, each capturing structural patterns in tabular traffic data that the others miss. Second, a dual-view consistency loss trains the model under simultaneous feature masking and Gaussian noise, penalizing prediction divergence between two independently corrupted views of the same sample. Third, we introduce entropy-weighted attention: rather than using fixed learned weights, per-feature importance is adjusted dynamically from information entropy measured across training batches, giving higher-entropy features stronger influence because they carry more discriminative variation. Fourth, branch-dropout regularization randomly silences entire branches during training, forcing each to develop independently useful representations instead of co-adapting. Class imbalance is handled through severity-aware loss weighting, which scales contributions by the operational cost of missing each attack category rather than purely by inverse frequency. On UNSW-NB15, the full model achieves 99.99% accuracy, 100% precision, 99.97% recall, and a false-negative rate of 2.65 × 10⁻⁴, the lowest across all compared architectures.
(This article belongs to the Topic Applications of IoT in Multidisciplinary Areas)
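The entropy-weighted attention and branch-dropout ideas can be sketched in NumPy. The histogram binning, normalization, and rescaling choices below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def entropy_weights(batch, bins=16, eps=1e-12):
    """Per-feature weights from Shannon entropy measured over a batch
    (histogram estimator assumed). Higher-entropy features get larger
    weights; weights are rescaled to average 1 across features."""
    n, d = batch.shape
    ent = np.empty(d)
    for j in range(d):
        hist, _ = np.histogram(batch[:, j], bins=bins)
        p = hist / n
        p = p[p > 0]
        ent[j] = -(p * np.log(p)).sum()
    return ent / (ent.sum() + eps) * d

def branch_dropout(branches, rng, p=0.2):
    """Silence whole branch outputs at random during training so each
    branch learns independently useful features; inverted-dropout
    rescaling keeps the expected fused magnitude stable."""
    keep = rng.uniform(size=len(branches)) >= p
    if not keep.any():
        keep[rng.integers(len(branches))] = True  # never drop everything
    scale = len(branches) / keep.sum()
    return [b * (scale if k else 0.0) for b, k in zip(branches, keep)]
```

A near-constant feature has near-zero entropy and is down-weighted toward zero, while a feature spread across many bins receives proportionally more influence, which matches the intuition stated in the abstract.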