Search Results (3,046)

Search Parameters:
Keywords = mapping methods and tools

27 pages, 2933 KB  
Article
The iPSM-SD Framework: Enhancing Predictive Soil Mapping for Precision Agriculture Through Spatial Proximity Integration
by Peng-Tao Guo, Wen-Tao Li, Mao-Fen Li, Pei-Sheng Yan, Yan Liu and Ju Zhao
Agronomy 2026, 16(2), 231; https://doi.org/10.3390/agronomy16020231 (registering DOI) - 18 Jan 2026
Abstract
A key challenge in precision agriculture is acquiring reliable spatial soil information under varying sampling densities, from sparse surveys to intensive monitoring. The individual predictive soil mapping (iPSM) method performs well in data-scarce conditions but neglects spatial proximity, limiting its predictive accuracy where spatial autocorrelation exists. To overcome this, we developed an enhanced framework, iPSM-Spatial Distance (iPSM-SD), which systematically integrates spatial proximity through multiplicative (MUL) and additive (ADD) strategies. The framework was validated using two contrasting cases: sparse soil organic carbon density data from Yunnan Province (n = 118) and dense soil organic matter data from Bayi Farm (n = 2511). Results show that the additive model (iPSM-ADD) significantly outperformed the original iPSM and benchmark models, including random forest, regression kriging, geographically weighted regression, and multiple linear regression, under sufficient sampling, achieving an R2 of 0.86 and reducing RMSE by 46.6% at Bayi Farm. It also maintained robust accuracy under sparse sampling conditions. The iPSM-SD framework thus provides a unified and adaptive tool for digital soil mapping across a wide range of data availability, supporting scalable soil management decisions from regional assessment to field-scale variable-rate applications in precision agriculture. Full article
(This article belongs to the Section Precision and Digital Agriculture)
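To make the weighting idea in the abstract above concrete, the sketch below shows one way an additive (ADD-style) combination of environmental similarity and spatial proximity could weight sample values when predicting a soil property at an unsampled location. The inverse-distance proximity function, the mixing parameter alpha, and all numbers are illustrative assumptions, not the cited paper's exact formulation or data.

```python
import numpy as np

def idw_proximity(dist, power=2.0, eps=1e-9):
    """Inverse-distance proximity score scaled to (0, 1]; an illustrative choice."""
    w = 1.0 / (dist + eps) ** power
    return w / w.max()

def predict_additive(env_sim, dist, y, alpha=0.5):
    """Hypothetical ADD-style prediction at one unsampled location:
    mix environmental similarity and spatial proximity additively,
    then take the weighted mean of the observed sample values.

    env_sim : (n,) similarity of each sample to the target location, in [0, 1]
    dist    : (n,) spatial distance of each sample to the target location
    y       : (n,) observed soil property (e.g. SOC density) at the samples
    alpha   : assumed mixing weight between similarity and proximity
    """
    weights = alpha * env_sim + (1.0 - alpha) * idw_proximity(dist)
    return float(np.sum(weights * y) / np.sum(weights))

# Toy example: three samples around one prediction point (made-up values)
env_sim = np.array([0.9, 0.4, 0.7])
dist = np.array([120.0, 35.0, 480.0])   # metres
y = np.array([2.1, 1.6, 2.8])
print(predict_additive(env_sim, dist, y))
```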
21 pages, 3501 KB  
Article
Subsurface Fracture Mapping in Adhesive Interfaces Using Terahertz Spectroscopy
by Mahavir Singh, Sushrut Karmarkar, Marco Herbsommer, Seongmin Yoon and Vikas Tomar
Materials 2026, 19(2), 388; https://doi.org/10.3390/ma19020388 (registering DOI) - 18 Jan 2026
Abstract
Adhesive fracture in layered structures is governed by subsurface crack evolution that cannot be accessed using surface-based diagnostics. Methods such as digital image correlation and optical spectroscopy measure surface deformation but implicitly assume a straight and uniform crack front, an assumption that becomes invalid for interfacial fracture with wide crack openings and asymmetric propagation. In this work, terahertz time-domain spectroscopy (THz-TDS) is combined with double-cantilever beam testing to directly map subsurface crack-front geometry in opaque adhesive joints. A strontium titanate-doped epoxy is used to enhance dielectric contrast. Multilayer refractive index extraction, pulse deconvolution, and diffusion-based image enhancement are employed to separate overlapping terahertz echoes and reconstruct two-dimensional delay maps of interfacial separation. The measured crack geometry is coupled with load–displacement data and augmented beam theory to compute spatially averaged stresses and energy release rates. The measurements resolve crack openings down to approximately 100 μm and reveal pronounced width-wise non-uniform crack advance and crack-front curvature during stable growth. These observations demonstrate that surface-based crack-length measurements can either underpredict or overpredict fracture toughness depending on the measurement location. Fracture toughness values derived from width-averaged subsurface crack fronts agree with J-integral estimates obtained from surface digital image correlation. Signal-to-noise limitations near the crack tip define the primary resolution limit. The results establish THz-TDS as a quantitative tool for subsurface fracture mechanics and provide a framework for physically representative toughness measurements in layered and bonded structures. Full article
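As a rough back-of-the-envelope check on the ~100 μm resolution figure quoted above, the snippet below converts a terahertz echo delay into a gap thickness using the round-trip relation d = c·Δt/(2n). This is a simplified single-gap relation for illustration, not the paper's multilayer refractive-index extraction or deconvolution pipeline.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gap_from_delay(delta_t_ps, n_gap=1.0):
    """Thickness (micrometres) of a gap producing a given round-trip terahertz
    echo delay (picoseconds); n_gap is the refractive index of the gap medium
    (air ~ 1). Simplified single-gap relation, for illustration only."""
    delta_t_s = delta_t_ps * 1e-12
    thickness_m = C * delta_t_s / (2.0 * n_gap)
    return thickness_m * 1e6

# A delay of ~0.67 ps corresponds to roughly the 100 um opening quoted above.
print(f"{gap_from_delay(0.67):.1f} um")
```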
32 pages, 7558 KB  
Article
Research Progress and Frontier Trends in Generative AI in Architectural Design
by Yingli Yang, Yanxi Li, Xuefei Bai, Wei Zhang and Siyu Chen
Buildings 2026, 16(2), 388; https://doi.org/10.3390/buildings16020388 (registering DOI) - 17 Jan 2026
Abstract
In recent years, with the rapid advancement of science and technology, generative artificial intelligence has increasingly entered the public eye. Primarily through intelligent algorithms that simulate human logic and integrate vast amounts of network data, it provides designers with solutions that transcend traditional thinking, enhancing both design efficiency and quality. Compared to traditional design methods reliant on human experience, generative design possesses robust data processing capabilities and the ability to refine design proposals, significantly reducing preliminary design time. This study employs the CiteSpace visualization tool to systematically organize and conduct knowledge map analysis of research literature related to generative AI in architectural design within the Web of Science database from 2005 to 2025. Findings reveal the following: (1) International research exhibits a trend toward interdisciplinary convergence. In recent years, research in this field has grown rapidly across nations, with continuously increasing academic influence; (2) Research primarily focuses on technological applications within architectural design, aiming to drive innovation and development by providing superior, more efficient technical support; (3) Generative AI in architectural design has emerged as a prominent international research focus, reflecting a shift from isolated design to industry-wide integration; (4) Generative AI has become a core global architectural design topic, with future research advancing toward full-process intelligent collaboration. High-quality knowledge graphs tailored for the architecture industry should be constructed to overcome data silos. Concurrently, a multidimensional evaluation system for generative quality must be established to deepen the symbiotic design paradigm of human–machine collaboration. This significantly enhances efficiency while reducing the iterative nature of traditional methods. This study aims to provide empirical support for theoretical and practical advancements, offering crucial references for practitioners to identify business opportunities and policymakers to optimize relevant strategies. Full article
31 pages, 3774 KB  
Article
Enhancing Wind Farm Siting with the Combined Use of Multicriteria Decision-Making Methods
by Dimitra Triantafyllidou and Dimitra G. Vagiona
Wind 2026, 6(1), 4; https://doi.org/10.3390/wind6010004 - 16 Jan 2026
Abstract
The purpose of this study is to determine the optimal location for siting an onshore wind farm on the island of Skyros, thereby maximizing performance and minimizing the project’s environmental impacts. Seven evaluation criteria are defined across various sectors, including environmental and economic sectors, and six criteria weighting methods are applied in combination with four multicriteria decision-making (MCDM) ranking methods for suitable areas, resulting in twenty-four ranking models. The alternatives considered in the analysis were defined through the application of constraints imposed by the Specific Framework for Spatial Planning and Sustainable Development for Renewable Energy Sources (SFSPSD RES), complemented by exclusion criteria documented in the international literature, as well as a minimum area requirement ensuring the feasibility of installing at least four wind turbines within the study area. The correlations between their results are then assessed using the Spearman coefficient. Geographic information systems (GISs) are utilized as a mapping tool. Through the application of the methodology, it emerges that area A9, located in the central to northern part of Skyros, is consistently assessed as the most suitable site for the installation of a wind farm based on nine models combining criteria weighting and MCDM methods, which should be prioritized as an option for early-stage wind farm siting planning. The results demonstrate an absolute correlation among the subjective weighting methods, whereas the objective methods do not appear to be significantly correlated with each other or with the subjective methods. The ranking methods with the highest correlation are PROMETHEE II and ELECTRE III, while those with the lowest are TOPSIS and VIKOR. Additionally, the hierarchy shows consistency across results using weights from AHP, BWM, ROC, and SIMOS. After applying multiple methods to investigate correlations and mitigate their disadvantages, it is concluded that when experts in the field are involved, it is preferable to incorporate subjective multicriteria analysis methods into decision-making problems. Finally, it is recommended to use more than one MCDM method in order to reach sound decisions. Full article
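The sketch below shows how the correlation between two such MCDM rankings can be assessed with the Spearman coefficient, as the study describes; the candidate-area labels and rank values are invented for illustration and do not come from the paper.

```python
from scipy.stats import spearmanr

# Hypothetical rankings (1 = most suitable) of the same candidate areas
# produced by two MCDM methods; values are illustrative only.
sites = ["A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8", "A9"]
rank_promethee_ii = [4, 6, 2, 7, 9, 3, 8, 5, 1]
rank_electre_iii  = [4, 5, 2, 7, 9, 3, 8, 6, 1]

rho, p_value = spearmanr(rank_promethee_ii, rank_electre_iii)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```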
19 pages, 9385 KB  
Article
YOLOv11-MDD: YOLOv11 in an Encoder–Decoder Architecture for Multi-Label Post-Wildfire Damage Detection—A Case Study of the 2023 US and Canada Wildfires
by Masoomeh Gomroki, Negar Zahedi, Majid Jahangiri, Bahareh Kalantar and Husam Al-Najjar
Remote Sens. 2026, 18(2), 280; https://doi.org/10.3390/rs18020280 - 15 Jan 2026
Abstract
Natural disasters occur worldwide and cause significant financial and human losses. Wildfires are among the most important natural disasters, occurring more frequently in recent years due to global warming. Fast and accurate post-disaster damage detection could play an essential role in swift rescue planning and operations. Remote sensing (RS) data is an important source for damage detection and tracking. Deep learning (DL) methods, as efficient tools, can extract valuable information from RS data to generate an accurate damage map for future operations. The present study proposes an encoder–decoder architecture composed of pre-trained YOLOv11 blocks as the encoder path and Modified UNet (MUNet) blocks as the decoder path. The proposed network includes three main steps: (1) pre-processing, (2) network training, and (3) prediction of the multi-label damage map and accuracy evaluation. To evaluate the network’s performance, the US and Canada datasets were considered. The datasets are satellite images of the 2023 wildfires in the US and Canada. The proposed method reaches Overall Accuracy (OA) values of 97.36 and 97.47 and Kappa Coefficient (KC) values of 0.96 and 0.87 for the US and Canada 2023 wildfire datasets, respectively. Given the high OA and KC, an accurate final burnt map can be generated to assist in rescue and recovery efforts after the wildfire. The proposed YOLOv11–MUNet framework introduces an efficient and accurate post-event-only approach for wildfire damage detection. By overcoming the dependency on pre-event imagery and reducing model complexity, this method enhances the applicability of DL in rapid post-disaster assessment and management. Full article
14 pages, 1819 KB  
Article
A Hybrid Model with Quantum Feature Map Based on CNN and Vision Transformer for Clinical Support in Diagnosis of Acute Appendicitis
by Zeki Ogut, Mucahit Karaduman, Pinar Gundogan Bozdag, Mehmet Karakose and Muhammed Yildirim
Biomedicines 2026, 14(1), 183; https://doi.org/10.3390/biomedicines14010183 - 14 Jan 2026
Abstract
Background/Objectives: Rapid and accurate diagnosis of acute appendicitis is crucial for patient health and management, and the diagnostic process can be prolonged due to varying clinical symptoms and limitations of diagnostic tools. This study aims to shorten the timeframe for these vital processes and increase accuracy by developing a quantum-inspired hybrid model to identify appendicitis types. Methods: The developed model initially selects the two most performing architectures using four convolutional neural networks (CNNs) and two Transformers (ViTs). Feature extraction is then performed from these architectures. Phase-based trigonometric embedding, low-order interactions, and norm-preserving principles are used to generate a Quantum Feature Map (QFM) from these extracted features. The generated feature map is then passed to the Multiple Head Attention (MHA) layer after undergoing Hadamard fusion. At the end of this stage, classification is performed using a multilayer perceptron (MLP) with a ReLU activation function, which allows for the identification of acute appendicitis types. The developed quantum-inspired hybrid model is also compared with six different CNN and ViT architectures recognized in the literature. Results: The proposed quantum-inspired hybrid model outperformed the other models used in the study for acute appendicitis detection. The accuracy achieved in the proposed model was 97.96%. Conclusions: While the performance metrics obtained from the quantum-inspired model will form the basis of deep learning architectures for quantum technologies in the future, it is thought that if 6G technology is used in medical remote interventions, it will form the basis for real-time medical interventions by taking advantage of quantum speed. Full article
(This article belongs to the Section Biomedical Engineering and Materials)
18 pages, 14907 KB  
Article
Renal-AI: A Deep Learning Platform for Multi-Scale Detection of Renal Ultrastructural Features in Electron Microscopy Images
by Leena Nezamuldeen, Walaa Mal, Reem A. Al Zahrani, Sahar Jambi and M. Saleet Jafri
Diagnostics 2026, 16(2), 264; https://doi.org/10.3390/diagnostics16020264 - 14 Jan 2026
Abstract
Background/Objectives: Transmission electron microscopy (TEM) is an essential tool for diagnosing renal diseases. It produces high-resolution visualization of glomerular and mesangial ultrastructural features. However, manual interpretation of TEM images is labor-intensive and prone to interobserver variability. In this study, we introduced and evaluated deep learning architectures based on YOLOv8-OBB for automated detection of six ultrastructural features in kidney biopsy TEM images: glomerular basement membrane, mesangial folds, mesangial deposits, normal podocytes, podocytopathy, and subepithelial deposits. Methods: Building on our previous work, we propose a modified YOLOv8-OBB architecture that incorporates three major refinements: a grayscale input channel, a high-resolution P2 feature pyramid with refinement blocks (FPRbl), and a four-branch oriented detection head designed to detect small-to-large structures at multiple image scales (feature-map strides of 4, 8, 16, and 32 pixels). We compared two pretrained variants: our previous YOLOv8-OBB model developed with a grayscale input channel (GSch) and four additional feature-extraction layers (4FExL) (Pretrained + GSch + 4FExL) and the newly developed (Pretrained + FPRbl). Results: Quantitative assessment showed that our previously developed model (Pretrained + GSch + 4FExL) achieved an F1-score of 0.93 and mAP@0.5 of 0.953, while the (Pretrained + FPRbl) model developed in this study achieved an F1-score of 0.92 and mAP@0.5 of 0.941, demonstrating strong and clinically meaningful performance for both approaches. Qualitative assessment based on expert visual inspection of predicted bounding boxes revealed complementary strengths: (Pretrained + GSch + 4FExL) exhibited higher recall for subtle or infrequent findings, whereas (Pretrained + FPRbl) produced cleaner bounding boxes with higher-confidence predictions. Conclusions: This study presents how targeted architectural refinements in YOLOv8-OBB can enhance the detection of small, low-contrast, and variably oriented ultrastructural features in renal TEM images. Evaluating these refinements and translating them into a web-based platform (Renal-AI) showed the clinical applicability of deep learning-based tools for improving diagnostic efficiency and reducing interpretive variability in kidney pathology. Full article
38 pages, 3177 KB  
Review
Unveiling Scale-Dependent Statistical Physics: Connecting Finite-Size and Non-Equilibrium Systems for New Insights
by Agustín Pérez-Madrid and Iván Santamaría-Holek
Entropy 2026, 28(1), 99; https://doi.org/10.3390/e28010099 - 14 Jan 2026
Abstract
A scale-dependent effective temperature emerges as a unifying principle in the statistical physics of apparently different phenomena, namely quantum confinement in finite-size systems and non-equilibrium effects in thermodynamic systems. This concept effectively maps these inherently complex systems onto equilibrium states, thereby enabling the direct application of standard statistical physics methods. By offering a framework to analyze these systems as effectively at equilibrium, our approach provides powerful new tools that significantly expand the scope of the field. Just as the constant speed of light in Einstein’s theory of special relativity necessitates a relative understanding of space and time, our fixed ratio of energy to temperature suggests a fundamental rescaling of both quantities that allows us to recognize shared patterns across diverse materials and situations. Full article
(This article belongs to the Section Statistical Physics)
11 pages, 692 KB  
Article
Unmasking Early Cardiac Fibrosis in Sarcoidosis: The Role of Plasma Aldosterone and Cardiac MRI
by Elias Giallafos, Evangelos Oikonomou, Niki Lama, Spiros Katsanos, Lykourgos Kolilekas, Evaggelos Markozanes, Varvara Pantoleon, Kostas Zisimos, Ourania Katsarou, Panagiotis Theofilis, Gesthimani Seitaridi, Ioannis Ilias, Grigoris Stratakos, Nikos Kelekis, Effrosyni D. Manali, Spiros Papiris, Georgios Marinos, Konstantinos Tsioufis and Gerasimos Siasos
J. Clin. Med. 2026, 15(2), 650; https://doi.org/10.3390/jcm15020650 - 14 Jan 2026
Abstract
Background/Objectives: Cardiac sarcoidosis (CS) is a challenging diagnosis due to its subclinical progression and the limitations of existing screening tools. Cardiac magnetic resonance (CMR) and PET/CT imaging have improved diagnosis and detection. Aldosterone, a hormone with known profibrotic effects, may offer additional diagnostic value. We therefore aimed to determine whether plasma aldosterone level is associated with myocardial fibrosis, independent of active inflammation, in CS. Methods: This observational study included 541 patients with biopsy-proven sarcoidosis and preserved left ventricular ejection fraction (LVEF ≥ 50%). All underwent CMR with extracellular volume (ECV) mapping and 18F-FDG PET/CT to assess myocardial fibrosis and inflammation, respectively. Plasma aldosterone levels were also measured. Results: Plasma aldosterone levels were significantly higher in patients with cardiac sarcoidosis (172 [IQR 106–235] pg/mL) compared to those without cardiac involvement (143 [100–205] pg/mL, p = 0.02). Aldosterone was independently associated with the presence of late gadolinium enhancement (LGE) on CMR (OR 1.002 per 1 pg/mL increase; 95% CI 1.001–1.004, p = 0.04) and with higher ECV values (β = 0.008 per 1 pg/mL, p = 0.001). Regression analysis showed that aldosterone is associated with ECV (b = 0.009, CI: 0.002–0.016, p = 0.009), and there was no interaction according to LGE status, indicating a relationship with diffuse myocardial fibrosis even in the absence of visible scarring. No association was observed with T1-, T2-, or PET/CT-defined inflammation. Conclusions: Plasma aldosterone is a robust marker of myocardial fibrosis in sarcoidosis, particularly in early or subclinical stages. Its correlation with ECV—but not with inflammatory imaging markers—suggests its link with diffuse myocardial fibrotic remodeling before, and independently of, overt scarring or inflammation. Full article
(This article belongs to the Section Cardiovascular Medicine)
21 pages, 20581 KB  
Article
Stereo-Based Single-Shot Hand-to-Eye Calibration for Robot Arms
by Pushkar Kadam, Gu Fang, Farshid Amirabdollahian, Ju Jia Zou and Patrick Holthaus
Computers 2026, 15(1), 53; https://doi.org/10.3390/computers15010053 - 13 Jan 2026
Abstract
Robot hand-to-eye calibration is a necessary process for a robot arm to perceive and interact with its environment. Past approaches required collecting multiple images using a calibration board placed at different locations relative to the robot. When the robot or camera is displaced from its calibrated position, hand–eye calibration must be redone using the same tedious process. In this research, we developed a novel method that uses a semi-automatic process to perform hand-to-eye calibration with a stereo camera, generating a transformation matrix from the world to the camera coordinate frame from a single image. We use a robot-pointer tool attached to the robot’s end-effector to manually establish a relationship between the world and the robot coordinate frame. Then, we establish the relationship between the camera and the robot using a transformation matrix that maps points observed in the stereo image frame from two-dimensional space to the robot’s three-dimensional coordinate frame. Our analysis of the stereo calibration showed a reprojection error of 0.26 pixels. An evaluation metric was developed to test the camera-to-robot transformation matrix, and the experimental results showed median root mean square errors of less than 1 mm in the x and y directions and less than 2 mm in the z directions in the robot coordinate frame. The results show that, with this work, we contribute a hand-to-eye calibration method that uses three non-collinear points in a single stereo image to map camera-to-robot coordinate-frame transformations. Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2025)
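To illustrate the kind of mapping such a calibration produces, the sketch below estimates a rigid camera-to-robot transform from three non-collinear point correspondences with the standard SVD-based (Kabsch) method and checks it on toy data. This is a generic textbook procedure under assumed values, not the authors' exact algorithm or evaluation metric.

```python
import numpy as np

def rigid_transform(p_cam, p_rob):
    """Estimate rotation R and translation t such that p_rob ~= R @ p_cam + t,
    using the SVD-based Kabsch method on >= 3 non-collinear correspondences.
    Generic sketch; coordinates are assumed to be in metres."""
    c_cam, c_rob = p_cam.mean(axis=0), p_rob.mean(axis=0)
    H = (p_cam - c_cam).T @ (p_rob - c_rob)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_rob - R @ c_cam
    return R, t

# Toy data: three non-collinear points triangulated by the stereo camera and the
# same points touched with a pointer tool in the robot frame (made-up values).
p_cam = np.array([[0.10, 0.00, 0.50],
                  [0.30, 0.05, 0.55],
                  [0.20, 0.20, 0.60]])
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([0.40, -0.10, 0.02])
p_rob = p_cam @ R_true.T + t_true

R, t = rigid_transform(p_cam, p_rob)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```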
28 pages, 833 KB  
Review
An Integrative Review of the Cardiovascular Disease Spectrum: Integrating Multi-Omics and Artificial Intelligence for Precision Cardiology
by Gabriela-Florentina Țapoș, Ioan-Alexandru Cîmpeanu, Iasmina-Alexandra Predescu, Sergio Liga, Andra Tiberia Păcurar, Daliborca Vlad, Casiana Boru, Silvia Luca, Simina Crișan, Cristina Văcărescu and Constantin Tudor Luca
Diseases 2026, 14(1), 31; https://doi.org/10.3390/diseases14010031 - 13 Jan 2026
Abstract
Background/Objectives: Cardiovascular diseases (CVDs) remain the leading cause of morbidity and mortality worldwide and increasingly are recognized as a continuum of interconnected conditions rather than isolated entities. Methods: A structured narrative literature search was performed in PubMed, Scopus, and Google Scholar for publications from 2015 to 2025 using combinations of different keywords: “cardiovascular disease spectrum”, “multi-omics”, “precision cardiology”, “machine learning”, and “artificial intelligence in cardiology”. Results: Evidence was synthesized across seven major clusters of cardiovascular conditions, and across these domains, common biological pathways were mapped onto heterogeneous clinical phenotypes, and we summarize how multi-omics integration, AI-enabled imaging and digital tools contribute to improved risk prediction and more informed clinical decision-making within this spectrum. Conclusions: Interpreting cardiovascular conditions as components of a shared disease spectrum clarifies cross-disease interactions and supports a shift from organ- and syndrome-based classifications toward mechanism- and data-driven precision cardiology. The convergence of multi-omics, and AI offers substantial opportunities for earlier detection, individualized prevention, and tailored therapy, but requires careful attention to data quality, equity, interpretability, and practical implementation in routine care. Full article
(This article belongs to the Section Cardiology)
18 pages, 1165 KB  
Review
Bridging Silence: A Scoping Review of Technological Advancements in Augmentative and Alternative Communication for Amyotrophic Lateral Sclerosis
by Filipe Gonçalves, Carla S. Fernandes, Margarida I. Teixeira, Cláudia Melo and Cátia Dias
Sclerosis 2026, 4(1), 2; https://doi.org/10.3390/sclerosis4010002 - 13 Jan 2026
Abstract
Background: Amyotrophic lateral sclerosis (ALS) progressively impairs motor function, compromising speech and limiting communication. Augmentative and alternative communication (AAC) is essential to maintain autonomy, social participation, and quality of life for people with ALS (PALS). This review maps technological developments in AAC, from low-tech tools to advanced brain–computer interface (BCI) systems. Methods: We conducted a scoping review following the PRISMA extension for scoping reviews. PubMed, Web of Science, SciELO, MEDLINE, and CINAHL were screened for studies published up to 31 August 2025. Peer-reviewed RCT, cohort, cross-sectional, and conference papers were included. Single-case studies of invasive BCI technology for ALS were also considered. Methodological quality was evaluated using JBI Critical Appraisal Tools. Results: Thirty-seven studies met inclusion criteria. High-tech AAC—particularly eye-tracking systems and non-invasive BCIs—were most frequently studied. Eye tracking showed high usability but was limited by fatigue, calibration demands, and ocular impairments. EMG- and EOG-based systems demonstrated promising accuracy and resilience to environmental factors, though evidence remains limited. Invasive BCIs showed the highest performance in late-stage ALS and locked-in syndrome, but with small samples and uncertain long-term feasibility. No studies focused exclusively on low-tech AAC interventions. Conclusions: AAC technologies, especially BCIs, EMG and eye-tracking systems, show promise in supporting autonomy in PALS. Implementation gaps persist, including limited attention to caregiver burden, healthcare provider training, and the real-world use of low-tech and hybrid AAC. Further research is needed to ensure that communication solutions are timely, accessible, and effective, and that they are tailored to functional status, daily needs, social participation, and interaction with the environment. Full article
16 pages, 320 KB  
Systematic Review
Mapping the Outcomes of Low-Vision Rehabilitation: A Scoping Review of Interventions, Challenges, and Research Gaps
by Kingsley Ekemiri, Onohomo Adebo, Chioma Ekemiri, Samuel Osuji, Maureen Amobi, Linda Ekwe, Kathy-Ann Lootawan, Carlene Oneka Williams and Esther Daniel
Vision 2026, 10(1), 3; https://doi.org/10.3390/vision10010003 - 12 Jan 2026
Abstract
Introduction: Low vision affects more than visual acuity; it substantially disrupts daily functioning and may contribute to long-term cognitive, emotional, and social consequences. When medical or surgical treatment options are no longer effective, structured low-vision rehabilitation becomes essential, providing strategies and tools that support functional adaptation and promote independence. This review aims to map the current outcomes of rehabilitation services, identify gaps in existing research, and highlight opportunities for further study. Methods: An article search was conducted via PubMed, Scopus, PsycInfo, and Google Scholar. Then, title, abstract, and full-text screenings for inclusion were performed by all the authors independently, and disagreements were resolved through discussion. The relevant outcomes from the eligible publications were extracted by four authors and then cross-checked by the other authors. The results are presented following the Preferred Reporting Items for Systematic Reviews and Meta-analysis extension for Scoping Reviews checklist. Results: A total of 13 studies met the inclusion criteria. Most were randomized controlled trials (n = 10, 77%), with the majority conducted in the United States and the United Kingdom. Study populations consisted of adults aged 18 years and older. Across the included studies, low-vision rehabilitation interventions, particularly visual training, magnification-based programs, and multidisciplinary approaches, were associated with significant improvements in visual function, activities of daily living, and vision-related quality of life. Conclusions: Low-vision rehabilitation interventions demonstrate clear benefits for visual acuity, contrast sensitivity, reading speed, and functional independence. However, substantial gaps remain, including limited evidence on long-term outcomes, inconsistent assessment of psychosocial influences, and underrepresentation of diverse populations. Standardized outcome measures and long-term, inclusive research designs are needed to better understand the sustained and equitable impact of low-vision rehabilitation. Full article
20 pages, 1985 KB  
Systematic Review
Evaluating the Effectiveness of Environmental Impact Assessment in Flood-Prone Areas: A Systematic Review of Methodologies, Hydrological Integration, and Policy Evolution
by Phumzile Nosipho Nxumalo, Phindile T. Z. Sabela-Rikhotso, Daniel Kibirige, Philile Mbatha and Nicholas Byaruhanga
Sustainability 2026, 18(2), 768; https://doi.org/10.3390/su18020768 - 12 Jan 2026
Abstract
Environmental Impact Assessments (EIAs) are crucial for mitigating flood risks in vulnerable ecosystems, yet their effective application remains inconsistent. This study synthesises global literature to systematically map EIA methodologies, evaluate the extent of hydrological integration, and analyse the evolution of practices against policy frameworks for flood-prone areas. A scoping review of 144 peer-reviewed articles, conference papers, and one book chapter (2005–2025) was conducted using PRISMA protocols, complemented by bibliometric analysis. Quantitative findings reveal a significant gap: 72% of studies lacked specialised hydrological impact assessments (HIAs), with only 28% incorporating them. Post-2016, advanced tools like GIS, remote sensing, and hydrological modelling were used in less than 32% of studies, revealing reliance on outdated checklist methods. In South Africa, despite wetlands covering 7.7% of its territory, merely 12% of studies applied flood modelling. Furthermore, 40% of EIAs conducted after 2016 excluded climate adaptation strategies, undermining resilience. The literature is geographically skewed, with developed nations dominating publications at a 3:1 ratio over African contributions. The study’s novelty lies in its systematic global mapping of EIA practices for flood-prone areas and its proposal for mandatory HIAs, predictive modelling, and strengthened policy enforcement. Practically, these reforms can transform EIAs from reactive compliance tools into proactive instruments for disaster risk reduction and climate resilience, directly supporting Sustainable Development Goals 11 (Sustainable Cities), 13 (Climate Action), and 15 (Life on Land). This is essential for guiding future policy and improving EIA efficacy in the face of rapid urbanisation and climate change. Full article
20 pages, 15923 KB  
Article
Sub-Canopy Topography Inversion Using Multi-Baseline Bistatic InSAR Without External Vegetation-Related Data
by Huiqiang Wang, Zhimin Feng, Ruiping Li and Yanan Yu
Remote Sens. 2026, 18(2), 231; https://doi.org/10.3390/rs18020231 - 11 Jan 2026
Abstract
Previous studies on single-polarized InSAR-based sub-canopy topography inversion have mainly relied on simplified or empirical models that only consider the volume scattering process. In a boreal forest area, the canopy layer is often discontinuous. In such a case, the radar backscattering echoes are mainly dominated by ground surface and volume scattering processes. However, interferometric scattering models like Random Volume over Ground (RVoG) have been little utilized in the case of single-polarized InSAR. In this study, we propose a novel method for retrieving sub-canopy topography by combining the RVoG model with multi-baseline InSAR data. Prior to the RVoG model inversion, a SAR-based dimidiate pixel model and a coherence-based penetration depth model are introduced to quantify the initial values of the unknown parameters, thereby minimizing the reliance on external vegetation datasets. Building on this, a nonlinear least-squares algorithm is employed. Then, we estimate the scattering phase center height and subsequently derive the sub-canopy topography. Two frames of multi-baseline TanDEM-X co-registered single-look slant-range complex (CoSSC) data (resampled to 10 m × 10 m) over the Krycklan catchment in northern Sweden are used for the inversion. Validation from airborne light detection and ranging (LiDAR) data shows that the root-mean-square error (RMSE) for the two test sites is 3.82 m and 3.47 m, respectively, demonstrating a significant improvement over the InSAR phase-measured digital elevation model (DEM). Furthermore, diverse interferometric baseline geometries and different initial values are identified as key factors influencing retrieval performance. In summary, our work effectively addresses the limitations of the traditional RVoG model and provides an advanced and practical tool for sub-canopy topography mapping in forested areas. Full article
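A minimal sketch of the kind of nonlinear least-squares inversion described above is given below: a simplified RVoG coherence model (uniform vertical profile, extinction neglected) is fitted to multi-baseline complex coherences to recover a ground phase, volume height, and ground-to-volume ratio. The model simplification, parameter names, starting values, and toy observations are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def rvog_coherence(kz, phi0, hv, mu):
    """Simplified RVoG complex coherence: gamma = exp(j*phi0)*(gamma_v + mu)/(1 + mu),
    with gamma_v for a uniform vertical profile and negligible extinction."""
    gamma_v = (np.exp(1j * kz * hv) - 1.0) / (1j * kz * hv)
    return np.exp(1j * phi0) * (gamma_v + mu) / (1.0 + mu)

def residuals(params, kz, gamma_obs):
    phi0, hv, mu = params
    diff = rvog_coherence(kz, phi0, hv, mu) - gamma_obs
    return np.concatenate([diff.real, diff.imag])   # least_squares needs real residuals

# Toy multi-baseline observation (one vertical wavenumber per baseline), then inversion.
kz = np.array([0.05, 0.10, 0.15])                        # rad/m
gamma_obs = rvog_coherence(kz, phi0=0.3, hv=18.0, mu=0.4)

fit = least_squares(residuals, x0=[0.0, 10.0, 0.2], args=(kz, gamma_obs),
                    bounds=([-np.pi, 1.0, 0.0], [np.pi, 40.0, 5.0]))
phi0_hat, hv_hat, mu_hat = fit.x
# The estimated ground phase phi0_hat is what would feed a sub-canopy DEM.
print(phi0_hat, hv_hat, mu_hat)
```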