Search Results (1,256)

Search Parameters:
Keywords = open-access application

26 pages, 3900 KB  
Review
A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges
by Panagiotis K. Gkonis, Anastasios Giannopoulos, Nikolaos Nomikos, Lambros Sarakis, Vasileios Nikolakakis, Gerasimos Patsourakis and Panagiotis Trakadas
Sensors 2026, 26(3), 799; https://doi.org/10.3390/s26030799 (registering DOI) - 25 Jan 2026
Abstract
The goal of the study presented in this work is to analyze recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. In this context, the rapid penetration of IoT technology in modern-era networks, along with associated applications, poses new challenges for efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth generation (6G) networks are already taking place, and these networks are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches to satisfy the aforementioned challenges and requirements are presented and analyzed. A proposed architectural approach is also presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios enabled by meta-OSs in the computing continuum are presented as well. Finally, open issues and related challenges are discussed. Full article
(This article belongs to the Section Internet of Things)

23 pages, 718 KB  
Review
Artificial Intelligence in the Evaluation and Intervention of Developmental Coordination Disorder: A Scoping Review of Methods, Clinical Purposes, and Future Directions
by Pantelis Pergantis, Konstantinos Georgiou, Nikolaos Bardis, Charalabos Skianis and Athanasios Drigas
Children 2026, 13(2), 161; https://doi.org/10.3390/children13020161 - 23 Jan 2026
Abstract
Background: Developmental Coordination Disorder (DCD) is a prevalent and persistent neurodevelopmental condition characterized by motor learning difficulties that significantly affect daily functioning and participation. Despite growing interest in artificial intelligence (AI) applications within healthcare, the extent and nature of AI use in the evaluation and intervention of DCD remain unclear. Objective: This scoping review aimed to systematically map the existing literature on the use of AI and AI-assisted approaches in the evaluation, screening, monitoring, and intervention of DCD, and to identify current trends, methodological characteristics, and gaps in the evidence base. Methods: A scoping review was conducted in accordance with the PRISMA extension for Scoping Reviews (PRISMA-ScR) guidelines and was registered on the Open Science Framework. Systematic searches were performed in Scopus, PubMed, Web of Science, and IEEE Xplore, supplemented by snowballing. Peer-reviewed studies applying AI methods to DCD-relevant populations were included. Data were extracted and charted to summarize study designs, populations, AI methods, data modalities, clinical purposes, outcomes, and reported limitations. Results: Seven studies published between 2021 and 2025 met the inclusion criteria following a literature search covering the period from January 2010 to 2025. One study listed as 2026 was included based on its early-access online publication in 2025. Most studies focused on AI applications for assessment, screening, and classification, using supervised machine learning or deep learning models applied to movement-based data, wearable sensors, video recordings, neurophysiological signals, or electronic health records. Only one randomized controlled trial evaluated an AI-assisted intervention.
The evidence base was dominated by early-phase development and validation studies, with limited external validation, heterogeneous diagnostic definitions, and scarce intervention-focused research. Conclusions: Current AI research in DCD is primarily centered on evaluation and early identification, with comparatively limited evidence supporting AI-assisted intervention or rehabilitation. While existing findings suggest that AI has the potential to enhance objectivity and sensitivity in DCD assessment, significant gaps remain in clinical translation, intervention development, and implementation. Future research should prioritize theory-informed, clinician-centered AI applications, including adaptive intervention systems and decision-support tools, to better support occupational therapy and physiotherapy practice in DCD care. Full article
32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces), which works both at the level of individual HTTP (Hypertext Transfer Protocol) requests and in access-log file reading mode, explicitly linking technical anomalies with business risks. The system processes each event/access log through parallel numerical and textual branches: a set of anomaly detectors trained on engineered traffic features and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their results are integrated using a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and creates a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated using Docker Compose and instrumented using OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), Data Exposure, and Business Logic Abuse, while the processing time for each request remains within real-time limits even in sequential testing mode. Thus, the pipeline bridges the gap between ML/NLP research for security and practical API protection channels that can evolve over time through feedback and retraining. Full article
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
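As a toy illustration of the risk-aware merging policy the abstract describes, the sketch below combines an anomaly score with endpoint context into a discrete risk level. This is not the authors' implementation: the field names, point weights, and thresholds are all invented for illustration.

```python
# Hypothetical sketch of a risk-aware merging policy: detector output plus
# request context is mapped onto a discrete risk scale. All weights and
# thresholds below are illustrative assumptions, not the paper's values.

RISK_LEVELS = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def assess_risk(anomaly_score, endpoint_sensitivity, internet_exposed, authenticated):
    """Map an anomaly score plus endpoint context to a discrete risk level.

    anomaly_score        -- combined ML/NLP score in [0, 1] (assumed range)
    endpoint_sensitivity -- 0 (public data) .. 2 (sensitive data)
    internet_exposed     -- True if the endpoint is publicly reachable
    authenticated        -- True if the request carried valid credentials
    """
    points = 0
    if anomaly_score >= 0.5:
        points += 1
    if anomaly_score >= 0.9:
        points += 1                     # strongly anomalous traffic
    points += endpoint_sensitivity      # sensitive endpoints raise the stakes
    if internet_exposed:
        points += 1
    if not authenticated:
        points += 1
    # Clamp the accumulated points onto the discrete scale.
    return RISK_LEVELS[min(points // 2, len(RISK_LEVELS) - 1)]

# Example: a highly anomalous, unauthenticated request to a sensitive,
# internet-facing endpoint.
print(assess_risk(0.95, endpoint_sensitivity=2, internet_exposed=True,
                  authenticated=False))  # prints "CRITICAL"
```

In a real deployment the level would also carry the human-readable explanation and recommended SOC action the abstract mentions; here only the scoring skeleton is shown.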

12 pages, 5353 KB  
Review
State-of-the-Art Overview of Smooth-Edged Material Distribution for Optimizing Topology (SEMDOT) Algorithm
by Minyan Liu, Wanghua Hu, Xuhui Gong, Hao Zhou and Baolin Zhao
Computation 2026, 14(1), 27; https://doi.org/10.3390/computation14010027 - 21 Jan 2026
Abstract
Topology optimization is a powerful and efficient design tool, but the structures obtained by element-based topology optimization methods are often limited by fuzzy or jagged boundaries. The smooth-edged material distribution for optimizing topology (SEMDOT) algorithm can effectively deal with this problem and promote the practical application of topology-optimized structures. This review outlines the theoretical evolution of SEMDOT, including both penalty-based and non-penalty-based formulations, while also providing access to open-access codes. SEMDOT's applications cover diverse areas, including self-supporting structures, energy-efficient manufacturing, bone tissue scaffolds, heat transfer systems, and building parts, demonstrating its versatility. While SEMDOT addresses boundary issues in topology-optimized structures, further theoretical refinement is needed to develop it into a comprehensive platform. This work consolidates the advances in SEMDOT, highlights its interdisciplinary impact, and identifies future research and implementation directions. Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)

31 pages, 5687 KB  
Article
A Hybrid Ensemble Learning Framework for Accurate Photovoltaic Power Prediction
by Wajid Ali, Farhan Akhtar, Asad Ullah and Woo Young Kim
Energies 2026, 19(2), 453; https://doi.org/10.3390/en19020453 - 16 Jan 2026
Abstract
Accurate short-term forecasting of solar photovoltaic (PV) power output is essential for efficient grid integration and energy management, especially given the widespread global adoption of PV systems. To address this need, the present study introduces a scalable, interpretable ensemble learning model for PV power prediction based on the large PVOD v1.0 dataset, which encompasses more than 270,000 points representing ten PV stations. The proposed methodology involves data preprocessing, feature engineering, and a hybrid ensemble model consisting of Random Forest, XGBoost, and CatBoost. Temporal features, including hour, day, and month, were created to reflect diurnal and seasonal characteristics, whereas feature importance analysis identified global irradiance, temperature, and temporal indices as key indicators. The hybrid ensemble model achieves high predictive accuracy, with an R2 = 0.993, a Mean Absolute Error (MAE) = 0.227 kW, and a Root Mean Squared Error (RMSE) = 0.628 kW when applied to the PVOD v1.0 dataset for short-term PV power prediction. These findings were achieved on standardized, multi-station, open-access data and are therefore not strictly comparable to previous studies that may have used other datasets, forecasting horizons, or feature sets. Rather than asserting numerical dominance over other approaches, this paper focuses on the practical utility of integrating well-known tree-based ensemble techniques with time-related feature engineering to derive interpretable and computationally efficient PV power prediction models that can be used in smart grid applications. This paper shows that a combination of conventional ensemble methods and extensive temporal feature engineering is effective in producing consistent accuracy in PV forecasting. The framework can be reproduced and run efficiently, which makes it applicable to smart grid integration. Full article
(This article belongs to the Special Issue Advanced Control Strategies for Photovoltaic Energy Systems)
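The temporal feature engineering step the abstract describes can be sketched in a few lines of pandas. This is not the authors' code: the column names and the exact set of derived features are assumptions for illustration.

```python
# Illustrative sketch: deriving hour / day / month columns from timestamps
# so tree ensembles can pick up diurnal and seasonal PV patterns. Column
# names ("timestamp", "power_kw") are assumed, not the PVOD v1.0 schema.
import pandas as pd

def add_temporal_features(df, ts_col="timestamp"):
    """Append hour, day, and month columns derived from a datetime column."""
    ts = pd.to_datetime(df[ts_col])
    df = df.copy()
    df["hour"] = ts.dt.hour      # diurnal cycle
    df["day"] = ts.dt.day
    df["month"] = ts.dt.month    # seasonal cycle
    return df

frame = pd.DataFrame({
    "timestamp": ["2021-06-21 12:15:00", "2021-12-21 08:30:00"],
    "power_kw": [3.2, 0.4],
})
frame = add_temporal_features(frame)
print(frame.columns.tolist())
# → ['timestamp', 'power_kw', 'hour', 'day', 'month']
```

The derived columns would then be fed, alongside irradiance and temperature, to the Random Forest / XGBoost / CatBoost ensemble the study describes.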

36 pages, 10413 KB  
Article
An Open-Source CAD Framework Based on Point-Cloud Modeling and Script-Based Rendering: Development and Application
by Angkush Kumar Ghosh
Machines 2026, 14(1), 107; https://doi.org/10.3390/machines14010107 - 16 Jan 2026
Abstract
Script-based computer-aided design tools offer accessible and customizable environments, but their broader adoption is limited by the cognitive and computational difficulty of describing curved, irregular, or free-form geometries through code. This study addresses this challenge by contributing a unified, open-source framework that enables concept-to-model transformation through 2D point-based representations. Unlike previous ad hoc methods, this framework systematically integrates an interactive point-cloud modeling layer with modular systems for curve construction, point generation, transformation, sequencing, and formatting, together with script-based rendering functions. This framework allows users to generate geometrically valid models without navigating the heavy geometric calculations, strict syntax requirements, and debugging demands typical of script-based workflows. Structured case studies demonstrate the underlying workflow across mechanical, artistic, and handcrafted forms, contributing empirical evidence of its applicability to diverse tasks ranging from mechanical component modeling to cultural heritage digitization and reverse engineering. Comparative analysis demonstrates that the framework reduces user-facing code volume by over 97% compared to traditional scripting and provides a lightweight, noise-free alternative to traditional hardware-based reverse engineering by allowing users to define clean geometry from the outset. The findings confirm that the framework generates fabrication-ready outputs—including volumetric models and vector representations—suitable for various manufacturing contexts. All systems and rendering functions are made publicly available, enabling the entire pipeline to be performed using free tools. By establishing a practical and reproducible basis for point-based modeling, this study contributes to the advancement of computational design practice and supports the wider adoption of script-based design workflows. Full article
(This article belongs to the Special Issue Advances in Computer-Aided Technology, 3rd Edition)
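The core idea of the abstract, defining a curved outline as a list of 2D points and emitting a short render script instead of hand-coding the geometry, can be illustrated with a small sketch. This is not the paper's framework: the arc sampler and the OpenSCAD polygon() target are my assumptions for illustration.

```python
# Hypothetical sketch of point-based modeling with script-based rendering:
# a curve is sampled into 2D points, then formatted as a one-line render
# script. The OpenSCAD polygon() output format is an assumption.
import math

def arc_points(radius, start_deg, end_deg, n):
    """Sample n points along a circular arc (a stand-in for any curve tool)."""
    pts = []
    for i in range(n):
        a = math.radians(start_deg + (end_deg - start_deg) * i / (n - 1))
        pts.append((round(radius * math.cos(a), 3),
                    round(radius * math.sin(a), 3)))
    return pts

def to_polygon_script(points):
    """Format a point list as a one-line OpenSCAD polygon() call."""
    coords = ", ".join(f"[{x}, {y}]" for x, y in points)
    return f"polygon(points=[{coords}]);"

# Quarter disc: arc from 0° to 90° plus the centre point to close the shape.
outline = arc_points(10, 0, 90, 5) + [(0, 0)]
print(to_polygon_script(outline))
```

The user-facing code stays tiny because the point list, not the script, carries the shape, which is the code-volume reduction the abstract reports.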

16 pages, 260 KB  
Commentary
COMPASS Guidelines for Conducting Welfare-Focused Research into Behaviour Modification of Animals
by Paul D. McGreevy, David J. Mellor, Rafael Freire, Kate Fenner, Katrina Merkies, Amanda Warren-Smith, Mette Uldahl, Melissa Starling, Amy Lykins, Andrew McLean, Orla Doherty, Ella Bradshaw-Wiley, Rimini Quinn, Cristina L. Wilkins, Janne Winther Christensen, Bidda Jones, Lisa Ashton, Barbara Padalino, Claire O'Brien, Caleigh Copelin, Colleen Brady and Cathrynne Henshall
Animals 2026, 16(2), 206; https://doi.org/10.3390/ani16020206 - 9 Jan 2026
Abstract
Researchers are increasingly engaged in studies to determine and correct negative welfare consequences of animal husbandry and behaviour modification procedures, not least in response to industries’ growing need to maintain their social licence through demonstrable welfare standards that address public expectations. To ensure that welfare recommendations are scientifically credible, the studies must be rigorously designed and conducted, and the data produced must be interpreted with full regard to conceptual, methodological, and experimental design limitations. This commentary provides guidance on these matters. In addition to, and complementary with, the ARRIVE guidelines that deal with animal studies in general, there is a need for additional specific advice on the design of studies directed at procedures that alter behaviour, whether through training, handling, or restraint. The COMPASS Guidelines offer clear direction for conducting welfare-focused behaviour modification research. They stand for the following: Controls and Calibration, emphasising rigorous design, baseline measures, equipment calibration, and replicability; Objectivity and Open data, ensuring transparency, validated tools, and data accessibility; Motivation and Methods, with a focus on learning theory, behavioural science, and evidence-based application of positive reinforcers and aversive stimuli; Precautions and Protocols, embedding the precautionary principle, minimising welfare harms, listing stop criteria, and using real-time monitoring; Animal-centred Assessment, with multimodal welfare evaluation, using physiological, behavioural, functional, and objective indicators; Study ethics and Standards, noting the 3Rs (replacement, reduction, and refinement), welfare endpoints, long-term effects, industry independence, and risk–benefit analysis; and Species-relevance and Scientific rigour, facilitating cross-species applicability with real-world relevance and robust methodology. 
To describe these guidelines, the current article is organised into seven major sections that outline detailed, point-by-point considerations for ethical and scientifically rigorous design. It concludes with a call for continuous improvement and collaboration. A major purpose is to assist animal ethics committees when considering the design of experiments. It is also anticipated that these Guidelines will assist reviewers and editorial teams in triaging manuscripts that report studies in this context. Full article
(This article belongs to the Section Companion Animals)
21 pages, 4706 KB  
Article
Near-Real-Time Integration of Multi-Source Seismic Data
by José Melgarejo-Hernández, Paula García-Tapia-Mateo, Juan Morales-García and Jose-Norberto Mazón
Sensors 2026, 26(2), 451; https://doi.org/10.3390/s26020451 - 9 Jan 2026
Abstract
The reliable and continuous acquisition of seismic data from multiple open sources is essential for real-time monitoring, hazard assessment, and early-warning systems. However, the heterogeneity among existing data providers such as the United States Geological Survey, the European-Mediterranean Seismological Centre, and the Spanish National Geographic Institute creates significant challenges due to differences in formats, update frequencies, and access methods. To overcome these limitations, this paper presents a modular and automated framework for the scheduled near-real-time ingestion of global seismic data using open APIs and semi-structured web data. The system, implemented using a Docker-based architecture, automatically retrieves, harmonizes, and stores seismic information from heterogeneous sources at regular intervals using a cron-based scheduler. Data are standardized into a unified schema, validated to remove duplicates, and persisted in a relational database for downstream analytics and visualization. The proposed framework adheres to the FAIR data principles by ensuring that all seismic events are uniquely identifiable, source-traceable, and stored in interoperable formats. Its lightweight and containerized design enables deployment as a microservice within emerging data spaces and open environmental data infrastructures. Experimental validation was conducted using a two-phase evaluation. This evaluation consisted of a high-frequency 24 h stress test and a subsequent seven-day continuous deployment under steady-state conditions. The system maintained stable operation with 100% availability across all sources, successfully integrating 4533 newly published seismic events during the seven-day period and identifying 595 duplicated detections across providers. These results demonstrate that the framework provides a robust foundation for the automated integration of multi-source seismic catalogs. 
By enabling automated and interoperable integration of seismic information from diverse providers, this approach supports the construction of more comprehensive and globally accessible earthquake catalogs for research and near-real-time applications, strengthening data-driven research and situational awareness across regions and institutions worldwide. Full article
(This article belongs to the Special Issue Advances in Seismic Sensing and Monitoring)
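The cross-provider duplicate detection described above can be sketched simply: two catalog entries from different providers are treated as one event when their origin times and epicentres are close enough. The field names and tolerances below are assumptions, not the paper's schema.

```python
# Hypothetical illustration of duplicate detection across seismic catalogs.
# Thresholds are invented; a production system would tune them and likely
# use great-circle distance rather than per-coordinate bounds.
from datetime import datetime, timedelta

TIME_TOL = timedelta(seconds=30)   # assumed origin-time tolerance
DIST_TOL_DEG = 0.5                 # assumed epicentre tolerance (degrees)

def is_duplicate(a, b):
    """True when two catalog entries plausibly describe the same event."""
    same_time = abs(a["time"] - b["time"]) <= TIME_TOL
    same_place = (abs(a["lat"] - b["lat"]) <= DIST_TOL_DEG
                  and abs(a["lon"] - b["lon"]) <= DIST_TOL_DEG)
    return same_time and same_place

usgs = {"src": "USGS", "time": datetime(2025, 3, 1, 10, 0, 5),
        "lat": 38.10, "lon": -1.20}
emsc = {"src": "EMSC", "time": datetime(2025, 3, 1, 10, 0, 12),
        "lat": 38.08, "lon": -1.18}
other = {"src": "IGN", "time": datetime(2025, 3, 2, 4, 0, 0),
         "lat": 40.00, "lon": -3.70}

print(is_duplicate(usgs, emsc))   # True  - same event reported twice
print(is_duplicate(usgs, other))  # False - unrelated event
```

Checks like this, run after harmonizing each provider's feed into the unified schema, are what let the framework report the 595 duplicated detections mentioned above.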

21 pages, 7832 KB  
Article
Application of Ground Penetrating Radar (GPR) in the Survey of Historical Metal Ore Mining Sites in Lower Silesia (Poland)
by Maciej Madziarz and Danuta Szyszka
Appl. Sci. 2026, 16(2), 638; https://doi.org/10.3390/app16020638 - 7 Jan 2026
Abstract
This study presents the application of ground-penetrating radar (GPR) in the investigation of historical metal ore mining sites in the Lower Silesia region of Poland. The paper outlines the principles of the GPR method and details the measurement procedures used during fieldwork. GPR has proven to be an effective, non-invasive tool for identifying inaccessible or previously unknown underground mining structures, such as shafts, tunnels, and remnants of mining infrastructure. This capability is particularly valuable in the context of extensive and complex post-mining landscapes characteristic of Lower Silesia. The research presents findings from selected sites, demonstrating how GPR surveys facilitated the detection and subsequent archaeological exploration of historical workings. In several cases, the method enabled the recovery of access to underground features, which were then subjected to detailed documentation and preservation efforts. Following necessary safety and adaptation measures, some of these sites have been successfully opened to the public as part of regional tourism initiatives. The study confirms the utility of GPR as a key instrument in post-mining archaeology and mining heritage conservation, offering a rapid and reliable means of mapping subsurface structures without disturbing the terrain. Full article
(This article belongs to the Special Issue Surface and Underground Mining Technology and Sustainability)

13 pages, 6933 KB  
Article
Genome-Wide Association Analysis Reveals Genetic Loci and Candidate Genes Related to Soybean Leaf Shape
by Yan Zhang, Yuan Li, Xiuli Rui, Yina Zhu, Jie Wang, Xue Zhao and Xunchao Zhao
Agriculture 2026, 16(2), 150; https://doi.org/10.3390/agriculture16020150 - 7 Jan 2026
Abstract
Soybean is the world's foremost oilseed crop, and leaf morphology significantly influences yield potential by affecting light interception, canopy structure, and photosynthetic efficiency. In this study, leaf length, leaf width, maximum leaf width, leaf apex opening angle, and leaf area were measured in 216 soybean accessions, and genome-wide association studies (GWAS) were conducted using genomic resequencing data to identify genetic variants associated with leaf morphological traits. A total of 824 SNP loci were found to be significantly associated with leaf shape, and 130 candidate genes were identified in the genomic regions flanking these significant loci. KEGG enrichment analysis revealed that these candidate genes were significantly enriched in arginine biosynthesis (ko00220), nitrogen metabolism (ko00910), carbon metabolism (ko01200), pyruvate metabolism (ko00620), glycolysis/gluconeogenesis (ko00010), starch and sucrose metabolism (ko00500), plant–pathogen interaction (ko04626), and amino acid biosynthesis (ko01230). By combining KEGG and GO enrichment analysis with expression level analysis, four candidate genes related to leaf shape (Glyma.10G141600, Glyma.13G062700, Glyma.16G041200 and Glyma.20G115500) were identified. Further, candidate gene association analysis showed that the Glyma.10G141600 gene forms two major haplotypes. The leaf area of haplotype 1 was significantly smaller than that of haplotype 2. Subsequently, a cleaved amplified polymorphic sequence (CAPS) molecular marker was developed. The marker Chr.10:37502955 can effectively distinguish differences in leaf size through enzymatic digestion, and has excellent genotyping ability and application potential. These results provide a theoretical basis for marker-assisted selection (MAS) of soybean leaf morphology. Full article
(This article belongs to the Section Crop Genetics, Genomics and Breeding)
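The CAPS genotyping logic mentioned above rests on a simple idea: a marker works when the SNP creates or destroys a restriction-enzyme recognition site, so digestion cuts one haplotype's PCR amplicon but not the other's. The enzyme, motif, and sequences below are invented for illustration and are not taken from the study.

```python
# Toy illustration of CAPS genotyping. EcoRI's GAATTC motif is a standard
# enzyme choice; the two short "haplotype" sequences are made up.

ECORI_SITE = "GAATTC"  # EcoRI recognition motif

def digestion_fragments(amplicon, site=ECORI_SITE):
    """Return the fragment lengths produced by cutting at each site."""
    pieces = amplicon.split(site)
    if len(pieces) == 1:
        return [len(amplicon)]            # no site: amplicon stays intact
    # Re-attach the motif bases so fragment lengths sum to the amplicon
    # length (the exact cut position within the motif is simplified here).
    return [len(p) + len(site) for p in pieces[:-1]] + [len(pieces[-1])]

hap1 = "ACCT" + "GAATTC" + "TTGGA"   # SNP allele that retains the cut site
hap2 = "ACCT" + "GAGTTC" + "TTGGA"   # SNP allele that destroys it

print(len(digestion_fragments(hap1)))  # 2 fragments -> haplotype 1
print(len(digestion_fragments(hap2)))  # 1 fragment  -> haplotype 2
```

On a gel, the two-band versus one-band pattern is what lets a marker such as Chr.10:37502955 distinguish the two haplotypes without sequencing.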

27 pages, 3862 KB  
Review
Unlocking the Potential of Digital Twin Technology for Energy-Efficient and Sustainable Buildings: Challenges, Opportunities, and Pathways to Adoption
by Muhyiddine Jradi
Sustainability 2026, 18(1), 541; https://doi.org/10.3390/su18010541 - 5 Jan 2026
Abstract
Digital Twin technology is transforming how buildings are designed, operated, and optimized, serving as a key enabler of smarter, more energy-efficient, and sustainable built environments. By creating dynamic, data-driven virtual replicas of physical assets, Digital Twins support continuous monitoring, predictive maintenance, and performance optimization across a building’s lifecycle. This paper provides a structured review of current developments and future trends in Digital Twin applications within the building sector, particularly highlighting their contribution to decarbonization, operational efficiency, and performance enhancement. The analysis identifies major challenges, including data accessibility, interoperability among heterogeneous systems, scalability limitations, and cybersecurity concerns. It emphasizes the need for standardized protocols and open data frameworks to ensure seamless integration across Building Management Systems (BMSs), Building Information Models (BIMs), and sensor networks. The paper also discusses policy and regulatory aspects, noting how harmonized standards and targeted incentives can accelerate adoption, particularly in retrofit and renovation projects. Emerging directions include Artificial Intelligence integration for autonomous optimization, alignment with circular economy principles, and coupling with smart grid infrastructures. Overall, realizing the full potential of Digital Twins requires coordinated collaboration among researchers, industry, and policymakers to enhance building performance and advance global decarbonization and urban resilience goals. Full article

40 pages, 4849 KB  
Systematic Review
A Review of Drones in Smart Agriculture: Issues, Models, Trends, and Challenges
by Javier Gamboa-Cruzado, Jhon Estrada-Gutierrez, Cesar Bustos-Romero, Cristina Alzamora Rivero, Jorge Nolasco Valenzuela, Carlos Andrés Tavera Romero, Juan Gamarra-Moreno and Flavio Amayo-Gamboa
Sustainability 2026, 18(1), 507; https://doi.org/10.3390/su18010507 - 4 Jan 2026
Abstract
This systematic literature review examines the rapid growth of research on the use of drones applied to smart agriculture, a key field for the digital and sustainable transformation of the agricultural sector. The study aimed to synthesize the current state of knowledge regarding the application of drones in smart agriculture by applying the Kitchenham protocol (SLR), complemented with Petersen’s systematic mapping (SMS). A search was conducted in high-impact academic databases (Scopus, IEEE Xplore, Taylor & Francis Online, Google Scholar, and ProQuest), covering the period 2019–2025 (July). After applying the inclusion, exclusion, and quality criteria, 73 relevant studies were analyzed. The results reveal that 90% of the publications appear in Q1 journals, with China and the United States leading scientific production. The thematic analysis identified “UAS Phenotyping” as the main driving theme in the literature, while “precision agriculture,” “machine learning,” and “remote sensing” were the most recurrent and highly interconnected keywords. An exponential increase in publications was observed between 2022 and 2024. The review confirms the consolidation of drones as a central tool in digital agriculture, with significant advances in yield estimation, pest detection, and 3D modeling, although challenges remain in standardization, model generalization, and technological equity. It is recommended to promote open access repositories and interdisciplinary studies that integrate socioeconomic and environmental dimensions to strengthen the sustainable adoption of drone technologies in agriculture. Full article
(This article belongs to the Special Issue Remote Sensing for Sustainable Environmental Ecology)

35 pages, 13079 KB  
Article
Walking, Jogging, and Cycling: What Differs? Explainable Machine Learning Reveals Differential Responses of Outdoor Activities to Built Environment
by Musong Xiao, Peng Zhong and Runjiao Liu
Sustainability 2026, 18(1), 485; https://doi.org/10.3390/su18010485 - 3 Jan 2026
Abstract
The development of street-based outdoor physical activities plays a vital role in addressing public health issues and advancing the goals of the “Healthy China” initiative, and the built environment is widely considered a key factor in promoting such activities and urban sustainability. Existing studies have paid limited attention to the nonlinear relationships between the built environment and outdoor physical activity and have mostly focused on a single activity type (such as walking or cycling), with few comparative analyses across types. To address these limitations and provide cross-sectional empirical evidence for sustainable street design and active-transport policy, this study examines streets within the Second Ring Road of Changsha and uses large-scale street-level outdoor activity trajectory data to investigate associations between built environment indicators and outdoor activity flows. A Random Forest model, interpreted with SHapley Additive exPlanations (SHAP), is used to characterize the nonlinear associations and interactions among variables, capturing patterns relevant to sustainable mobility, public health, and urban form. The results indicate the following: (1) The built environment indicators are associated with walking, jogging, and cycling in distinctly different patterns: walking shows stronger associations with population density and access to bus stops; jogging, with the accessibility of large open spaces; and cycling, with land use mix and road intersection density. (2) Nonlinear associations and threshold-like patterns are common between built environment variables and activity flows, with indicators such as bus stop density and walking continuity exhibiting pronounced effects within specific intervals. (3) Interaction effects among variables contribute importantly to model predictions, especially for jogging, where their influence can even exceed the main effects of individual factors. These results underscore the potential value of tailored street design strategies for different activity types and provide empirical evidence relevant to health-oriented urban planning. Full article
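The kind of threshold-like nonlinear association described in finding (2) can be illustrated with a small sketch. The snippet below uses scikit-learn only (the study additionally uses the separate SHAP library) and computes a manual partial-dependence curve on synthetic data; the feature names and the threshold at 4 are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1500
# hypothetical built-environment features (names illustrative only)
bus_stop_density = rng.uniform(0, 10, n)
land_use_mix = rng.uniform(0, 1, n)
# synthetic activity flow with a threshold effect near bus_stop_density = 4
flow = 5.0 * (bus_stop_density > 4) + 3.0 * land_use_mix + rng.normal(0, 0.5, n)

X = np.column_stack([bus_stop_density, land_use_mix])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, flow)

def partial_dependence_curve(model, X, feature_idx, grid):
    """Manual partial dependence: sweep one feature over a grid,
    averaging predictions over the observed values of the others."""
    curve = []
    for g in grid:
        Xg = X.copy()
        Xg[:, feature_idx] = g
        curve.append(model.predict(Xg).mean())
    return np.array(curve)

grid = np.linspace(0.5, 9.5, 19)
curve = partial_dependence_curve(model, X, 0, grid)
# the curve should jump by roughly the simulated 5 units across the threshold
jump = curve[grid > 5].mean() - curve[grid < 3].mean()
```

SHAP's TreeExplainer yields per-observation attributions on top of this kind of aggregate view, which is what lets the study compare main effects against interaction effects.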

22 pages, 1555 KB  
Article
Toothbrush-Driven Handheld Droplet Generator for Digital LAMP and Rapid CFU Assays
by Xiaochen Lai, Yong Zhu, Mingpeng Yang and Xicheng Wang
Biosensors 2026, 16(1), 30; https://doi.org/10.3390/bios16010030 - 1 Jan 2026
Abstract
Droplet microfluidics enables high-throughput, compartmentalized reactions using minimal reagent volumes, but most implementations rely on precision-fabricated chips and external pumping systems that limit portability and accessibility. Here, we present a handheld vibrational droplet generator that repurposes a consumer electric toothbrush and a modified disposable pipette tip to produce nearly monodisperse water-in-oil droplets without microfluidic channels or syringe pumps. The device is powered by the toothbrush’s built-in motor and controlled by a simple 3D-printed adapter and an adjustable counterweight that tune the vibration amplitude transmitted to the pipette tip. By varying the aperture of the pipette tip, droplets with diameters of ~100–300 µm were generated at rates of ~100 droplets s−1. Image analysis revealed narrow size distributions, with coefficients of variation below 5% under typical operating conditions. We further demonstrate proof-of-concept applications in digital loop-mediated isothermal amplification (LAMP) and microbiological colony-forming unit (CFU) assays. Using a commercial feline parvovirus (FPV) kit manufactured by Beyotime Biotechnology Co., Ltd. (Shanghai, China), three template concentrations yielded emulsified reaction droplets that remained stable at 65 °C for 45 min and produced distinct fractions of fluorescent-positive droplets, allowing estimation of template concentration via a Poisson model. In a second set of experiments, the device was used as a droplet-based spreader to dispense diluted Escherichia coli suspensions onto LB agar plates, achieving uniform colony distributions at different dilution factors. The proposed handheld vibrational generator is inexpensive, easy to assemble from off-the-shelf components, and minimizes dead volume and cross-contamination because only the pipette tip contacts the sample. Although the current prototype still exhibits device-to-device variability, and moving droplets in open containers complicate real-time imaging, these results indicate that toothbrush-based vibrational actuation can provide a practical and scalable route toward “lab-in-hand” droplet assays in resource-limited or educational settings. Full article
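The Poisson estimation mentioned in the abstract works because, when templates partition randomly across droplets, the fraction of negative droplets is exp(−λ), where λ is the mean copies per droplet. A minimal sketch of the calculation (illustrative only, not the authors' code; the volume conversion assumes perfectly spherical droplets, and the function names are ours):

```python
import math

def lamp_copies_per_droplet(n_positive, n_total):
    """Mean template copies per droplet from the positive fraction.
    Poisson model: p_positive = 1 - exp(-lam)  =>  lam = -ln(1 - p)."""
    p = n_positive / n_total
    if p >= 1.0:
        raise ValueError("all droplets positive: too concentrated to resolve")
    return -math.log(1.0 - p)

def concentration_per_uL(n_positive, n_total, droplet_diameter_um):
    """Template concentration (copies/µL) for a given droplet diameter."""
    r_cm = (droplet_diameter_um / 2.0) * 1e-4          # µm -> cm
    vol_uL = (4.0 / 3.0) * math.pi * r_cm ** 3 * 1e3   # cm^3 (= mL) -> µL
    return lamp_copies_per_droplet(n_positive, n_total) / vol_uL

# e.g. 500 of 1000 droplets positive, 200 µm droplets
lam = lamp_copies_per_droplet(500, 1000)   # ln 2 ≈ 0.693 copies per droplet
conc = concentration_per_uL(500, 1000, 200.0)
```

Note that the estimate saturates as the positive fraction approaches 1, which is why digital assays are run at dilutions that leave an appreciable fraction of droplets empty.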

26 pages, 1919 KB  
Systematic Review
Federated Learning for Histopathology Image Classification: A Systematic Review
by Meriem Touhami, Mohammad Faizal Ahmad Fauzi, Zaka Ur Rehman and Sarina Mansor
Diagnostics 2026, 16(1), 137; https://doi.org/10.3390/diagnostics16010137 - 1 Jan 2026
Abstract
Background/Objective: The integration of machine learning (ML) and deep learning (DL) has significantly enhanced medical image classification, especially in histopathology, by improving diagnostic accuracy and aiding clinical decision making. However, data privacy concerns and restrictions on sharing patient data limit the development of effective DL models. Federated learning (FL) offers a promising solution by enabling collaborative model training across institutions without exposing sensitive data. This systematic review aims to comprehensively evaluate the current state of FL applications in histopathological image classification by identifying prevailing methodologies, datasets, and performance metrics and highlighting existing challenges and future research directions. Methods: Following PRISMA guidelines, 24 studies published between 2020 and 2025 were analyzed. The literature was retrieved from ScienceDirect, IEEE Xplore, MDPI, Springer Nature Link, PubMed, and arXiv. Eligible studies focused on FL-based deep learning models for histopathology image classification with reported performance metrics. Studies unrelated to FL in histopathology or lacking accessible full texts were excluded. Results: The included studies utilized 10 datasets (8 public, 1 private, and 1 unspecified) and reported classification accuracies ranging from 69.37% to 99.72%. FedAvg was the most commonly used aggregation algorithm (14 studies), followed by FedProx, FedDropoutAvg, and custom approaches. Only two studies reported their FL frameworks (Flower and OpenFL). Frequently employed model architectures included VGG, ResNet, DenseNet, and EfficientNet. Performance was typically evaluated using accuracy, precision, recall, and F1-score. Federated learning demonstrates strong potential for privacy-preserving digital pathology applications. However, key challenges remain, including communication overhead, computational demands, and inconsistent reporting standards. Addressing these issues is essential for broader clinical adoption. Conclusions: Future work should prioritize standardized evaluation protocols, efficient aggregation methods, model personalization, robustness, and interpretability, with validation across multi-institutional clinical environments to fully realize the benefits of FL in histopathological image classification. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
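FedAvg, the aggregation algorithm reported most often in the reviewed studies, is at its core a dataset-size-weighted average of client model parameters. A minimal NumPy sketch of the server-side aggregation step (illustrative only, not any particular framework's implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average each layer's parameters across clients,
    weighting every client by its share of the total training data.

    client_weights: one list of per-layer arrays per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum((size / total) * w[layer]
            for w, size in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# two clients with a single one-parameter "layer":
# client A (100 samples) has weight 1.0, client B (300 samples) has 5.0
agg = fedavg([[np.array([1.0])], [np.array([5.0])]], [100, 300])
# weighted average: 0.25 * 1.0 + 0.75 * 5.0 = 4.0
```

Variants such as FedProx keep this aggregation step but add a proximal term to each client's local objective to tolerate heterogeneous data, which is the non-IID setting typical of multi-institution pathology cohorts.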
