Search Results (882)

Search Parameters:
Keywords = network access selection

32 pages, 2758 KB  
Article
A Hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM)–Attention Model Architecture for Precise Medical Image Analysis and Disease Diagnosis
by Md. Tanvir Hayat, Yazan M. Allawi, Wasan Alamro, Salman Md Sultan, Ahmad Abadleh, Hunseok Kang and Aymen I. Zreikat
Diagnostics 2025, 15(21), 2673; https://doi.org/10.3390/diagnostics15212673 - 23 Oct 2025
Abstract
Background: Deep learning (DL)-based medical image classification is becoming increasingly reliable, enabling physicians to make faster and more accurate decisions in diagnosis and treatment. A plethora of algorithms have been developed to classify and analyze various types of medical images. Among them, Convolutional Neural Networks (CNNs) have proven highly effective, particularly in medical image analysis and disease detection. Methods: To further enhance these capabilities, this research introduces MediVision, a hybrid DL-based model that integrates a CNN-based vision backbone for feature extraction, capturing detailed patterns and structures essential for precise classification. These features are then processed through a Long Short-Term Memory (LSTM) network, which identifies sequential dependencies to better recognize disease progression. An attention mechanism is then incorporated that selectively focuses on salient features detected by the LSTM, improving the model's ability to highlight critical abnormalities. Additionally, MediVision utilizes a skip connection that merges attention outputs with LSTM outputs, along with a Grad-CAM heatmap to visualize the most important regions of the analyzed medical image and further enhance feature representation and classification accuracy. Results: Tested on ten diverse medical image datasets (including Alzheimer's disease, breast ultrasound, blood cell, chest X-ray, chest CT scans, diabetic retinopathy, kidney diseases, bone fracture multi-region, retinal OCT, and brain tumor), MediVision consistently achieved classification accuracies above 95%, with a peak of 98%. Conclusions: The proposed MediVision model offers a robust and effective framework for medical image classification, improving interpretability, reliability, and automated disease diagnosis. To support reproducibility, the code and datasets used in this study have been made publicly available through an open-access repository.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
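
The described stack (CNN features read as a sequence by an LSTM, additive attention over the LSTM outputs, and a skip connection merging the two) can be sketched compactly. Below is a minimal, hypothetical PyTorch rendering of that pattern; the layer sizes, the feature-map-to-sequence reshaping, and the mean-pooled skip path are assumptions, not the authors' released MediVision code.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    """Toy CNN -> LSTM -> attention classifier with a skip connection."""
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        # CNN backbone: extracts spatial feature maps from the input image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # LSTM treats the flattened feature map as a sequence of patches.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # additive attention scores
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                               # x: (B, 3, H, W)
        f = self.backbone(x)                            # (B, 64, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(b, h * w, c)
        out, _ = self.lstm(seq)                         # (B, T, hidden)
        weights = torch.softmax(self.attn(out), dim=1)  # (B, T, 1)
        context = (weights * out).sum(dim=1)            # attended summary
        merged = context + out.mean(dim=1)              # skip connection
        return self.head(merged)

logits = CNNLSTMAttention(num_classes=4)(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```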

21 pages, 1453 KB  
Review
Current Trends and Future Opportunities of AI-Based Analysis in Mesenchymal Stem Cell Imaging: A Scoping Review
by Maksim Solopov, Elizaveta Chechekhina, Viktor Turchin, Andrey Popandopulo, Dmitry Filimonov, Anzhelika Burtseva and Roman Ishchenko
J. Imaging 2025, 11(10), 371; https://doi.org/10.3390/jimaging11100371 - 18 Oct 2025
Abstract
This scoping review explores the application of artificial intelligence (AI) methods for analyzing images of mesenchymal stem cells (MSCs). The aim of this study was to identify key areas where AI-based image processing techniques are utilized for MSC analysis, assess their effectiveness, and highlight existing challenges. A total of 25 studies published between 2014 and 2024 were selected from six databases (PubMed, Dimensions, Scopus, Google Scholar, eLibrary, and Cochrane) for this review. The findings demonstrate that machine learning algorithms outperform traditional methods in terms of accuracy (up to 97.5%), processing speed, and noninvasive capability. Among AI methods, convolutional neural networks (CNNs) are the most widely employed, accounting for 64% of the studies reviewed. The primary applications of AI in MSC image analysis include cell classification (20%), segmentation and counting (20%), differentiation assessment (32%), senescence analysis (12%), and other tasks (16%). The advantages of AI methods include automation of image analysis, elimination of subjective biases, and dynamic monitoring of live cells without the need for fixation and staining. However, significant challenges persist, such as the high heterogeneity of MSC populations, the absence of standardized protocols for AI implementation, and the limited availability of annotated datasets. To advance this field, future efforts should focus on developing interpretable and multimodal AI models, creating standardized validation frameworks and open-access datasets, and establishing clear regulatory pathways for clinical translation. Addressing these challenges is crucial for accelerating the adoption of AI in MSC biomanufacturing and enhancing the efficacy of cell therapies.

43 pages, 6017 KB  
Article
An Efficient Framework for Automated Cyber Threat Intelligence Sharing
by Muhammad Dikko Gambo, Ayaz H. Khan, Ahmad Almulhem and Basem Almadani
Electronics 2025, 14(20), 4045; https://doi.org/10.3390/electronics14204045 - 15 Oct 2025
Abstract
As cyberattacks grow increasingly sophisticated, the timely exchange of Cyber Threat Intelligence (CTI) has become essential to enhancing situational awareness and enabling proactive defense. Several challenges exist in CTI sharing, including the timely dissemination of threat information, the need for privacy and confidentiality, and the accessibility of data even in unstable network conditions. In addition to security and privacy, latency and throughput are critical performance metrics when selecting a suitable platform for CTI sharing. Substantial efforts have been devoted to developing effective solutions for CTI sharing. Several existing systems adopt either centralized or blockchain-based architectures. However, centralized models suffer from scalability bottlenecks and single points of failure, while the slow and limited transaction throughput of blockchains makes them unsuitable for real-time and reliable CTI sharing. To address these challenges, we propose a DDS-based framework that automates data sanitization, STIX-compliant structuring, and real-time dissemination of CTI. Our prototype evaluation demonstrates low latency and linear throughput scaling at configured send rates of up to 125 messages per second, with 100% delivery success across all scenarios, while sustaining low CPU and memory overheads. The findings of this study highlight the unique ability of DDS to overcome the timeliness, security, automation, and reliability challenges of CTI sharing.
(This article belongs to the Special Issue New Trends in Cryptography, Authentication and Information Security)
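
The framework structures sanitized CTI as STIX objects before dissemination. As a rough illustration of that step, the sketch below builds a STIX 2.1 indicator with the stix2 Python library; the IoC value and names are hypothetical, and the DDS publishing layer the paper uses is only noted in a comment.

```python
from stix2 import Bundle, Indicator

# Hypothetical indicator of compromise, assumed already extracted and
# sanitized by upstream pipeline stages like those the paper describes.
indicator = Indicator(
    name="Suspected C2 server",
    description="Outbound beaconing observed from internal hosts.",
    pattern="[ipv4-addr:value = '203.0.113.5']",
    pattern_type="stix",
    valid_from="2025-01-01T00:00:00Z",
)

# Bundle the object for dissemination. In a DDS-based architecture, a
# DataWriter (not shown here) would publish this payload on a CTI topic.
bundle = Bundle(indicator)
print(bundle.serialize(pretty=True))
```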

28 pages, 22364 KB  
Article
Assessment and Layout Optimization of Urban Parks Based on Accessibility and Green Space Justice: A Case Study of Zhengzhou City, China
by Shengnan Zhao, Xirui Wen, Yuhang Ge, Xuning Qiao, Yu Wang, Jing Zhang and Wenfei Luan
Land 2025, 14(10), 2055; https://doi.org/10.3390/land14102055 - 15 Oct 2025
Abstract
Addressing the imbalance between supply and demand for urban parks necessitates an assessment of their service accessibility and spatial equity. This study integrates multi-source geographic data, uses multiple data sources to generate a population distribution with high spatial resolution, and constructs park service areas with multiple time thresholds based on travel preference surveys. Network analysis is used to evaluate the supply–demand ratio, and spatial equity is assessed using location entropy, Lorenz curves, and the Gini coefficient to identify optimal park locations. The results reveal significant differences in the supply–demand ratio of parks. Within the 5 min time threshold, only 14.68% of the pixels in the park supply area meet the needs of residents, while the proportions for the 15 min and 30 min service areas expand to 71.74% and 86.34%, respectively. The distribution of parks exhibits apparent spatial inequity. Equity is highest for the 15 min service area (Gini coefficient = 0.25), followed by the 30 min area (Gini coefficient = 0.27) and the 5 min area (Gini coefficient = 0.37). Among the 80 streets in the study area, the per capita green space location entropy of 11 streets is zero. A targeted site selection analysis for areas with park supply deficiencies led to the proposed addition of 11 new parks. After this optimization, the proportion of regions achieving supply–demand balance or better reached 80.38%, significantly alleviating the supply–demand conflict. This study reveals the characteristics of park supply–demand imbalance and spatial equity under different travel modes and time thresholds, providing a scientific basis for the precise planning and equity enhancement of parks in high-density cities.
(This article belongs to the Special Issue Green Spaces and Urban Morphology: Building Sustainable Cities)
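
Spatial equity here is summarized by the Gini coefficient of the Lorenz curve of per-capita park supply. A minimal sketch of that computation, on made-up values rather than the paper's Zhengzhou data:

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient from sorted values (equivalent to 1 - 2 * Lorenz AUC)."""
    v = np.sort(values.astype(float))
    n = v.size
    i = np.arange(1, n + 1)                     # ranks 1..n
    return (2.0 * np.sum(i * v) / (n * v.sum())) - (n + 1.0) / n

# Illustrative per-capita park supply for 80 streets (not the paper's data).
rng = np.random.default_rng(7)
supply = rng.gamma(shape=2.0, scale=1.0, size=80)
print(f"Gini coefficient = {gini(supply):.2f}")
```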

30 pages, 2764 KB  
Article
A Cloud Integrity Verification and Validation Model Using Double Token Key Distribution Model
by V. N. V. L. S. Swathi, G. Senthil Kumar and A. Vani Vathsala
Math. Comput. Appl. 2025, 30(5), 114; https://doi.org/10.3390/mca30050114 - 13 Oct 2025
Abstract
Numerous industries have begun using cloud computing. Among other things, this presents a plethora of novel security and dependability concerns. Thoroughly verifying cloud solutions to guarantee their correctness is beneficial, just as with any other computer system that is security- and correctness-sensitive. While there has been much research on distributed system validation and verification, little attention has been paid to whether verification methods used for distributed systems can be directly applied to cloud computing. To show that cloud computing necessitates a distinct verification model and architecture, this research compares and contrasts the verification needs of distributed and cloud computing. Distinct commercial, architectural, programming, and security models necessitate distinct approaches to verification in cloud and distributed systems. The importance of cloud-based Service Level Agreements (SLAs) in testing is growing. To ensure service integrity, users must upload their selected and registered services to the cloud. Data may become stale not only because users fail to update it when they should, but also because of external issues, such as the cloud service provider's copy becoming corrupted, lost, or destroyed. For integrity checking to be effective, the data saved by the user on the cloud server must be complete and undamaged; damaged data can be recovered if incomplete data is discovered after verification. A shared resource pool with network access and elastic extension is realized by optimizing resource allocation, which provides computing resources to consumers as services. The development and implementation of cloud platforms would be greatly facilitated by a verification mechanism that checks data integrity in the cloud, is independent of storage services, and is compatible with the current basic service architecture, letting the user easily see any discrepancies in the necessary data. While cloud storage does make data outsourcing easier, the security and integrity of the outsourced data are often at risk when using an untrusted cloud server. Consequently, there is a critical need for security measures that enable users to verify data integrity while maintaining reasonable computational and transmission overheads. A cryptography-based public data integrity verification technique is proposed in this research. In addition to protecting users' data from malicious attacks such as replay, replacement, and forgery, this approach enables third-party authorities to stand in for users when checking the integrity of outsourced data. This research proposes a Cloud Integrity Verification and Validation Model using Double Token Key Distribution (CIVV-DTKD) for enhancing cloud quality-of-service levels. Compared with traditional methods, the proposed model performs better in verification and validation accuracy.
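
The abstract does not detail the double-token construction, so the sketch below is only a loose illustration of two-party integrity tokens: independent HMAC tags derived by the user and a third-party auditor over the same stored object. The keys, names, and verification flow are assumptions, not the CIVV-DTKD protocol.

```python
import hashlib
import hmac
import os

def token(key: bytes, blob: bytes) -> bytes:
    """HMAC-SHA256 integrity tag over a stored object."""
    return hmac.new(key, blob, hashlib.sha256).digest()

# Two independent keys: one held by the user, one by a third-party auditor.
user_key, auditor_key = os.urandom(32), os.urandom(32)

blob = b"outsourced file contents"
t_user, t_auditor = token(user_key, blob), token(auditor_key, blob)

# Later: verify what the (untrusted) cloud server returns against both tags.
returned = b"outsourced file contents"
intact = (hmac.compare_digest(t_user, token(user_key, returned)) and
          hmac.compare_digest(t_auditor, token(auditor_key, returned)))
print("intact" if intact else "corrupted")
```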

33 pages, 6336 KB  
Article
A Spatiotemporal Analysis of Potential Demand for Urban Parks Using Long-Term Population Projections
by Daeho Kim, Yoonji Kim, Hyun Chan Sung and Seongwoo Jeon
Land 2025, 14(10), 2045; https://doi.org/10.3390/land14102045 - 13 Oct 2025
Abstract
In the Republic of Korea, the problems of low birth rate and population aging are accelerating population decline at the regional level, leading to the phenomena of local extinction and urban shrinkage. These phenomena, coupled with the projected nationwide population decline, pose a fundamental threat to the sustainability of essential infrastructure such as urban parks. The conventional growth-oriented paradigm of urban planning has shown clear limitations in quantitatively forecasting future demand, constraining proactive management strategies for the era of population decline. To address this gap, this study develops a policy-decision-support framework that integrates long-term population projections, grid-based population data, the DEGURBA urban classification system (a global standard for delineating urban and rural areas), and network-based accessibility analysis. For the entire Republic of Korea, we (1) constructed a 1 km resolution time-series population dataset for 2022–2072; (2) applied DEGURBA to quantify transitions among urban, semi-urban, and rural types; and (3) assessed changes in potential user populations within the defined service catchments. The results indicate that while population concentration in the Seoul Capital Area persists, under the low-variant scenario, a projected average decline of 40% in potential user populations by 2072 will lead to significant functional changes, with 53.6% of municipalities nationwide transitioning to “semi-urban” or “rural” areas. This spatial shift is projected to decrease the proportion of urban parks located in “urban” areas from 83.3% to 75.0%, while the total potential user population is expected to plummet from approximately 44.4 million to 25.8 million, a 42.0% reduction. This study underscores the need for urban park policy to move beyond quantitative expansion and toward quality-oriented management based on selection and concentration. By uniquely integrating long-term demographic scenarios, the Degree of Urbanization (DEGURBA), and spatial accessibility analysis, this study provides a foundational scientific basis for forecasting future demand and supports the formulation of sustainable, data-driven strategies for urban park restructuring under conditions of demographic change.
(This article belongs to the Section Land Planning and Landscape Architecture)
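
DEGURBA grades 1 km population-grid cells by density and cluster size (urban centres require at least 1500 inhabitants/km² in clusters of 50,000 or more; urban clusters at least 300 inhabitants/km² in clusters of 5000 or more). The sketch below applies only the density thresholds to a synthetic grid and measures downgrades under a stylized population decline; it omits the contiguity clustering and cluster totals of the real method.

```python
import numpy as np

def degurba_cells(pop: np.ndarray) -> np.ndarray:
    """Simplified DEGURBA-style grading of a 1 km population grid.
    Density thresholds only; the real method also clusters contiguous
    cells and checks cluster population totals."""
    grade = np.zeros_like(pop, dtype=int)   # 0 = rural
    grade[pop >= 300] = 1                   # 1 = urban-cluster density
    grade[pop >= 1500] = 2                  # 2 = urban-centre density
    return grade

rng = np.random.default_rng(3)
grid_2022 = rng.lognormal(mean=4.0, sigma=2.0, size=(100, 100))  # inh/km^2
grid_2072 = grid_2022 * 0.6                 # stylized 40% decline
shift = (degurba_cells(grid_2022) > degurba_cells(grid_2072)).mean()
print(f"share of cells downgraded under decline: {shift:.1%}")
```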

26 pages, 2658 KB  
Review
Microwave Pretreatment for Biomass Pyrolysis: A Systematic Review on Efficiency and Environmental Aspects
by Diego Venegas-Vásconez, Lourdes M. Orejuela-Escobar, Yanet Villasana, Andrea Salgado, Luis Tipanluisa-Sarchi, Romina Romero-Carrillo and Serguei Alejandro-Martín
Processes 2025, 13(10), 3194; https://doi.org/10.3390/pr13103194 - 8 Oct 2025
Abstract
Microwave pretreatment (MWP) has emerged as a promising strategy to enhance the pyrolysis of lignocellulosic biomass due to its rapid, volumetric, and selective heating. By disrupting the recalcitrant structure of cellulose, hemicellulose, and lignin, MWP improves biomass deconstruction, increases carbohydrate accessibility, and enhances yields of bio-oil, syngas, and biochar. When combined with complementary pretreatments—such as alkali, acid, hydrothermal, ultrasonic, or ionic-liquid methods—MWP further reduces activation energies, facilitating more efficient saccharification and thermal conversion. This review systematically evaluates scientific progress in this field through bibliometric analysis, mapping research trends, evolution, and collaborative networks. Key research questions are addressed regarding the technical advantages of MWP, the physicochemical transformations induced in biomass, and associated environmental benefits. Findings indicate that microwave irradiation promotes hemicellulose depolymerization, reduces cellulose crystallinity, and weakens lignin–carbohydrate linkages, which facilitates subsequent thermal decomposition and contributes to improved pyrolysis efficiency and product quality. From an environmental perspective, MWP contributes to energy savings, mitigates greenhouse gas emissions, and supports the integration of renewable electricity in biomass conversion.
(This article belongs to the Special Issue Biomass Pretreatment for Thermochemical Conversion)

23 pages, 401 KB  
Article
BRT Systems in Brazil: Technical Analysis of Advances, Challenges, and Operational Gaps
by Luciana Costa Brizon, Joyce Azevedo Caetano, Cintia Machado de Oliveira and Rômulo Dante Orrico Filho
Urban Sci. 2025, 9(10), 414; https://doi.org/10.3390/urbansci9100414 - 8 Oct 2025
Abstract
This paper examines the advances and challenges of Bus Rapid Transit (BRT) systems in Brazil, considering their potential for promoting sustainable urban mobility. Rapid urbanization and the predominance of private motorized transport have intensified the need for efficient, accessible, and environmentally sound collective transport solutions. BRT has emerged as a cost-effective alternative to rail systems, combining high capacity, lower implementation costs, and operational flexibility. The study focuses on three Brazilian cities (Rio de Janeiro, Belo Horizonte, and Fortaleza) selected for their regional diversity and distinct BRT models. Using the Delphi method, the analysis was structured around three dimensions: road infrastructure, transport planning and networks, and system operation and performance. Results indicate significant progress in terms of exclusive corridors, integration terminals, express services, and the adoption of Intelligent Transport Systems. However, structural gaps persist, particularly regarding incomplete infrastructure, weak integration between trunk and feeder lines, limited monitoring of feeder services, and insufficient adaptation of networks to urban dynamics. The findings highlight that the effectiveness of Brazilian BRT systems depends on strengthening feeder lines, improving physical and fare integration, and expanding sustainable infrastructure.

21 pages, 4796 KB  
Article
Early Oral Cancer Detection with AI: Design and Implementation of a Deep Learning Image-Based Chatbot
by Pablo Ormeño-Arriagada, Gastón Márquez, Carla Taramasco, Gustavo Gatica, Juan Pablo Vasconez and Eduardo Navarro
Appl. Sci. 2025, 15(19), 10792; https://doi.org/10.3390/app151910792 - 7 Oct 2025
Abstract
Oral cancer remains a critical global health challenge, with delayed diagnosis driving high morbidity and mortality. Despite progress in artificial intelligence, computer vision, and medical imaging, early detection tools that are accessible, explainable, and designed for patient engagement remain limited. This study presents a novel system that combines a patient-centred chatbot with a deep learning framework to support early diagnosis, symptom triage, and health education. The system integrates convolutional neural networks, class activation mapping, and natural language processing within a conversational interface. Five deep learning models were evaluated (CNN, DenseNet121, DenseNet169, DenseNet201, and InceptionV3) using two balanced public datasets. Model performance was assessed using accuracy, sensitivity, specificity, diagnostic odds ratio (DOR), and Cohen's Kappa. InceptionV3 consistently outperformed the other models across these metrics, achieving the highest diagnostic accuracy (77.6%) and DOR (20.67), and was selected as the core engine of the chatbot's diagnostic module. The deployed chatbot provides real-time image assessments and personalised conversational support via multilingual web and mobile platforms. By combining automated image interpretation with interactive guidance, the system promotes timely consultation and informed decision-making. It offers a scalable, low-cost chatbot prototype for underserved populations and demonstrates strong potential for integration into digital health pathways. Importantly, the system is not intended to function as a formal screening tool or replace clinical diagnosis; rather, it provides preliminary guidance to encourage early medical consultation and informed health decisions.
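
The reported metrics are standard confusion-matrix quantities; in particular, the diagnostic odds ratio is DOR = (TP * TN) / (FP * FN). A small sketch with illustrative counts (not the paper's confusion matrix):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Confusion-matrix metrics of the kind reported in the study."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    dor = (tp * tn) / (fp * fn)            # diagnostic odds ratio
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "DOR": dor}

# Illustrative counts only.
print(diagnostic_metrics(tp=78, fp=22, fn=22, tn=78))
```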

23 pages, 9213 KB  
Article
Hospital-Oriented Development (HOD): A Quantitative Morphological Analysis for Collaborative Development of Healthcare and Daily Life
by Ziyi Chen, Yizhuo Wang, Hua Zhang, Jingmeng Lei, Haochun Tan, Xuan Wang and Yu Ye
Land 2025, 14(10), 1996; https://doi.org/10.3390/land14101996 - 4 Oct 2025
Abstract
With the global trend of population aging, human-centered development that integrates medical convenience with daily life quality has become a critical necessity. However, conceptual frameworks, evaluation methods, and spatial prototypes for such 'healthcare–daily-life' development remain limited. This study proposes Hospital-Oriented Development (HOD) as a framework to promote collaborative development by considering both hospital accessibility and urban development intensity, derived from multi-sourced urban data. First, a conceptual framework was established, consisting of three dimensions, i.e., network accessibility, facility completeness, and environmental comfort, which was then characterized by twelve indicators based on urban morphological features. Second, these indicators were quantitatively evaluated through detailed values measured among 20 exemplary hospitals in Shanghai selected via user-generated content. Finally, HOD performance and morphology informed the spatial prototype. The results reveal confidence intervals for each indicator and recommended spatial features. Numerically, there was a positive correlation between facility completeness and network accessibility, but a negative correlation with environmental comfort. Spatially, a context-specific HOD prototype for China was developed. This study proposes the concept of HOD, delivers quantitative measurements, and develops a spatial prototype via empirical research, providing theoretical insights and evidence to support the improvement of healthcare environments from a human-centered perspective.
(This article belongs to the Special Issue Feature Papers on Land Use, Impact Assessment and Sustainability)
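
The reported relationships among the three dimensions are pairwise correlations across the sampled hospitals. A minimal sketch of that computation on random stand-in scores (the generated correlation structure merely mimics the stated signs and is not the paper's data):

```python
import numpy as np

# Illustrative only: correlate HOD dimension scores across 20 hospitals.
rng = np.random.default_rng(1)
network_accessibility = rng.uniform(0, 1, 20)
facility_completeness = 0.7 * network_accessibility + 0.3 * rng.uniform(0, 1, 20)
environmental_comfort = 1 - 0.5 * facility_completeness + 0.2 * rng.uniform(0, 1, 20)

scores = np.vstack([network_accessibility, facility_completeness,
                    environmental_comfort])
print(np.round(np.corrcoef(scores), 2))  # 3x3 pairwise Pearson correlations
```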

22 pages, 2815 KB  
Article
Optimization of Pavement Maintenance Planning in Cambodia Using a Probabilistic Model and Genetic Algorithm
by Nut Sovanneth, Felix Obunguta, Kotaro Sasai and Kiyoyuki Kaito
Infrastructures 2025, 10(10), 261; https://doi.org/10.3390/infrastructures10100261 - 29 Sep 2025
Abstract
Optimizing pavement maintenance and rehabilitation (M&R) strategies is essential, especially in developing countries with limited budgets. This study presents an integrated framework combining a deterioration prediction model and a genetic algorithm (GA)-based optimization model to plan cost-effective M&R strategies for flexible pavements, including asphalt concrete (AC) and double bituminous surface treatment (DBST). The GA schedules multi-year interventions by accounting for varied deterioration rates and budget constraints to maximize pavement performance. The optimization process involves generating a population of candidate solutions, each representing a set of road sections selected for maintenance, followed by fitness evaluation and solution evolution. A mixed Markov hazard (MMH) model is used to capture uncertainty in pavement deterioration, simulating condition transitions influenced by pavement bearing capacity, traffic load, and environmental factors. The MMH model employs an exponential hazard function and Bayesian inference via Markov Chain Monte Carlo (MCMC) to estimate deterioration rates and life expectancies. A case study on Cambodia's road network evaluates six budget scenarios (USD 12–27 million) over a 10-year period, identifying the USD 18 million budget as the most effective. The framework enables road agencies to assess maintenance strategies under various financial and performance conditions, supporting data-driven, sustainable infrastructure management and optimal fund allocation.
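
The GA step described (a population of candidate section selections, fitness evaluation under a budget constraint, and solution evolution) reduces to a knapsack-style search. A self-contained toy sketch under assumed costs and condition-improvement benefits, not the paper's calibrated MMH-based model:

```python
import random

random.seed(0)
N = 40                                                   # road sections
cost = [random.uniform(0.2, 2.0) for _ in range(N)]      # USD millions
benefit = [random.uniform(0.5, 3.0) for _ in range(N)]   # condition gain
BUDGET = 18.0                                            # USD millions

def fitness(bits):
    """Total benefit of the selected sections; infeasible plans penalized."""
    spent = sum(c for c, b in zip(cost, bits) if b)
    if spent > BUDGET:
        return -1.0
    return sum(g for g, b in zip(benefit, bits) if b)

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(60)]
for _ in range(200):                     # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                     # elitist selection
    children = []
    while len(children) < 40:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]        # one-point crossover
        child[random.randrange(N)] ^= 1  # bit-flip mutation
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print(f"best plan value: {fitness(best):.2f}")
```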

20 pages, 2799 KB  
Article
Evaluating Spatial Representativity in a Stakeholder-Driven Honeybee Monitoring Network Across Italy
by Sergio Albertazzi, Irene Guerra, Laura Bortolotti, Piotr Medrzycki and Manuela Giovanetti
Land 2025, 14(10), 1957; https://doi.org/10.3390/land14101957 - 27 Sep 2025
Abstract
Stakeholder participation is increasingly promoted in ecological monitoring programmes, yet it raises critical questions about the spatial representativity and scientific robustness of the resulting datasets. This study evaluates the representativeness of BeeNet, Italy's national honeybee monitoring network (2019–2025), in depicting the agricultural landscape despite the non-randomised placement of apiaries. Apiaries were selected from voluntary beekeepers, balancing stakeholder participation with the objectives of the project. The distribution of over 300 workstations was assessed across Italian regions in relation to surface area and agricultural land-use composition, using Corine Land Cover (CLC) data aggregated into macro-categories. The analysis revealed that, although regional imbalances persist, particularly in mountainous areas or regions with challenging climatic conditions, the network broadly reflects the agricultural landscape in accordance with project objectives. Agricultural categories such as "orchards," "meadows," and "complex agricultural surfaces" are often well represented, though limitations in CLC classification likely lead to underestimation in mosaic agroecosystems, such as mixed olive groves and vineyards. An overrepresentation of "anthropic" areas indicated a tendency to situate apiaries in rural yet accessible locations. By combining spatial analyses with field observations and apiary-level data, a refined categorisation of land types and explicit consideration of beekeeping practices, such as nomadism, could strengthen the interpretative capacity of such a network. The results underline the importance of spatial validation of stakeholder-driven monitoring to ensure ecological datasets are reliable, policy-relevant, and scientifically robust.
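
Representativity of this kind is often checked by comparing land-cover composition around the stations with the regional composition. A minimal sketch with made-up CLC macro-category shares (not BeeNet data):

```python
import numpy as np

# Hypothetical CLC macro-category shares: regional vs. around apiaries.
categories = ["arable", "orchards", "meadows", "complex agric.", "anthropic"]
regional = np.array([0.40, 0.10, 0.15, 0.25, 0.10])
apiary = np.array([0.35, 0.12, 0.16, 0.22, 0.15])

ratio = apiary / regional                 # representativity ratio per class
for cat, r in zip(categories, ratio):
    status = ("over-represented" if r > 1.1
              else "under-represented" if r < 0.9 else "balanced")
    print(f"{cat:16s} ratio = {r:.2f}  {status}")
```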

26 pages, 7282 KB  
Article
Simulation of Urban Sprawl Factors in Medium-Scale Metropolitan Areas Using a Cellular Automata-Based Model: The Case of Erzurum, Turkey
by Şennur Arınç Akkuş, Ahmet Tortum and Dilan Kılıç
Appl. Sci. 2025, 15(19), 10377; https://doi.org/10.3390/app151910377 - 24 Sep 2025
Abstract
Urban development is the planned growth of cities that takes into account ecological issues, the needs of urban life, social and technical equipment standards, and quality of life. However, as a result of both planned and unplanned policies implemented by decision-makers and users, urban space is expanding spatially outwards from the city while also experiencing densification in vacant areas within the city and functional transformations in land use. This process, known as urban sprawl, has been intensely debated over the past century. Making the negative effects of urban sprawl measurable and understandable from a scientific perspective is critically important for sustainable urban planning and management. Transportation surfaces hold a significant share in the land use patterns of cities expanding in physical space, and accessibility is one of the main driving forces behind land use change. Therefore, the most significant consequence of urban sprawl is the increase in urban mobility, which is shaped by the needs of urban residents to access urban functions. This increase poses risks for the planning period in terms of time, cost, and especially environmental impact. Urban space has a dynamic and complex structure, and planning depends on being able to predict how this structure will change over time. Geometric models were initially used to study cities, but as the network of urban relationships grew more complicated, more advanced computational methods became necessary. Artificial Neural Networks, Support Vector Machines, Agent-Based Models, Markov Chain Models, and Cellular Automata, developed using computer-aided design technologies, are examples of these approaches. In this study, the temporal change in urban sprawl and its relationship with influencing factors are revealed using the SLEUTH model, one of the cellular automata-based urban simulation models. Erzurum, a medium-sized metropolitan city that gained importance after provincial borders were converted into municipal borders under Metropolitan Law No. 6360, was selected as the case study area. The urban sprawl process and its determining factors in Erzurum are analyzed with the SLEUTH model. By simulating the current situation over the specified time periods and generating future scenarios, the aim is to develop planning decisions with sustainable, ecological, and optimal size and density values.
(This article belongs to the Section Civil Engineering)
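
SLEUTH itself couples several calibrated growth rules; as a toy analogue of one of them, the sketch below runs a probabilistic cellular-automaton edge-growth step on a synthetic grid. All parameters are assumptions for illustration, not the calibrated Erzurum model.

```python
import numpy as np

rng = np.random.default_rng(42)

def grow(urban: np.ndarray, p_spread: float = 0.05) -> np.ndarray:
    """One toy cellular-automaton step: a non-urban cell urbanizes with
    probability proportional to its urbanized Moore neighbors. A loose
    analogue of SLEUTH-style edge growth."""
    neighbors = sum(np.roll(np.roll(urban, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    p = p_spread * neighbors / 8.0
    newly = (rng.random(urban.shape) < p) & (urban == 0)
    return urban | newly.astype(urban.dtype)

city = np.zeros((100, 100), dtype=int)
city[45:55, 45:55] = 1                    # seed settlement
for _ in range(30):                       # 30 annual growth steps
    city = grow(city)
print("urbanized cells:", int(city.sum()))
```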

33 pages, 978 KB  
Article
An Interpretable Clinical Decision Support System Aims to Stage Age-Related Macular Degeneration Using Deep Learning and Imaging Biomarkers
by Ekaterina A. Lopukhova, Ernest S. Yusupov, Rada R. Ibragimova, Gulnaz M. Idrisova, Timur R. Mukhamadeev, Elizaveta P. Grakhova and Ruslan V. Kutluyarov
Appl. Sci. 2025, 15(18), 10197; https://doi.org/10.3390/app151810197 - 18 Sep 2025
Abstract
The use of intelligent clinical decision support systems (CDSS) has the potential to improve the accuracy and speed of diagnoses significantly. These systems can analyze a patient's medical data and generate comprehensive reports that help specialists better understand and evaluate the current clinical scenario. This capability is particularly important when dealing with medical images, as the heavy workload on healthcare professionals can hinder their ability to notice critical biomarkers, which may be difficult to detect with the naked eye due to stress and fatigue. Implementing a CDSS that uses computer vision (CV) techniques can alleviate this challenge. However, one of the main obstacles to the widespread use of CV and intelligent analysis methods in medical diagnostics is the lack of a clear understanding among diagnosticians of how these systems operate. A better understanding of their functioning and of the reliability of the identified biomarkers will enable medical professionals to more effectively address clinical problems. Additionally, it is essential to tailor the training process of machine learning models to medical data, which are often imbalanced due to varying probabilities of disease detection. Neglecting this factor can compromise the quality of the developed CDSS. This article presents the development of a CDSS module focused on diagnosing age-related macular degeneration. Unlike traditional methods that classify diseases or their stages based on optical coherence tomography (OCT) images, the proposed CDSS provides a more sophisticated and accurate analysis of biomarkers detected through a deep neural network. This approach combines interpretative reasoning with highly accurate models, although these models can be complex to describe. To address the issue of class imbalance, an algorithm was developed to optimally select biomarkers, taking into account both their statistical and clinical significance. As a result, the algorithm prioritizes the selection of classes that ensure high model accuracy while maintaining clinically relevant responses generated by the CDSS module. The results indicate that the overall accuracy of staging age-related macular degeneration increased by 63.3% compared with traditional methods of direct stage classification using a similar machine learning model. This improvement suggests that the CDSS module can significantly enhance disease diagnosis, particularly in situations with class imbalance in the original dataset. To improve interpretability, the process of determining the most likely disease stage was organized into two steps. At each step, the diagnostician could visually access information explaining the reasoning behind the intelligent diagnosis, thereby assisting experts in understanding the basis for clinical decision-making.
(This article belongs to the Section Computing and Artificial Intelligence)
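
The abstract describes ranking biomarkers by combined statistical and clinical significance under class imbalance. A hypothetical sketch of such a ranking (effect sizes from synthetic two-class data, expert weights drawn at random; none of this is the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n_biomarkers = 12

# Synthetic stand-ins: majority-class and minority-class measurements.
X0 = rng.normal(0.0, 1.0, size=(200, n_biomarkers))   # stage A patients
X1 = rng.normal(0.4, 1.0, size=(60, n_biomarkers))    # stage B (minority)

# Statistical separability: standardized difference of class means.
effect = np.abs(X1.mean(0) - X0.mean(0)) / np.concatenate([X0, X1]).std(0)
clinical_weight = rng.uniform(0.5, 1.0, n_biomarkers)  # expert prior (made up)

score = effect * clinical_weight           # combined significance score
top_k = np.argsort(score)[::-1][:5]        # keep the five best biomarkers
print("selected biomarker indices:", top_k.tolist())
```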

42 pages, 2583 KB  
Review
Wind Field Modeling over Hilly Terrain: A Review of Methods, Challenges, Limitations, and Future Directions
by Weijia Wang and Fubin Chen
Appl. Sci. 2025, 15(18), 10186; https://doi.org/10.3390/app151810186 - 18 Sep 2025
Abstract
Accurate wind field modeling over hilly terrain is critical for wind energy, infrastructure safety, and environmental assessment, yet its inherent complexity poses significant simulation challenges. This paper systematically reviews this field's major advances by analyzing 610 key publications from 2015 to 2024, selected from core databases (e.g., Web of Science and Scopus) through targeted keyword searches (e.g., 'wind flow', 'complex terrain', 'CFD', 'hilly') and subsequent rigorous relevance screening. We critique four primary modeling paradigms—field measurements, wind tunnel experiments, Computational Fluid Dynamics (CFD), and data-driven methods—across three key application areas, filling a gap left by previous single-focus reviews. The analysis confirms CFD's dominance (75% of studies), with a clear shift from idealized 2D to real 3D terrain. Key findings indicate that high-fidelity coupled models (e.g., LES), validated against benchmark field experiments such as Perdigão, can reduce mean wind speed prediction bias to below 0.1 m/s, and that optimized engineering designs for mountainous infrastructure can mitigate local wind speed amplification effects by 15–20%. Data-driven surrogate models, represented by FuXi-CFD, show revolutionary potential, reducing the inference time for high-resolution wind fields from hours to seconds, though they currently lack standardized validation. Finally, this review summarizes persistent challenges and outlines future directions, advocating for physics-informed neural networks, high-fidelity multi-scale models, and the establishment of open-access benchmark datasets.
