Search Results (4,319)

Search Parameters:
Keywords = core-scale

29 pages, 2585 KB  
Article
Characterizing the Spatiotemporal Complexity of Power Outages in the U.S. Power Grid: A Reliability Assessment Perspective
by Qun Yu, Zhiyi Zhou, Tongshuai Jin, Weimin Sun and Jiongcheng Yan
Energies 2026, 19(5), 1252; https://doi.org/10.3390/en19051252 - 2 Mar 2026
Abstract
With the intensification of climate change, deepening energy transition, and increasing social vulnerability, extreme power outage events pose escalating challenges to the governance capacity of modern power systems. Existing evaluation frameworks primarily focus on engineering reliability and economic loss estimation, lacking systematic quantification of the governance complexity arising from multidimensional interacting pressures behind outage events. This creates a blind spot in both theoretical research and governance practice, hindering differentiated resilience decision-making. To address this gap, this study develops a four-dimensional evaluation framework of power outage governance complexity encompassing event attributes, external environment, internal system, and social impacts. Based on county-level outage data and multi-source auxiliary data in the United States from 2015 to 2024 and employing the XGBoost–SHAP interpretable machine learning approach, we construct the Power Outage Complexity Index (POCI) for all U.S. counties and systematically analyze its spatiotemporal evolution and core driving factors. The results show that outage governance complexity in the U.S. power grid exhibits a significant upward trend during 2015–2024, with an average annual growth rate of 1.84%. Spatially, significant positive autocorrelation is observed, and 146 high-complexity hotspot counties are identified, mainly clustered along the East and West Coasts, the Gulf Coast, and the Southwest. Driver analysis reveals that social impact and event attribute dimensions together account for nearly 90% of the variance in complexity, with cumulative outage exposure burden, outage frequency, and large-scale event ratio being the most critical drivers. 
Theoretically, this study extends power resilience research from an engineering-physical paradigm to a socio-technical governance paradigm and provides a reproducible methodological framework for assessing governance complexity in critical infrastructure systems. Practically, the POCI can serve as a governance diagnostic tool for the power industry and regulators, supporting resilience investment prioritization, emergency resource optimization, and differentiated governance strategy formulation. It also provides empirical evidence for safeguarding energy security in highly vulnerable communities and promoting energy resilience equity. Full article
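The composite-index construction summarized above can be illustrated with a minimal sketch: normalize each county-level indicator to [0, 1] and aggregate with weights. Note that the paper derives indicator importance via XGBoost–SHAP rather than fixed weights, and the indicator names, values, and weights below are hypothetical.

```python
def minmax(values):
    """Min-max normalize a list of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights):
    """Weighted sum of normalized indicator columns.

    indicators: dict of name -> list of raw values (one per county)
    weights:    dict of name -> weight (should sum to 1)
    """
    norm = {k: minmax(v) for k, v in indicators.items()}
    n = len(next(iter(indicators.values())))
    return [sum(weights[k] * norm[k][i] for k in indicators) for i in range(n)]

# Hypothetical indicators for three counties
indicators = {
    "outage_frequency": [2, 10, 6],
    "exposure_burden": [100, 900, 500],
}
weights = {"outage_frequency": 0.5, "exposure_burden": 0.5}
poci = composite_index(indicators, weights)
```

In this toy setup the second county dominates both indicators, so it receives the highest score.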
24 pages, 1346 KB  
Systematic Review
Artificial Intelligence in Cadastre: A Systematic Review of Methods, Applications, and Trends
by Jingshu Chen, Majid Nazeer, Bo Sum Lee and Man Sing Wong
Land 2026, 15(3), 411; https://doi.org/10.3390/land15030411 - 2 Mar 2026
Abstract
Land surveying and registration are core functions of land administration and are essential to socio-economic development, where accuracy and efficiency are paramount. Until now, customary land surveying and registration have relied on manual input, which undermines efficiency and is prone to errors in data handling. During the last decade, the exponential growth of artificial intelligence (AI), in particular geospatial artificial intelligence (GeoAI), has provided new methodologies that can overcome these deficiencies. This review examines AI in cadastral management by analyzing technical solutions and trends across three areas: data collection, modeling, and common applications. It aims to provide a comprehensive survey of the current use of AI in cadastral management and to define future research avenues. Based on a comprehensive review of the literature, this study reaches the following three conclusions. (1) Automated extraction of parcel boundaries has been achieved through deep learning in data collection and processing, removing the bottlenecks of manual interpretation. Models such as convolutional neural networks (CNNs) and Transformers have been used for pixel-level semantic segmentation of high-resolution remote sensing images, leading to significant improvements in efficiency and accuracy. (2) Non-spatial data have been processed with natural language processing techniques to automatically extract information and construct relationships, thus overcoming the limitations of paper-based archives and traditional relational databases. (3) Deep learning models have been applied to automatically detect parcel changes and to enable integrated analysis of spatial and non-spatial data, which has supported the transition of cadastral management from two-dimensional to three-dimensional. 
However, several challenges remain, including differences in multi-temporal data processing, spatial semantic ambiguity, and the lack of large-scale, high-quality annotated data. Future research can focus on improving model generalization, advancing cross-modal data fusion, and providing recommendations for the development of a reliable and practical intelligent cadastral system. Full article
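Parcel-boundary extraction quality in work like this is commonly scored with intersection-over-union (IoU) between predicted and reference pixel masks. The review does not fix a specific metric, so this is a generic sketch with hypothetical masks.

```python
def iou(pred, truth):
    """IoU of two binary pixel masks given as flat lists of 0/1."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0

# Hypothetical flattened masks for a tiny image patch
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
score = iou(pred, truth)  # intersection 2, union 4
```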

38 pages, 9716 KB  
Article
Research on Spatial Information Network Vulnerability Analysis Methodology Based on Multi-Layer Hypernetworks
by Xiaolan Yu, Wei Xiong and Yali Liu
Sensors 2026, 26(5), 1570; https://doi.org/10.3390/s26051570 - 2 Mar 2026
Abstract
As the core infrastructure for providing all-weather, full-coverage, high-speed, and diversified information services, spatial information networks (SINs) possess significant social, economic, and military value. However, due to the inherent characteristics of their network architecture, SINs are susceptible to core service paralysis and functional failure under large-scale targeted attacks or random disturbances, posing a critical bottleneck that constrains their stable operation. Current research on SIN vulnerability is predominantly confined to a single network topology perspective, lacking an integrated consideration of the task execution perspective. Consequently, it fails to accommodate the dual requirements of “network topology stability” and “task execution effectiveness”. To address the aforementioned research needs and challenges, this study adopts a “topology-task” dual-perspective fusion approach and proposes a vulnerability analysis framework for SINs that integrates multi-layer networks and hypernetworks. First, a two-layer SIN topology model encompassing the user layer and the satellite layer is constructed. Leveraging hypernetwork theory, information tasks involving multiple network entities are formally defined, and an integrated multi-layer hypernetwork model is established. Second, based on distinct task types, three categories of task efficiency evaluation metrics are defined, and corresponding quantitative methods for calculating SIN vulnerability are derived. Third, during the vulnerability analysis phase, a novel strategy for identifying and removing overlapping nodes in hypernetworks is introduced to enable precise localization of critical nodes within the network. Concurrently, a pre-attack node hardening strategy is designed to minimize the impact of attacks on network performance. 
Finally, systematic analysis of vulnerability and critical-node characteristics under different node removal strategies demonstrates enhanced network performance. The effectiveness of the proposed method is validated by comparing the defense performance of the hardening strategy across various attack scenarios. To verify the feasibility and superiority of the proposed method, this study designs 5 × 5 groups of simulation experiments with varying network parameters. The results indicate that, compared with traditional methods, the proposed strategy can more accurately identify core nodes affecting the stable operation of SINs, significantly reducing network vulnerability and improving network survivability. In addition, a comprehensive sensitivity analysis of SIN vulnerability is conducted along three key dimensions (mission scale, satellite count, and constellation configuration), clarifying the impact of each on network invulnerability. Thus, this paper provides a reliable theoretical foundation and technical support for the planning, design, optimal deployment, and operation and maintenance management of SINs. Full article
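The topology side of such a vulnerability analysis is often quantified as the drop in global network efficiency when a critical node is removed. A minimal single-layer sketch on a toy graph (the paper's multi-layer hypernetwork metrics are richer than this):

```python
from collections import deque

def efficiency(adj):
    """Global efficiency: mean inverse shortest-path length over node pairs."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = {s: 0}          # BFS hop counts from s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                pairs += 1
                if t in dist:
                    total += 1.0 / dist[t]
    return total / pairs if pairs else 0.0

def remove_node(adj, x):
    """Drop node x and all edges touching it."""
    return {u: [v for v in nbrs if v != x] for u, nbrs in adj.items() if u != x}

# Toy topology: hub 0 connects to 1..3, plus a direct edge between 1 and 2
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
before = efficiency(adj)
after = efficiency(remove_node(adj, 0))  # targeted attack on the hub
drop = before - after
```

Removing the hub disconnects node 3 entirely, so efficiency falls sharply, which is the signature of a high-vulnerability topology.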
(This article belongs to the Section Sensor Networks)

30 pages, 507 KB  
Article
How Does Data Factor Allocation Drive the Niche Leap of Startups? The Mediating Role of Digital Capability Integration and the Moderating Effect of Data Governance Maturity
by Tong Shi, Haiqing Hu and Xinyue Qin
Sustainability 2026, 18(5), 2422; https://doi.org/10.3390/su18052422 - 2 Mar 2026
Abstract
Against the backdrop of the digital economy reshaping the global competitive landscape and the urgent demand for sustainable development, how data factors drive startups to break through resource constraints, achieve a niche leap, and realize long-term sustainable growth has become a critical issue of common concern in academia and policy circles. Drawing on resource orchestration theory and the dynamic capability view, this study constructs a theoretical framework of “Data Factor Allocation→Digital Capability Integration→Niche Leap→Sustainable Growth” and conducts an empirical test, using 412 technology-based startups as samples. The findings are as follows: (1) Data factor allocation (encompassing scenario-based access, lightweight tool penetration, and ecological sharing) exerts a significant inverted U-shaped effect on both digital capability integration and the startup niche leap (range of quadratic term coefficients for core dimensions: −0.165~−0.203, p < 0.01), with turning points between 3.41 and 3.72 on a 5-point scale. Excessive data investment may trigger risks of capability hollowing and niche lock-in, hindering sustainable growth. (2) Digital capability integration (including technology application, resource coordination, and dynamic adaptation capabilities) plays a non-linear mediating role, with mediation proportions ranging from 18.7% to 32.4%. Among them, the technology application capability exhibits the highest transmission efficiency between lightweight tool penetration and the niche leap (32.4%), thereby promoting sustainable value creation. (3) The moderating effect of data governance maturity is heterogeneous: governance adaptability significantly strengthens the mediating path of the technology application capability (β = 0.187, p < 0.01) and security compliance enhances the transmission efficiency of the resource coordination capability (β = 0.165, p < 0.01), while the moderating effect of open sharing is insignificant. 
These findings provide a dynamic framework for the non-linear and sustainable leap of startups by integrating two core theories. They offer a decision-making basis for enterprises to optimize data allocation strategies (e.g., controlling allocation thresholds to avoid resource waste) and for governments to improve governance policies (e.g., data vouchers, trusted data spaces), thereby facilitating the implementation of the “Data Factor × Innovation and Entrepreneurship × Sustainable Development” initiative and promoting the sustainable growth of the digital economy ecosystem. Full article
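The reported turning points follow directly from the quadratic specification: for y = β1·x + β2·x², an inverted U peaks at x* = −β1/(2·β2). A quick check, using the paper's quadratic coefficient −0.203 and a hypothetical linear coefficient chosen only for illustration:

```python
def turning_point(beta1, beta2):
    """Peak of y = beta1*x + beta2*x**2 (inverted U requires beta2 < 0)."""
    assert beta2 < 0, "inverted-U shape needs a negative quadratic term"
    return -beta1 / (2 * beta2)

# beta2 = -0.203 is from the reported coefficient range; beta1 = 1.45 is
# a hypothetical value chosen to land near the reported turning points.
x_star = turning_point(1.45, -0.203)
```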
20 pages, 5116 KB  
Article
Improvement of the Nattokinase Production in Bacillus subtilis by Multiscale Breeding Strategies
by Jia-Chang Li, Shu-Ping Tian and Jian-Zhong Xu
Fermentation 2026, 12(3), 130; https://doi.org/10.3390/fermentation12030130 - 2 Mar 2026
Abstract
This study aims to construct a nattokinase (NK) high-yielding strain using a multiscale breeding method. First, an NK-producing strain, Bacillus subtilis A-1, was isolated from fermented soybean; it produces 254 FU/mL of NK. Subsequently, ARTP mutagenesis was employed to screen high-yield mutants with resistance to rifampicin (strain R-F7), kanamycin (strain K-E11), and gentamicin (strain G-D5); the resulting strains showed NK activity increases of 113.78%, 76.38%, and 62.99%, respectively. Moreover, a fusion strain, C-D7, resistant to all three antibiotics (rifampicin, kanamycin, and gentamicin) was obtained by protoplast fusion; it produced 610 FU/mL of NK, 140.16% higher than that of strain A-1. The fermentation performance of strain C-D7 was also evaluated in a 5-L bioreactor, and the results indicated that strain C-D7 produced 1020 ± 35 FU/mL of NK under a two-stage pH control strategy and a two-step feeding strategy. To elucidate the genetic basis of the high-yield phenotype of C-D7, comparative whole-genome analysis was performed between C-D7 and the parental strain A-1. The results revealed that C-D7 harbors specific mutations across multiple functional categories, primarily in genes related to transcription, translation, and global regulation, as well as metabolism and secretion. The biological processes affected by these mutations correlate strongly with the high-yield trait, suggesting a potential collective role in the observed increase in nattokinase production. Lastly, ituD and srfAC were knocked out to reduce foaming during fermentation, thus reducing the use of antifoaming agents and mitigating their negative effects on cell growth. In summary, a genetically stable, high-yield, and low-foaming Bacillus subtilis strain, C-D7-ΔDouble, was constructed in this study, providing a core microbial resource and process foundation for the low-cost industrial production of nattokinase. Full article
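The reported 140.16% gain is the relative increase from the parent strain's 254 FU/mL to the fusion strain's 610 FU/mL, which is easy to verify:

```python
parent_nk = 254   # FU/mL, strain A-1
fusion_nk = 610   # FU/mL, fusion strain C-D7
increase_pct = (fusion_nk - parent_nk) / parent_nk * 100
# (610 - 254) / 254 * 100 ≈ 140.16
```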
(This article belongs to the Special Issue Metabolic Engineering, Strain Modification and Industrial Application)

23 pages, 919 KB  
Article
A Hybrid Deep Learning Architecture for Intrusion Detection Deploying Multi-Scale Feature Interaction and Temporal Modeling
by Eva Jakubcova, Maros Jakubec and Peter Pocta
AI 2026, 7(3), 87; https://doi.org/10.3390/ai7030087 (registering DOI) - 2 Mar 2026
Abstract
Network intrusion detection is a core component of modern cybersecurity, but it remains challenging due to highly imbalanced traffic, heterogeneous feature types, and the presence of short-term temporal dependencies in network flows. Traditional machine learning models often rely on handcrafted features and struggle with complex attack patterns, while deep learning approaches may become overly complex or difficult to interpret. In this paper, we propose a neural intrusion detection method that combines structured feature preprocessing with a compact hybrid architecture. Numerical and categorical traffic features are processed separately using robust normalisation and trainable embeddings, and then merged into a unified representation. The proposed model builds on a multi-scale feature interaction block, followed by channel-wise attention and a single bidirectional gated recurrent unit layer with attention pooling to capture short-term temporal behavior. The method is evaluated on two widely used benchmark datasets, i.e., the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Experimental results show that the proposed approach consistently outperforms classical machine learning baselines and achieves competitive or superior performance compared to recent deep learning methods proposed in the literature. The results confirm that the proposed architectural choices effectively capture both feature interactions and temporal patterns in network traffic. Full article
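The "robust normalisation" of numerical traffic features mentioned above usually means centering on the median and scaling by the interquartile range, which damps the influence of extreme flows; the paper does not give its exact formula, so this is a generic sketch.

```python
def robust_scale(values):
    """Center on the median and scale by the interquartile range (IQR).

    Uses simple index-based quartiles; library implementations may
    interpolate differently.
    """
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    q1 = s[n // 4]
    q3 = s[(3 * n) // 4]
    iqr = (q3 - q1) or 1.0  # avoid division by zero on constant features
    return [(v - median) / iqr for v in values]

# A feature column with one extreme outlier barely shifts the scale
scaled = robust_scale([1, 2, 3, 4, 1000])
```

Unlike min-max scaling, the bulk of the values stays in a narrow band around zero while the outlier remains visibly extreme.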
(This article belongs to the Section AI Systems: Theory and Applications)

13 pages, 243 KB  
Article
Does Clinical Training Influence Empathy in Dental Students? Evidence from a Cross-Sectional Study in Lithuania
by Kornelija Rogalnikovaitė, Julija Narbutaitė, Vilija Andruškevičienė, Vilma Brukienė and Eglė Aida Bendoraitienė
Dent. J. 2026, 14(3), 137; https://doi.org/10.3390/dj14030137 - 2 Mar 2026
Abstract
Background/Objectives: Empathy is a core component of professional competence in dentistry, influencing patient-centered care and treatment outcomes. Evidence suggests that empathy may decline during clinical training, but data from Lithuanian dental students are lacking. This study aimed to assess empathy levels and subscale patterns among Lithuanian dental students and examine their association with academic year. Methods: A cross-sectional study was conducted among third- to fifth-year dental students at the two universities in Lithuania. The Lithuanian version of the Jefferson Scale of Empathy–Health Professions Students (JSE-HPS) was used to measure total empathy and three subscales: Perspective Taking (PT), Compassionate Care (CC), and Standing in the Patient’s Shoes (SPS). Internal consistency was assessed using Cronbach’s alpha. Factor validity was examined via principal component analysis with Varimax rotation and Kaiser normalization. Differences across academic years were analyzed using Kruskal–Wallis tests. Results: A total of 252 students completed the questionnaire (response rate: 93%). The Lithuanian JSE-HPS demonstrated good internal consistency (α = 0.808) and confirmed a three-factor structure. The mean total empathy score was 106.07 ± 12.55. JSE-HPS scores differed significantly between dental classes (p < 0.001). Fifth-year students had significantly lower JSE-HPS scores than third- and fourth-year students (101.65 vs. 107.05 and 109.36; p = 0.035 and p = 0.007). PT and CC scores significantly declined in fifth-year students compared with earlier years, whereas SPS scores remained stable. Conclusions: The Lithuanian version of the JSE-HPS is a reliable and psychometrically sound tool for assessing empathy. Clinical training was significantly associated with a decline in total empathy scores among Lithuanian dental students, highlighting the impact of academic progression on both cognitive and affective components of empathy. 
Given the cross-sectional design, causal inferences cannot be drawn. Full article
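The internal-consistency figure reported above (α = 0.808) is Cronbach's alpha, computable directly from item-level scores; a minimal sketch with hypothetical responses (the study's real data has 20 items and 252 respondents):

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of item-score columns, each a list of per-respondent scores."""
    k = len(items)
    item_var = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(len(items[0]))]
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item, 4-respondent Likert data
items = [[3, 4, 5, 4], [2, 4, 5, 3], [3, 5, 5, 4]]
alpha = cronbach_alpha(items)
```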
(This article belongs to the Special Issue Dental Education: Innovation and Challenge)

25 pages, 3940 KB  
Article
GDEIM-SF: A Lightweight UAV Detection Framework Coupling Dehazing and Low-Light Enhancement
by Jihong Zheng and Leqi Li
Sensors 2026, 26(5), 1557; https://doi.org/10.3390/s26051557 - 2 Mar 2026
Abstract
In complex traffic environments, image degradation caused by haze, low illumination, and occlusion significantly undermines the reliability of vehicle and pedestrian detection. To address these challenges, this paper proposes an aerial vision framework that tightly couples multi-level image enhancement with a lightweight detection architecture. At the image preprocessing stage, a cascaded “dehazing + enhancement” module is constructed, where a learning-based dehazing method is employed to restore long-range details affected by scattering artifacts. Additionally, structural fidelity is enhanced in low-light regions, while global brightness consistency is achieved. On the detection side, a lightweight yet robust detection architecture, termed GDEIM-SF, is designed. It adopts GoldYOLO as the lightweight backbone and integrates D-FINE as an anchor-free decoder. Moreover, two key modules, CAPR and ASF, are incorporated to enhance high-frequency edge modeling and multi-scale semantic alignment. Through evaluation on the VisDrone dataset, the proposed method achieves improvements of approximately 2.5 to 2.7 percentage points in core metrics such as mAP@50-90 compared to similar lightweight models, while maintaining a low parameter count and computational overhead. This ensures a balanced trade-off among detection accuracy, inference efficiency, and deployment adaptability, providing a practical and efficient solution for UAV-based visual perception tasks under challenging imaging conditions. Full article
(This article belongs to the Section Sensing and Imaging)

20 pages, 1584 KB  
Article
Solving the High-Speed Railway Crew Matching Problem: From the Group Skill Balance and Crew Member Preference Perspective
by Wen Li, Yinzhen Li, Guiqian Luo, Tao Feng, Xiaorong Wang and Rui Xue
Mathematics 2026, 14(5), 845; https://doi.org/10.3390/math14050845 (registering DOI) - 2 Mar 2026
Abstract
The crew matching problem (CMP) is a fundamental component of the crew scheduling problem, serving as a core element that determines the service quality of crew operations and the satisfaction of crew members. It refers to planning crew teams by pairing chief stewards with stewards. However, existing studies have not sufficiently explored this issue. To address this gap, this study constructs a multi-objective optimization model for the crew matching plan from the dual perspectives of skill balance and crew team collaboration preferences. The GUROBI solver is employed to obtain exact solutions, and the model's effectiveness is validated through small-scale numerical examples. Tests are further conducted across dimensions such as weight variation and problem scale, clarifying the maximum tractable problem size within acceptable computation time. A comparative analysis is performed against manually formulated crew matching plans. The results show that, compared with manually formulated plans, the optimized crew matching plan improves the skill balance of crew team services by 5% and increases satisfaction by 80%, providing a quantitative basis for decision-making by high-speed railway crew management departments. Full article
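At small scale, matching chief stewards to stewards for skill balance can be brute-forced over permutations, which is one way to sanity-check an exact solver's output; the skill scores and spread objective below are hypothetical simplifications of the paper's multi-objective model.

```python
from itertools import permutations

# Hypothetical skill scores: team i pairs chiefs[i] with perm[i];
# objective: minimize the spread (max - min) of combined team skill.
chiefs = [7, 5, 9]
stewards = [4, 8, 6]

best_perm, best_spread = None, float("inf")
for perm in permutations(stewards):
    team_skill = [c + s for c, s in zip(chiefs, perm)]
    spread = max(team_skill) - min(team_skill)
    if spread < best_spread:
        best_perm, best_spread = perm, spread
```

Enumeration is only viable for toy instances (n! assignments), which is why the paper turns to an exact solver and reports the maximum tractable problem size.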

21 pages, 1458 KB  
Review
Microbial Metabolic Pathways for Synergistic Biomethane Augmentation and CO2 Sequestration in Coalbed Systems: A Mini-Review
by Yang Li, Longxi Shuai and Qian Zhang
Microorganisms 2026, 14(3), 566; https://doi.org/10.3390/microorganisms14030566 (registering DOI) - 2 Mar 2026
Abstract
Natural gas represents a pivotal transitional clean energy resource, and biogenic coalbed methane (CBM) is ubiquitously distributed in coal reservoirs worldwide. In the context of carbon neutrality targets and the growing demand for large-scale commercial CBM exploitation, innovative technological solutions are urgently required. CBM bioengineering aims to substantially enhance CBM production by stimulating biomethane generation, promoting gas desorption, and improving reservoir permeability, while simultaneously enabling effective CO2 sequestration. The potential for biomethane generation is largely governed by the intrinsic physicochemical characteristics of coal, including aromatic structures, maceral composition, and pore–fracture architecture. In addition, hydrogeological conditions—such as geothermal gradients, pH variability, and redox potential—play critical roles in regulating microbial functional gene expression and metabolic enzyme synthesis. Core pretreatment strategies in coalbed gas bioengineering can be broadly classified into approaches that enhance coal bioconversion potential and those that optimize functional microbial consortia. Electric fields and conductive materials can influence microbial community structure by enriching electroactive microorganisms and facilitating interspecies electron transfer. In addition to engineered conductive interventions, reservoir environmental conditions also play an important role in shaping methanogenic community structure. Experimental observations under reservoir-relevant CO2 pressure and temperature conditions indicate that deep coalbed environments are associated with shifts in methanogenic community composition, including an increased relative abundance of hydrogenotrophic methanogens. These observations suggest that physicochemical conditions in deep coal seams may favor hydrogen-dependent CO2 reduction pathways, thereby supporting hydrogenotrophic methanogenesis and contributing to biomethane generation. 
The integration of supercritical CO2 with microbially acclimated stimulation fluids as an innovative reservoir fracturing strategy offers multiple advantages, including effective reservoir stimulation, permanent carbon sequestration, and sustainable biomethane generation. Future research should focus on modulating coal matrix bioavailability, optimizing microbial consortia, enhancing interspecies metabolic synergies, and advancing carbon fixation bioprocesses to facilitate the large-scale implementation of coalbed gas bioengineering systems. This review synthesizes recent advances in microbially mediated CBM enhancement and CO2 sequestration, with a particular focus on field-scale evidence and the key challenges that must be addressed for large-scale implementation. Full article
(This article belongs to the Section Microbial Biotechnology)

14 pages, 462 KB  
Article
International Tourists’ Perceptions of Smart Tourism Features in Small Island Developing Countries
by Anaísa Dias and Nuno Abranja
Tour. Hosp. 2026, 7(3), 66; https://doi.org/10.3390/tourhosp7030066 (registering DOI) - 2 Mar 2026
Abstract
Small islands in developing countries often face infrastructural limitations, environmental fragility, and heavy economic dependence on tourism, making smart and sustainable innovation crucial. This study investigates what international tourists value in a destination to perceive it as a “smart island,” applying the smart city paradigm to the context of small island developing countries. A structured survey was conducted with 420 international tourists from diverse nationalities, using a five-point Likert scale to assess the importance of smart tourism attributes. Descriptive statistics, Pearson correlations, t-tests, and regression analyses were performed to identify significant predictors of overall satisfaction with smart tourism experiences. This study provides empirical evidence that international tourists primarily perceive destination smartness through core digital and infrastructural features rather than advanced technological sophistication. Real-time information systems emerged as the strongest predictor of perceived smartness, followed by free Wi-Fi access, sustainability-related technologies, and smart transport systems. The findings further reveal that demographic and cultural factors influence technology preferences, while immersive tools such as augmented reality play a secondary role. Overall, the results indicate that, in Small Island Developing Countries, smart tourism should be understood as a strategic approach to improving accessibility, connectivity, sustainability, and destination resilience rather than merely adopting high-end technologies. Full article
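The Pearson correlations behind such Likert-scale analyses reduce to a short formula; a minimal implementation on hypothetical responses (the variable names below are illustrative, not the survey's actual items):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical: importance of real-time information vs overall satisfaction
realtime = [5, 4, 5, 3, 4]
satisfaction = [5, 4, 4, 2, 4]
r = pearson_r(realtime, satisfaction)
```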

17 pages, 1309 KB  
Article
Path Loss Considering Atmospheric Impact in 5G Networks: A Comparison of Machine Learning Models
by Vasileios P. Rekkas, Leandro dos Santos Coelho, Viviana Cocco Mariani, Adamantini Peratikou and Sotirios K. Goudos
Technologies 2026, 14(3), 151; https://doi.org/10.3390/technologies14030151 - 2 Mar 2026
Abstract
Accurate estimation of wireless propagation characteristics is essential for guiding the design and deployment of fifth-generation (5G) communication systems. As network demand increases and 5G infrastructure is introduced in progressive phases, reliable path loss (PL) prediction models are required to refine deployment strategies and improve network efficiency. Conventional propagation models frequently display limited flexibility when applied to diverse environmental conditions and often entail considerable computational expense, reducing their practicality for large-scale 5G planning. Recent developments in data-centric artificial intelligence (AI) have enabled more adaptive and analytically powerful approaches to propagation modeling, resulting in notable gains in PL prediction accuracy. This study employs a comprehensive dataset produced using the NYUSIM channel simulator, integrating a wide spectrum of atmospheric parameters and seasonal variations within South Asian urban microcell environments, complemented by broad empirical observations. The core objective is to construct, optimize, and evaluate machine learning (ML) models capable of accurately predicting PL at high-frequency bands critical to 5G performance. A fully automated hyperparameter tuning pipeline, based on the Optuna framework, is applied to twelve regression algorithms, including advanced ensemble methods, regularized linear techniques, and classical baseline models. Performance assessment emphasizes predictive reliability, stability, and cross-model generalization. Furthermore, statistical analysis utilizing bootstrap confidence intervals and paired t-tests indicates that all ML methods perform equivalently (p > 0.4), while SHapley Additive exPlanations (SHAP) analysis across all models reveals a consistent feature importance distribution, corroborating the statistical results. 
To showcase the superiority of the ML approaches, a comparison with conventional free-space PL modeling methods is presented, with the AI methodology demonstrating robust performance across seasonal variations and a 95.3% improvement. Full article
(This article belongs to the Section Information and Communication Technologies)
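The tune-then-validate workflow described in the abstract can be sketched in miniature. This is not the paper's pipeline: synthetic log-distance path-loss data stands in for NYUSIM output, ridge regression stands in for the twelve algorithms, and a plain random search over the regularization strength stands in for Optuna's smarter samplers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic path-loss data from a log-distance model (illustrative only)
d = rng.uniform(10, 500, 600)       # link distance in metres
freq = rng.uniform(24, 40, 600)     # carrier frequency in GHz
humidity = rng.uniform(20, 90, 600) # a stand-in atmospheric feature
pl = (32.4 + 21.0 * np.log10(d) + 20.0 * np.log10(freq)
      + 0.02 * humidity + rng.normal(0, 4, 600))  # shadowing noise in dB

X = np.column_stack([np.log10(d), np.log10(freq), humidity])
X = (X - X.mean(0)) / X.std(0)      # standardize features
y = pl
Xtr, Xva, ytr, yva = X[:400], X[400:], y[:400], y[400:]

def fit_ridge(X, y, alpha):
    # closed-form ridge solution with an intercept term
    Xb = np.column_stack([np.ones(len(X)), X])
    A = Xb.T @ Xb + alpha * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def rmse(w, X, y):
    Xb = np.column_stack([np.ones(len(X)), X])
    return float(np.sqrt(np.mean((Xb @ w - y) ** 2)))

# Random search over alpha -- a simple stand-in for Optuna's TPE sampler
best_alpha, best_rmse = None, np.inf
for _ in range(50):
    alpha = 10 ** rng.uniform(-3, 2)
    score = rmse(fit_ridge(Xtr, ytr, alpha), Xva, yva)
    if score < best_rmse:
        best_alpha, best_rmse = alpha, score
print(f"best alpha={best_alpha:.4f}, validation RMSE={best_rmse:.2f} dB")
```

The validation RMSE converges toward the shadowing noise level (about 4 dB here), which is the floor any model can reach on this synthetic data; Optuna replaces the uniform random draws with adaptive sampling over the same kind of objective function.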
14 pages, 224 KB  
Communication
Hydrogen Integration in Future Local Energy Markets
by Pratik Mochi
Energies 2026, 19(5), 1234; https://doi.org/10.3390/en19051234 - 2 Mar 2026
Abstract
Local energy markets (LEMs) are increasingly promoted as coordinated market frameworks for distributed electricity resources in low-carbon energy systems. In parallel, green hydrogen is emerging as an energy carrier for long-duration storage and sector coupling. Yet hydrogen is typically treated as a technological extension of existing flexibility options rather than as a distinct market participant. This paper argues that such a perspective is conceptually insufficient for future LEM design. It proposes that hydrogen should be understood as a hybrid market participant in LEMs, rather than as a special case of load, storage, or generation. Hydrogen can simultaneously serve as flexible electricity demand, long-duration storage, and dispatchable electricity supply. These combined roles violate core assumptions embedded in electricity-only LEMs, including unidirectional energy flow, short-term time horizons, symmetric storage behavior, and an electricity-only supply option. Particular attention is given to small-to-medium-scale electrolyzers, which are likely to dominate hydrogen participation in local contexts. Rather than proposing a specific market mechanism or numerical model, this paper suggests market design considerations for future local energy markets and highlights open challenges for electricity–hydrogen market coordination. Full article
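The hybrid-participant idea can be made concrete with a small sketch. The paper deliberately proposes no market mechanism or numerical model, so the following is purely illustrative: the price thresholds, conversion efficiencies, and power ratings are invented assumptions, used only to show one asset switching between the load, storage, and supply roles the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class HydrogenParticipant:
    """Illustrative electrolyzer-plus-fuel-cell asset (all figures assumed)."""
    tank_kg: float = 0.0           # stored hydrogen
    capacity_kg: float = 100.0
    kwh_per_kg: float = 50.0       # electricity consumed per kg produced
    kg_per_kwh_out: float = 1 / 60 # hydrogen consumed per kWh generated

    def step(self, price_eur_kwh, buy_below=0.05, sell_above=0.20,
             power_kw=500.0, hours=1.0):
        """Return net grid exchange in kWh (+ = consumes, - = supplies)."""
        if price_eur_kwh <= buy_below and self.tank_kg < self.capacity_kg:
            # cheap power: act as flexible load, electrolyze and store
            made = min(power_kw * hours / self.kwh_per_kg,
                       self.capacity_kg - self.tank_kg)
            self.tank_kg += made
            return made * self.kwh_per_kg
        if price_eur_kwh >= sell_above and self.tank_kg > 0:
            # expensive power: act as dispatchable supply from storage
            used = min(power_kw * hours * self.kg_per_kwh_out, self.tank_kg)
            self.tank_kg -= used
            return -used / self.kg_per_kwh_out
        return 0.0  # mid-range price: hold stored hydrogen (storage role)

h2 = HydrogenParticipant()
prices = [0.03, 0.04, 0.12, 0.25, 0.30]  # hypothetical local prices, EUR/kWh
flows = [h2.step(p) for p in prices]
```

Over this toy price sequence the same asset appears as a 500 kWh load, an idle store, and a 500 kWh supplier, which is exactly the behavior that breaks the unidirectional-flow and symmetric-storage assumptions of electricity-only LEM designs.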
(This article belongs to the Special Issue Transitioning to Green Energy: The Role of Hydrogen)
27 pages, 34376 KB  
Article
Sedimentary Dynamic Mechanism and Spatial Differentiation Law of Little Ice Age Storm Surges in the Shallow-Buried Abandoned Yellow River Delta
by Haojian Wang, Teng Su, Hongyuan Shi, Yan Li, Hongshi Wu, Tao Lu, Shiqi Yao and Baomu Liu
Water 2026, 18(5), 598; https://doi.org/10.3390/w18050598 (registering DOI) - 28 Feb 2026
Abstract
The shallow-buried abandoned Yellow River Delta (893–1855 AD) exhibits a distinctive geomorphic system shaped by coupled fluvial sediment reduction, climatic transition, and relative sea-level fluctuations, with its intact deposits recording key stages of temperate delta evolution during climate change. Using four sediment cores, we applied optically stimulated luminescence (OSL) dating, sedimentary facies analysis, and grain-size techniques (C-M diagram, end-member modeling), integrated with geomorphic interpretation and historical data, to reconstruct the delta’s evolutionary sequence and clarify storm surge-driven geomorphic reworking and its diagnostic indicators. Results indicate that the delta’s evolution was governed by abrupt fluvial sediment loss, intensified storm dynamics, and relative sea-level rise. The 893 AD Yellow River avulsion triggered delta abandonment (893–1482 AD), driving a shift from a fluvially dominated muddy coast to a wave-controlled sandy system. Sandy deposits initially formed at M04A and prograded landward to M03A. During the Little Ice Age (1482–1855 AD), frequent storm surges further expanded and elevated these sandy accumulations, while weak sedimentation persisted in the inland depression (B03). This differential process generated a unique plain lowland–coastal highland system, a rare geomorphic type among large river deltas that differs from classic island–continent and barrier–lagoon systems. This study elucidates the phased response of temperate monsoon abandoned deltas to millennial-scale climate change, advances theories of multi-factor coupled delta evolution, and provides scientific support for coastal protection, stability assessment, and evolutionary prediction under global warming. Full article
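The end-member modeling step mentioned in the abstract unmixes each sample's grain-size distribution into a few characteristic source distributions. A minimal sketch, assuming two synthetic end members (a fine muddy mode and a coarse storm-sand mode, not the cores' actual data) and using non-negative matrix factorization with Lee-Seung multiplicative updates, one common way such unmixing is implemented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical end members over 40 grain-size bins, normalized so each
# distribution sums to 1 (Gaussian shapes are an illustrative assumption)
bins = np.arange(40)
fine = np.exp(-0.5 * ((bins - 10) / 4.0) ** 2)    # muddy mode
coarse = np.exp(-0.5 * ((bins - 28) / 3.0) ** 2)  # storm-sand mode
fine, coarse = fine / fine.sum(), coarse / coarse.sum()

# Synthetic core samples: random mixtures of the two end members
mix = rng.uniform(0, 1, (30, 1))
V = mix * fine + (1 - mix) * coarse

# Lee-Seung multiplicative updates minimize ||V - W @ H|| with W, H >= 0
W = rng.uniform(0.1, 1.0, (30, 2))  # per-sample end-member loadings
H = rng.uniform(0.1, 1.0, (2, 40))  # recovered end-member distributions
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {rel_err:.4f}")
```

On real cores the loadings in `W`, plotted downcore, are what reveal alternating storm-sand and background-mud contributions; dedicated EMMA packages add constraints and end-member-number selection on top of this basic factorization.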
(This article belongs to the Special Issue Coastal Engineering and Fluid–Structure Interactions, 2nd Edition)
36 pages, 2422 KB  
Article
PDGV-DETR: Object Detection for Secure On-Site Weapon and Personnel Location Based on Dynamic Convolution and Cross-Scale Semantic Fusion
by Nianfeng Li, Peizeng Xin, Jia Tian, Xinlu Bai, Hongjie Ding, Zhiguo Xiao and Qian Liu
Sensors 2026, 26(5), 1542; https://doi.org/10.3390/s26051542 - 28 Feb 2026
Abstract
In public safety scenarios, the precise detection and positioning of prohibited weapons such as firearms and knives, along with the personnel involved, are core prerequisite technologies for violent-risk warning and emergency response. However, security surveillance scenarios commonly involve object occlusion, difficulty in capturing small-sized weapons, and complex background interference, leaving existing general object detection models with poor adaptability, low detection accuracy, and insufficient robustness in security-related detection and positioning tasks. Therefore, this paper proposes a threat object detection framework for security scenarios (PDGV-DETR) based on adaptive dynamic convolution and cross-scale semantic fusion, specifically optimized for detecting and positioning weapon and personnel objects in static security surveillance images. This research focuses on category recognition at the object level and pixel-level spatial positioning; it does not involve the classification and identification of violent behaviors based on temporal information, and there are clear technical boundaries and scene limitations between the two. The framework is built around three core modules: a dynamic hierarchical channel interaction convolution module that reduces computational complexity while enhancing the detection of occluded and incomplete objects; an improved bidirectional hybrid feature pyramid network that, combined with a cross-scale fusion module, strengthens multi-scale feature expression and accommodates the simultaneous detection of small weapon objects and large personnel objects; and a global semantic weaving and elastic feature alignment network that addresses the low discrimination between objects and complex backgrounds. 
Under the same experimental configuration, the proposed model is verified against current mainstream models on typical datasets. On a dataset of 2421 conflict-scene images of personnel violence, the peak average precision mAP50 of PDGV-DETR reached 85.9%. Statistical verification shows that, compared with the baseline model RT-DETR (mean ± standard deviation of 0.840 ± 0.007), PDGV-DETR reached 0.858 ± 0.004, a statistically significant improvement (p < 0.01). The model accurately locates personnel object regions, and compared with Deformable DETR, the accuracy improvement reached 15.1%. On the weapon-specific dataset OD-WeaponDetection, the mAP for gun and knife detection reached 93.0%, improving by 2.2% over RT-DETR. In contrast to the performance fluctuations of other general object detection models in complex security scenarios, PDGV-DETR not only achieves better detection and positioning accuracy for security-related objects but also significantly improves generalization and stability. The results show that PDGV-DETR effectively balances positioning and detection accuracy with computational efficiency, accurately completing end-to-end detection and positioning of weapon and personnel objects in static security surveillance images. It demonstrates highly competitive performance in detecting and locating security-related objects, providing core object-level pre-processing support for public-area monitoring, intelligent video surveillance, and early warning of violent risks, and supplying basic data for subsequent violent behavior recognition based on temporal data. Full article
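The mean ± standard deviation comparison reported in the abstract follows a standard paired-runs protocol. A minimal sketch, assuming five hypothetical per-run mAP50 scores constructed to match the reported summary statistics (0.840 ± 0.007 vs. 0.858 ± 0.004; the individual values are not from the paper), with the paired t-statistic computed from scratch:

```python
import math

# Hypothetical per-run mAP50 scores for baseline and proposed models,
# chosen so the sample mean/std match the reported summaries
baseline = [0.831, 0.835, 0.840, 0.845, 0.849]  # RT-DETR
proposed = [0.853, 0.855, 0.858, 0.861, 0.863]  # PDGV-DETR

def mean_std(xs):
    # sample mean and (Bessel-corrected) standard deviation
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, math.sqrt(var)

def paired_t(a, b):
    # t-statistic on per-run differences (runs are paired by seed/split)
    diffs = [y - x for x, y in zip(a, b)]
    m, s = mean_std(diffs)
    return m / (s / math.sqrt(len(diffs)))

t = paired_t(baseline, proposed)
print(f"baseline: {mean_std(baseline)}, proposed: {mean_std(proposed)}")
print(f"paired t = {t:.2f}")
```

With 4 degrees of freedom, the two-sided p < 0.01 threshold corresponds to |t| greater than about 4.6, so a t-statistic of this magnitude supports the significance claim; pairing by run removes run-to-run variance that an unpaired test would leave in.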