Search Results (341)

Search Parameters:
Keywords = entropy constrain

26 pages, 1349 KB  
Article
Identification of Obstacles to Culture–Tourism Integration and Revitalization Strategies for Traditional Villages from the Perspective of Cultural Landscape Genes: A Case Study of Dayuwan Village
by Xuesong Yang, Xudong Li and Kailing Deng
Land 2026, 15(4), 681; https://doi.org/10.3390/land15040681 - 20 Apr 2026
Abstract
Traditional villages embody regional culture and local knowledge, yet culture–tourism integration often suffers from a mismatch between resource value and effective transformation. To address this problem, this study proposes a two-dimensional “benefit–obstacle” diagnostic and strategy-matching framework and tests its case-based applicability in Dayuwan Village. First, a cultural landscape gene (CLG) atlas was constructed for the village based on a geo-information coding scheme, covering both tangible and intangible CLGs. Second, a four-dimensional evaluation system was operationalized through five expert judgments and 106 valid on-site questionnaires collected from tourists (n = 67) and residents (n = 39). Criterion weights were determined using an AHP–entropy combination approach, and the comprehensive benefit closeness coefficient was calculated via TOPSIS. Third, an obstacle degree identification model was employed to pinpoint key constraints and derive composite obstacle degrees. Results within the Dayuwan case show that the TOPSIS closeness coefficients of the 17 genes ranged from 0.653 to 0.782 (mean = 0.714), with 4, 6, and 7 genes classified as excellent, good, and medium, respectively; composite obstacle degrees ranged from 0.0228 to 0.1975. In Dayuwan Village, higher obstacle degrees clustered mainly in intangible CLGs, whereas Ming–Qing architecture and frequently practiced folk-cultural genes showed comparatively lower obstacle degrees. The transformation process is constrained by four mechanisms—landscape character protection, economic transformation, social identity, and market demand—with economic transformation constraints being the most prominent. Based on the benefit–obstacle matrix, 17 CLGs were classified into five activation scenarios and matched with corresponding revitalization strategies. 
This framework links benefit ranking, obstacle diagnosis, and strategy matching, and provides a case-based diagnostic reference for the conservation and culture–tourism integration of villages with comparable heritage conditions, subject to local recalibration of indicators, weights, and thresholds. Full article
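The entropy-weighting and TOPSIS steps described above can be sketched as follows (a generic textbook formulation; the AHP combination step and the paper's actual indicator data are omitted, and all variable names are illustrative):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: criteria whose values are more dispersed across
    alternatives receive larger weights. X is (alternatives x criteria)."""
    P = X / X.sum(axis=0)                        # column-normalize to proportions
    n = X.shape[0]
    logP = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logP).sum(axis=0) / np.log(n)      # entropy per criterion, in [0, 1]
    d = 1.0 - E                                  # degree of divergence
    return d / d.sum()

def topsis_closeness(X, w):
    """TOPSIS closeness coefficient in [0, 1]; higher = closer to the ideal.
    Assumes all criteria are benefit-type."""
    R = X / np.sqrt((X ** 2).sum(axis=0))        # vector-normalize columns
    V = R * w                                    # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal solutions
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)
```

An obstacle-degree model would then attribute each alternative's shortfall to individual criteria; the closeness coefficients above correspond to the 0.653–0.782 range reported for the 17 genes.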
13 pages, 8854 KB  
Brief Report
Effect of Data Length on Nonlinear Analysis of Human Motion During Locomotor Activities
by Arash Mohammadzadeh Gonabadi and Judith M. Burnfield
Appl. Sci. 2026, 16(8), 3939; https://doi.org/10.3390/app16083939 - 18 Apr 2026
Abstract
Nonlinear analysis provides a framework for understanding the complexity and stability of human locomotion by capturing dynamic patterns beyond linear methods. This study examined the effect of data length on seven nonlinear measures: Sample Entropy (SpEn), Approximate Entropy (ApEn), Lyapunov Exponents using Wolf’s (LyEW) and Rosenstein’s (LyER) algorithms, Detrended Fluctuation Analysis (DFA), Correlation Dimension (CD), and the Hurst–Kolmogorov process (HK). A 3500-frame kinematic dataset from a healthy adult performing motor-assisted elliptical training and treadmill walking was segmented from 100 to 3500 frames in 10-frame increments. Data from treadmill and elliptical conditions were analyzed and presented in a combined manner to highlight general stabilization trends across locomotor tasks. Results revealed that increasing data length significantly affected all nonlinear metrics (p ≤ 0.0005). Stabilization occurred at varying minimum lengths: SpEn at ~4.5–8.8 s (540–1060 frames), ApEn at ~5.4–7.7 s (650–920 frames), LyEW at ~19.1–29.2 s (2290–3500 frames), LyER at ~1.3–1.5 s (150–180 frames), DFA at ~29.2 s (3500 frames), CD at ~1.7–15.9 s (200–1910 frames), and HK at ~9.1–9.8 s (1090–1180 frames). Notably, HK achieved stable estimates in approximately one-third of the time required for DFA and substantially less than LyEW, supporting its suitability for time-constrained or clinical settings. These findings suggest the need to tailor data collection to each nonlinear metric and to report data length explicitly to improve accuracy, reproducibility, and methodological rigor in gait variability research. However, these findings should be interpreted within the limitations of a single-participant, exploratory design. Full article
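Sample Entropy, the first of the measures examined above, can be sketched as follows (a generic implementation with the common defaults m = 2 and r = 0.2·SD, not the authors' code):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy: -ln(A/B), where B counts pairs of length-m templates
    within Chebyshev distance r (self-matches excluded) and A counts the
    corresponding length-(m+1) pairs. Lower values = more regular signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # pairwise Chebyshev distances between all templates
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return (dist <= r).sum() - len(templates)   # drop diagonal self-matches
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

A perfectly periodic series yields values near zero, while noise yields larger values; the data-length sensitivity reported above comes from how A and B stabilize as the series grows.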
19 pages, 637 KB  
Article
The Physical Cost of a Complete Substitution of Fossil Fuels
by Allan Kardec Barros
Energies 2026, 19(8), 1901; https://doi.org/10.3390/en19081901 - 14 Apr 2026
Abstract
Proposals for the complete substitution of fossil fuels have become central to energy policy debates. However, historical data show that global primary energy consumption has grown approximately linearly since the 1950s, with changes in the energy mix occurring mainly through diversification rather than absolute substitution. This work examines the physical and operational constraints of complete substitution proposals by grounding the analysis in the observed evolution of global energy use and in a dynamical framework of system adequacy and stability. A normalized model balancing firm capacity, intermittency, and corrective power was developed and applied to four 20-year scenarios: (A) constant demand with diversification, (B) continued linear demand growth, (C1) fossil-fuel phase-out at constant demand, and (C2) phase-out with continued growth. The results show that gradual diversification remains within operationally ordered regimes, whereas rapid phase-out trajectories approach or cross stability boundaries associated with supply–demand bifurcations. Quantitative estimates indicate that full substitution over two decades requires cumulative additional energy investments on the order of 10⁶ TWh, corresponding to total system costs of US$50–100 trillion under conservative assumptions. These costs arise from the cumulative energetic and entropic burdens of maintaining operational order in increasingly complex and intermittent systems. Our analysis indicates that rapid fossil-fuel substitution over short time horizons is constrained not only by technology or finance but also by cumulative energy investment, entropy production, and erosion of operational stability margins. Full article
(This article belongs to the Section I: Energy Fundamentals and Conversion)
28 pages, 1445 KB  
Article
Cost-Aware Lightweight Deep Learning for Intrusion Detection: A Comparative Study on UNSW-NB15 and CIC-IDS2017
by Marija Gombar, Amir Topalović and Mirjana Pejić Bach
Electronics 2026, 15(8), 1603; https://doi.org/10.3390/electronics15081603 - 12 Apr 2026
Abstract
Lightweight intrusion detection systems (IDSs) are increasingly integrated into applied data science workflows for cybersecurity and process monitoring, where limited computational resources and asymmetric error costs constrain model design. This paper presents a comparative study of two lightweight deep learning IDS architectures: ForNet, a convolutional model optimized for feature-centric detection, and SigNet, a gated recurrent model designed for sequence-oriented modeling of ordered flow-feature representations. Both models are trained with Cost-Robust Focal Loss (CRF-Loss), a cost-aware objective that penalizes false positives and false negatives according to deployment-specific risk preferences. We evaluate the models on the UNSW-NB15 and CIC-IDS2017 benchmarks using six standard metrics (accuracy, precision, recall, F1-score, Matthews correlation coefficient (MCC), and the area under the receiver operating characteristic curve (AUROC)), complemented by an analysis of false-positive behavior. On CIC-IDS2017, ForNet achieves precision up to 0.95 and MCC up to 0.93 with AUROC above 0.94, while SigNet shows a stronger recall-oriented profile on UNSW-NB15. In an ablation study, replacing Binary Cross-Entropy with CRF-Loss reduces the false-positive rate by approximately 15–20% and improves robustness-oriented metrics such as MCC by up to 12% on CIC-IDS2017. Rather than claiming universal state-of-the-art performance, the study focuses on performance–risk trade-offs under realistic operational constraints. The results highlight how architectural bias and cost-aware optimisation jointly shape IDS behaviour and offer benchmark-based guidance for interpreting performance–risk trade-offs in lightweight intrusion detection. Full article
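The abstract does not give the CRF-Loss formula. A generic cost-weighted focal binary cross-entropy, with separate false-positive and false-negative penalties as the abstract describes, might look like this (the parameter names and the exact form are assumptions, not the paper's definition):

```python
import numpy as np

def cost_aware_focal_loss(p, y, gamma=2.0, c_fp=1.0, c_fn=5.0):
    """Hypothetical cost-weighted focal BCE. gamma down-weights easy,
    well-classified examples; c_fp / c_fn scale the penalties for false
    positives (false alarms) vs. false negatives (missed attacks)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pos = -c_fn * y * (1 - p) ** gamma * np.log(p)        # missed attacks
    neg = -c_fp * (1 - y) * p ** gamma * np.log(1 - p)    # false alarms
    return (pos + neg).mean()
```

Setting c_fn > c_fp encodes the recall-oriented risk preference that the abstract attributes to deployment-specific tuning; swapping the ratio would instead penalize false alarms more heavily.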
18 pages, 844 KB  
Article
EGD: Error-Entropy-Guided Distillation for Noisy Multi-View Classification
by Xiaoyu Yang, Yanan Li, Shilin Xu and Yuan Sun
Electronics 2026, 15(8), 1596; https://doi.org/10.3390/electronics15081596 - 10 Apr 2026
Abstract
In recent years, multi-view learning has received extensive research interest. Most existing multi-view learning methods rely on well-annotated data to improve decision accuracy. However, noisy labels are ubiquitous in multi-view data due to imperfect annotations. Although some methods achieve promising performance through robust-loss designs and implicit regularization, they neither explicitly model the reliability of the supervision signal nor dynamically correct noisy labels during training, which constrains their performance ceiling. To deal with this problem, we propose an Error-Entropy-Guided Distillation network (EGD) for noisy multi-view classification. In this framework, we first design an Error-Entropy (EE) metric to explicitly evaluate the reliability of sample-wise supervision, which serves as the basis for identifying and filtering noisy labels. On this basis, we adopt an EE-based distillation paradigm: the teacher model provides the student with soft label distributions that are less affected by noisy labels in the early training stage. To further mitigate noise memorization and accumulated confirmation bias, we propose a periodic memory-clearing strategy and a supervision-signal update strategy for the teacher. Meanwhile, the student model learns from the soft supervision of the teacher to capture structured inter-class relationships. Additionally, a consistency module is employed to enhance the consistency of the student across multiple views. Extensive experiments on five benchmark datasets demonstrate that EGD consistently outperforms state-of-the-art multi-view learning methods under various noise levels. Full article
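The abstract does not define the Error-Entropy metric precisely. One plausible ingredient — the Shannon entropy of each sample's predicted class distribution, used as a reliability score for filtering noisy labels — can be sketched as (an illustration of the idea only, not the paper's metric):

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy (nats) of each sample's predicted class distribution.
    High entropy = uncertain prediction. A hypothetical stand-in for the
    paper's Error-Entropy reliability score; samples above a threshold
    would be treated as unreliably supervised and filtered."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=1)
```

A uniform prediction over K classes attains the maximum entropy ln(K), while a confident one-hot prediction scores near zero, giving a natural per-sample reliability ordering.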
(This article belongs to the Special Issue Applications in Computer Vision and Pattern Recognition)
11 pages, 3120 KB  
Communication
(FeNiMnMgCuCo)3O4 High-Entropy Cathode for Zinc-Ion Batteries
by Ningning Dong, Huanhuan Cui, Yuncheng Cai and Renzhi Jiang
Materials 2026, 19(8), 1520; https://doi.org/10.3390/ma19081520 - 10 Apr 2026
Abstract
As a result of the high safety, low cost, and environmental benignity, aqueous zinc-ion batteries are regarded as one of the most promising candidates for next-generation large-scale energy storage systems. However, their further development is constrained by performance bottlenecks in existing cathode materials, including capacity, cycle life, and reaction kinetics. In this study, a high-entropy design strategy is employed to synthesize the metal oxide (FeNiMnMgCuCo)3O4 with a cubic spinel structure, and its electrochemical performance as a cathode for zinc-ion batteries is systematically evaluated. The prepared (FeNiMnMgCuCo)3O4 high-entropy cathode exhibits high reversible capacity (341.3 mA h g⁻¹ at 0.1 A g⁻¹) and remarkable long-term cycling stability (76.1% retention after 1000 cycles at 3 A g⁻¹). This work not only demonstrates a high-entropy cathode material with practical potential but also provides new research insights for optimizing zinc-ion storage performance through composition design and entropy regulation. Full article
(This article belongs to the Special Issue Advanced Electrode Materials for Batteries: Design and Performance)
20 pages, 3559 KB  
Article
Ecological Niche Modeling of the Narrow-Range Endangered Endemic Lepidium olgae in Uzbekistan
by Khusniddin Abulfayzov, Bekhruz Khabibullaev, Khabibullo Shomurodov, Natalya Beshko, Suluv Sullieva, Yaoming Li and Lianlian Fan
Plants 2026, 15(7), 1125; https://doi.org/10.3390/plants15071125 - 7 Apr 2026
Abstract
Narrow-range endemic plant species are highly sensitive to environmental variability due to their restricted distributions and narrow ecological niches, yet quantitative assessments of such species in Central Asian mountain ecosystems remain limited. This study applied an ensemble species distribution modeling (SDM) approach to assess the ecological constraints and conservation efforts of Lepidium olgae, a strict endemic species of the Nuratau Mountains in Uzbekistan. Species occurrence records from field surveys and herbarium data were integrated with remotely sensed climatic, vegetation, topographic, soil, and atmospheric variables. Parsimonious models (Generalized Linear Model (GLM), Maximum Entropy (MaxEnt), Multivariate Adaptive Regression Splines (MARS), Surface Range Envelope (SRE)) were implemented in BIOMOD2 4.3.4, and ensemble predictions were used to reduce algorithmic uncertainty and identify core habitat patterns. Results showed that wet-season precipitation was the dominant driver of species distribution, followed by vegetation productivity (NDVI) and thermal stability, indicating a strong dependence on moisture availability and stable microhabitats. Ensemble projections revealed a highly fragmented potential distribution, with suitable habitats covering only 8% of the reserve area, closely matching the observed distribution of 6.5%. This strong spatial overlap confirms a narrowly constrained realized ecological niche. These findings highlight the critical role of microhabitat stability for the persistence of Lepidium olgae and provide a spatially explicit basis for prioritizing in situ conservation and guiding model-informed translocation efforts. Full article
(This article belongs to the Section Plant Ecology)
23 pages, 1879 KB  
Article
Research on the Pathways and Spatial Effects of Digital–Intelligent Integration on Carbon Emission Intensity
by Xiaochun Zhao, Yumeng Liu and Xuehui Zhang
Land 2026, 15(4), 600; https://doi.org/10.3390/land15040600 - 5 Apr 2026
Abstract
In the context of global efforts to achieve carbon neutrality, understanding how digital–intelligent integration influences carbon emissions is crucial for advancing the ecological transition. Using panel data from 30 provincial-level regions in China (2014–2023), a digital–intelligent integration index was constructed via entropy weighting and a coupling coordination model. Employing fixed-effects, mediation, and spatial Durbin models, the analysis shows that digital–intelligent integration is significantly associated with lower carbon intensity, a result that is robust to endogeneity concerns and alternative specifications. Industrial structure upgrading and green technology innovation were identified as mediating pathways. Furthermore, digital–intelligent integration generates positive spatial spillovers, reducing carbon intensity in neighboring provinces. Notably, these spillovers are geographically constrained and vary significantly across the regions. These findings indicate the need to formulate regionally differentiated strategies to harness the specific mechanisms through which digital–intelligent integration operates in different contexts. Full article
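The coupling coordination model mentioned above is commonly formulated as D = √(C·T); a minimal two-subsystem sketch follows (the paper's exact variant and synthesis weights are not given in the abstract):

```python
import numpy as np

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Two-subsystem coupling coordination degree D = sqrt(C * T), where
    C = 2*sqrt(u1*u2)/(u1+u2) measures coupling (1 when the subsystem
    indices are equal) and T = alpha*u1 + beta*u2 is the synthesis index.
    A common formulation; the paper may use a multi-subsystem variant."""
    C = 2.0 * np.sqrt(u1 * u2) / (u1 + u2)
    T = alpha * u1 + beta * u2
    return np.sqrt(C * T)
```

With both normalized indices equal, C = 1 and D reduces to √T, so jointly high and balanced development scores highest, which is what the coupling-coordination index rewards.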
27 pages, 4837 KB  
Article
AI-Driven Adaptive Encryption Framework for a Modular Hardware-Based Data Security Device: Conceptual Architecture, Formal Foundations, and Security Analysis
by Pruthviraj Pawar and Gregory Epiphaniou
Appl. Sci. 2026, 16(7), 3522; https://doi.org/10.3390/app16073522 - 3 Apr 2026
Abstract
This paper presents a conceptual architecture for an AI-Driven Adaptive Encryption Device (AI-AED), a tri-modular hardware platform embodied in a registered industrial design. The device integrates a Secure Input Module, an AI-Enhanced Central Processing Unit with biometric authentication, and a Secure Output Module connected by unidirectional buses. We formalise the adaptive encryption policy as a constrained Markov decision process (CMDP) over a discrete action space of 216 cryptographic configurations, with safety constraints that provably prevent convergence to insecure states. A formal threat model based on extended Dolev–Yao assumptions with four physical access tiers defines attacker capabilities, and anti-downgrade safeguards enforce a monotonically non-decreasing security floor during threat escalation. An information-theoretic analysis shows that adaptive algorithm selection contributes an additional entropy term H(α) to ciphertext uncertainty, upper-bounded by log2(|L_enc|) ≈ 1.58 bits, while noting this represents increased attacker uncertainty rather than a strengthening of any individual cipher. A component-level latency model estimates 0.91–1.00 ms pipeline latency under normal operation and 3.14–3.42 ms under active threat, including integration overhead. Simulation validation over 1000 episodes compares a tabular Q-learning baseline against the proposed Deep Q-Network operating on the continuous state space: the DQN achieves 82% fewer constraint violations, 6× faster threat response, and more stable policy switching, demonstrating the advantage of continuous-state reinforcement learning for safety-critical adaptive encryption. All claims are positioned as theoretical contributions requiring empirical validation through prototype implementation. Full article
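The stated bound H(α) ≤ log2(|L_enc|) ≈ 1.58 bits is consistent with a uniform choice among three interchangeable ciphers (|L_enc| = 3 is inferred here, not stated in the abstract); the entropy term can be checked directly:

```python
import numpy as np

def selection_entropy_bits(p):
    """Shannon entropy (bits) of the cipher-selection distribution alpha.
    The upper bound log2(len(p)) is attained by a uniform policy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log(0) contributes nothing
    return float(-(p * np.log2(p)).sum())
```

A skewed policy that favors one cipher contributes strictly less than the 1.58-bit ceiling, matching the abstract's caveat that this term measures attacker uncertainty about the selection, not added strength of any individual cipher.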
16 pages, 3040 KB  
Article
Rank-Aware Conditional Synthesis: Feasible Quantum Generative Modeling on Matrix Product State Manifolds
by Dongkyu Lee, Won-Gyeong Lee, Hyunjun Hong and Ohbyung Kwon
Symmetry 2026, 18(4), 605; https://doi.org/10.3390/sym18040605 - 2 Apr 2026
Abstract
Matrix Product States (MPSs) have become an indispensable symmetry-based representation for simulating quantum systems on near-term hardware by constraining entanglement entropy through a fixed bond dimension χ. This study identifies a critical “rank explosion” phenomenon that destabilizes this low-rank manifold during conditional quantum diffusion processes. We empirically demonstrate that the introduction of conditional guidance—essential for semantic control—injects global correlations that drive the effective Schmidt rank to increase by 4× (from χ=4 to 16), saturating the simulation limits and necessitating quantum circuits with approximately 1.8×10³ Controlled-NOT (CNOT) gates. Such circuit depths fundamentally exceed the operational coherence budgets of Noisy Intermediate-Scale Quantum (NISQ) devices. To mitigate this structural instability, we propose Rank-Aware Conditional Synthesis (RACS), a sampling framework that maintains the latent trajectory within a prescribed MPS manifold through step-wise manifold projection and time-shift error correction. Experimental results on real-world semantic data reveal that RACS reduces reconstruction error (Mean Squared Error, MSE) by 30.8% and enhances latent trajectory smoothness by 36.8% compared to conventional post hoc truncation. At a fixed hardware-efficient rank of χ=4, RACS achieves a +4.8% fidelity gain and exhibits superior robustness against depolarizing noise. By resolving the tension between conditional expressivity and entanglement constraints, RACS provides a principled, hardware-aware methodology for high-fidelity quantum generative modeling. Full article
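Projecting onto the manifold of Schmidt rank ≤ χ amounts to a truncated SVD of each two-site tensor, keeping the χ largest singular values; a generic sketch of that step (not the RACS implementation, which adds time-shift error correction):

```python
import numpy as np

def truncate_schmidt_rank(theta, chi):
    """Best rank-chi approximation (Eckart-Young) of a two-site tensor,
    obtained by keeping the chi largest singular values. The discarded
    weight sqrt(sum of dropped s_k^2) is the truncation error."""
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi, len(S))
    return U[:, :k] @ np.diag(S[:k]) @ Vh[:k, :]
```

Conditional guidance that spreads singular weight across many modes (the "rank explosion" above) makes this discarded weight large, which is why naive post hoc truncation degrades fidelity.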
29 pages, 3794 KB  
Article
Coupling Coordination and Driving Mechanisms Between Digital Productivity and High-Quality Development of the Energy Industry: Evidence from Guizhou, China
by Chengbin Yu, Ke Ding and Langang Feng
Sustainability 2026, 18(7), 3490; https://doi.org/10.3390/su18073490 - 2 Apr 2026
Abstract
In the context of the global dual-carbon goals and China’s DP strategy, strengthening the coupling between digital productivity (DP) and the high-quality development of the energy industry (HQDEI) is essential for resource-based regions. Doing so can help these regions overcome transition constraints and advance green, low-carbon development. Using panel data for nine prefecture-level cities in Guizhou Province from 2014 to 2023, we construct composite indices for DP and HQDEI with an improved entropy-weight TOPSIS approach. We then characterize their spatiotemporal evolution using a coupling coordination degree (CCD) model and kernel density estimation. Finally, we examine the determinants of coupling coordination through panel regression and threshold models. The results show that: (1) The CCD between DP and HQDEI continues to increase, with regional differences displaying a periodic convergence–divergence pattern and a spatial structure characterized by core agglomeration and outward diffusion. Gradient disparities in coordinated development are evident between central and peripheral areas. (2) Consumption upgrading and fiscal self-sufficiency significantly promote coupling coordination, whereas a traditional resource-dependent growth model significantly suppresses it. Constrained by short-term adaptation and integration costs, digital innovation currently exerts a negative effect, and its enabling potential has not yet been fully realized. (3) Nonlinear tests identify a single digital-infrastructure threshold: the enabling effect of digital innovation turns positive only once infrastructure surpasses a critical level, revealing pronounced interval heterogeneity. 
This study advances the theoretical understanding of the bidirectional coupling between DP and HQDEI, provides empirical guidance for energy digital transformation and high-quality development in resource-based regions of western China, and offers transferable insights for green, low-carbon transitions in traditional energy regions worldwide. Full article
(This article belongs to the Section Energy Sustainability)
22 pages, 3325 KB  
Article
Top-Confidence Gapped Cross-Entropy for Compact Human Activity Recognition
by Khudran M. Alzhrani
Appl. Sci. 2026, 16(7), 3394; https://doi.org/10.3390/app16073394 - 31 Mar 2026
Abstract
Human Activity Recognition (HAR) in resource-constrained settings has been studied mainly through architecture design, compression, and deployment, while the role of the training objective has received less attention. This paper introduces Top-Confidence Gapped Cross-Entropy (TCG-CE), a lightweight modification of categorical cross-entropy in which each sample is weighted by the gap between the two most probable predicted classes. TCG-CE adds no trainable parameters and can be used as a drop-in replacement for standard cross-entropy. The method is evaluated on the UCI-HAR and WISDM benchmarks using compact recurrent models, namely TinyRNN, TinyGRU, and TinyLSTM. The evaluation focuses on macro-averaged predictive performance and also reports empirical runtime and memory observations under a fixed execution environment. Across datasets and models, TCG-CE improves balanced classification metrics, with the clearest gains observed on WISDM and in more capacity-limited settings. These results indicate that top-1/top-2 confidence-gap modulation is a practical loss-design strategy for improving macro-level predictive performance in compact HAR classification. Full article
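TCG-CE as described above — per-sample cross-entropy weighted by the gap between the two most probable predicted classes — can be sketched as follows (direct multiplication by the gap is an assumption; the paper may scale or transform it differently):

```python
import numpy as np

def tcg_cross_entropy(probs, labels):
    """Sketch of Top-Confidence Gapped Cross-Entropy: each sample's CE term
    is weighted by the gap between its two largest predicted probabilities.
    Adds no trainable parameters, so it drops in for standard CE."""
    probs = np.clip(probs, 1e-12, 1.0)
    top2 = np.sort(probs, axis=1)[:, -2:]          # two most probable classes
    gap = top2[:, 1] - top2[:, 0]                  # confidence gap in [0, 1]
    ce = -np.log(probs[np.arange(len(labels)), labels])
    return float((gap * ce).mean())
```

Under this reading, ambiguous predictions (small top-1/top-2 gap) contribute little, so training emphasis shifts toward samples the model separates decisively, without any extra parameters.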
43 pages, 41548 KB  
Article
Spatiotemporal Evolution and Dynamic Driving Mechanisms of Synergistic Rural Revitalization in Topographically Complex Regions: A Case Study of the Qinba Mountains, China
by Haozhe Yu, Jie Wu, Ning Cao, Lijuan Li, Lei Shi and Zhehao Su
Sustainability 2026, 18(7), 3307; https://doi.org/10.3390/su18073307 - 28 Mar 2026
Abstract
In ecologically fragile and geomorphologically complex mountainous regions, ensuring a smooth transition from poverty alleviation to multidimensional sustainable rural development remains a key issue in regional governance. Focusing on the Qinba Mountains, a typical former contiguous poverty-stricken region in China covering 18 prefecture-level cities in six provinces, this study uses 2009–2023 prefecture-level panel data to examine the spatiotemporal evolution and driving mechanisms of coordinated rural revitalization. An integrated framework of “multi-dimensional evaluation–spatiotemporal tracking–attribution diagnosis” is developed by combining the improved AHP–entropy-weight TOPSIS method, the Coupling Coordination Degree (CCD) model, spatial Markov chains, spatial autocorrelation, and the Geodetector. The results show pronounced subsystem asynchrony. Livelihood and Well-being Security (U5) improves steadily, while Level of Industrial Development (U1), Civic Virtues and Cultural Vibrancy (U3), and Rural Governance (U4) also rise but with clear spatial differentiation; by contrast, Quality of Human Settlements (U2) fluctuates in stages under ecological fragility. Overall, the coupling coordination level advances from the Verge of Imbalance to Intermediate Coordination, yet the regional pattern remains uneven, with eastern basin cities leading and western deep mountainous cities lagging. State transitions display both policy responsiveness and path dependence: the probability of retaining the original state ranges from 50.0% to 90.5%; low-level neighborhoods reduce the upward transition probability to 25%, whereas medium-to-high-level neighborhoods raise the upward transition probability of low-level cities from 36.36% to 53.33%. 
Spatial dependence is also evident, with Global Moran’s I increasing, with fluctuations, from 0.331 in 2009 to 0.536 in 2023; high-value clusters extend along the Guanzhong Plain–Han River Valley corridor, while low-value clusters remain relatively locked in mountainous border areas. Driving mechanisms show clear stage-wise succession. At the single-factor level, the explanatory power of Road Network Density (F6) declines from 0.639 to 0.287, whereas Terrain Relief Amplitude (F1) becomes the dominant background constraint in the later stage (q = 0.772). Multi-factor interactions are generally enhanced. In particular, the traditional infrastructure-led pathway weakens markedly, with F1 ∩ F6 = 0.055 in 2023, while the interaction between terrain and consumer market vitality becomes dominant, with F1 ∩ F7 = 0.987 in 2023. On this basis, three major pathways are identified: government fiscal intervention and transportation accessibility improvement, capital agglomeration and market demand stimulation, and human–earth system adaptation and ecological value realization. These findings provide quantitative evidence for breaking spatial lock-in and improving cross-regional resource allocation in ecologically constrained mountainous regions. Full article
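The Geodetector q-statistic behind the attribution figures above (e.g., q = 0.772 for Terrain Relief Amplitude) has a standard closed form, q = 1 − SSW/SST; a minimal sketch:

```python
import numpy as np

def geodetector_q(y, strata):
    """Geodetector q-statistic: 1 minus the ratio of within-strata variance
    to total variance. q near 1 means the stratifying factor explains most
    of the spatial variation in y; q near 0 means it explains little."""
    y = np.asarray(y, dtype=float)
    strata = np.asarray(strata)
    N, var_total = len(y), y.var()
    ssw = sum(len(y[strata == h]) * y[strata == h].var()
              for h in np.unique(strata))
    return 1.0 - ssw / (N * var_total)
```

Interaction detection (the F1 ∩ F7 = 0.987 result) compares q computed on the cross-classification of two factors against each factor's individual q.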
(This article belongs to the Section Sustainable Urban and Rural Development)

20 pages, 1060 KB  
Article
Closed-Form Approximations of Range Mutual Information for Integrated Sensing and Communication Systems
by Zhuoyun Lai, Hao Luo, Yinlu Wang, Yue Zhang and Biao Jin
Sensors 2026, 26(7), 2113; https://doi.org/10.3390/s26072113 - 28 Mar 2026
Abstract
Sensing mutual information (SMI) is widely adopted as a performance metric for integrated sensing and communication (ISAC) to enhance both sensing and communication capabilities. However, conventional approaches derive SMI from amplitude and phase, whereas an explicit evaluation of range mutual information (RMI) remains absent. In this paper, we investigate a novel closed-form approximation of RMI for ISAC. We first derive an explicit expression for the posterior probability density function (PDF) of the target range, which is formulated as a function of the signal’s autocorrelation and cross-correlation. Furthermore, we show that under high signal-to-noise ratio (SNR), the estimated range PDF approximates a Gaussian distribution in the sensing-unconstrained scenario and a truncated Gaussian distribution in the sensing-constrained scenario. Finally, we derive closed-form approximations of the RMI in both scenarios under high SNR. In the sensing-unconstrained scenario, the RMI is proportional to the delay interval, root-mean-square bandwidth, and SNR. In the constrained scenario, we obtain a closed-form RMI approximation by introducing an entropy correction term that quantifies the impact of boundary constraints. Additionally, we employ a maximum likelihood estimation (MLE) method to assess range estimation performance. Simulation results validate the accuracy of the theoretical results and the effectiveness of the proposed approximations. Full article
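The entropy correction term mentioned above quantifies how boundary constraints reduce the posterior's differential entropy. As a standalone illustration (an assumption-level sketch, not the paper's derivation), the entropy gap between a Gaussian posterior and its truncated counterpart on an interval [a, b] can be computed from the standard closed-form entropy of a truncated normal:

```python
import math

def gauss_entropy(sigma):
    """Differential entropy (nats) of N(mu, sigma^2): 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def phi(x):
    """Standard normal PDF."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def trunc_gauss_entropy(sigma, a, b, mu=0.0):
    """Differential entropy (nats) of N(mu, sigma^2) truncated to [a, b]:
    ln(sqrt(2*pi*e) * sigma * Z) + (alpha*phi(alpha) - beta*phi(beta)) / (2*Z),
    where alpha = (a-mu)/sigma, beta = (b-mu)/sigma, Z = Phi(beta) - Phi(alpha)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(beta) - Phi(alpha)
    return math.log(math.sqrt(2 * math.pi * math.e) * sigma * Z) \
        + (alpha * phi(alpha) - beta * phi(beta)) / (2 * Z)
```

Truncation always lowers the entropy relative to the unconstrained Gaussian, and the difference `gauss_entropy(sigma) - trunc_gauss_entropy(sigma, a, b)` plays the role of a boundary correction; as [a, b] widens, the correction vanishes and the two entropies coincide.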
(This article belongs to the Section Communications)

19 pages, 642 KB  
Article
Enhancing Type 1 Diabetes Polygenic Risk Prediction Through Neural Networks and Entropy-Derived Insights
by Antonio Nadal-Martínez, Guillermo Pérez-Solero, Sandra Ferreiro López, Jorge Blom-Dahl, Eduard Montanya, Marta Alonso-Bernáldez, Moises Shabot, Christian Binsch, Lukasz Szczerbinski, Adam Kretowski, Julián Nevado, Pablo Lapunzina, Robert Wagner and Jair Tenorio-Castano
Int. J. Mol. Sci. 2026, 27(7), 2966; https://doi.org/10.3390/ijms27072966 - 25 Mar 2026
Abstract
Type 1 diabetes (T1D) is an autoimmune disease with a strong genetic component (~70% heritability). Early identification of individuals at risk is crucial for early intervention or risk assessment. Although polygenic risk scores (PRS) have shown promise in risk assessment, most current approaches remain constrained by linear assumptions and limited generalizability. We aimed to develop a neural network-driven classifier using T1D-associated single nucleotide polymorphisms (SNPs). In addition, we explored the inclusion of an entropy-derived feature as a complementary variable, representing the degree of genetic variability within an individual’s genotype profile across the 67 T1D-associated SNPs, to evaluate its potential additive contribution to the model performance. We analyzed genotype data from 11,909 individuals in the UK Biobank (546 T1D cases and 11,363 controls). Sixty-seven well-known SNPs associated with T1D were utilized as inputs to the model, using two distinct allele-encoding strategies. A feed-forward neural network was evaluated under varying case–control ratios through five-fold cross-validation. Performance was assessed using the area under the receiver operating characteristic curve (AUC) on a held-out test set and on an external European cohort as a validation cohort. Across five-fold cross-validation, the best configuration achieved a median AUC of 0.903. On the held-out UK Biobank test set, the model generalized well, with an AUC of 0.8889 (95% CI: 0.8516–0.9262). A probability-based risk framework, constructed using five risk groups (“very low”, “low”, “intermediate”, “high”, and “very high” risk), yielded a negative predictive value (NPV) of 98.9% for the “very low” risk group and a positive predictive value (PPV) of 61.9% with a specificity of 97.3% for the “very high” risk group, assuming a 10% T1D prevalence. 
External validation in the German Diabetes Study reproduced clear case–control separation; for individuals with recent-onset diabetes and glutamic acid decarboxylase antibodies (GADA+) vs. controls, specificity reached 91.9% in the “high” risk group (PPV of 94.3%) and 97.6% in the “very high” risk group (PPV of 95.7%). The proposed neural network reliably predicts T1D genetic risk using a compact panel of 67 SNPs and maintains accuracy in both internal and external European cohorts. Its probabilistic output enables clinically interpretable risk thresholds, while entropy features contributed modestly to performance. These results demonstrate that a neural network-based approach achieves discriminative performance that is comparable to established T1D genetic risk models, while offering flexible probability-based risk stratification and architectural extensibility for future integration of additional features. Full article
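The abstract does not specify how the entropy-derived feature is computed. One plausible reading (an assumption, not the authors' definition) is the Shannon entropy of the genotype-category distribution across an individual's 67 SNP calls, which rises with within-profile genetic variability:

```python
import math
from collections import Counter

def genotype_entropy(genotypes):
    """Shannon entropy (bits) of the empirical genotype-category distribution
    across one individual's SNP panel, with calls coded as
    0 = homozygous reference, 1 = heterozygous, 2 = homozygous alternate."""
    counts = Counter(genotypes)
    n = len(genotypes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A profile with all 67 calls identical scores 0 bits, while an even split across the three categories scores log2(3) ≈ 1.585 bits; such a scalar can simply be appended to the encoded SNP vector as one extra input feature.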
