Search Results (2,308)

Search Parameters:
Keywords = maximum entropy

18 pages, 339 KB  
Article
Entropy-Based Portfolio Optimization in Cryptocurrency Markets: A Unified Maximum Entropy Framework
by Silvia Dedu and Florentin Șerban
Entropy 2026, 28(3), 285; https://doi.org/10.3390/e28030285 - 2 Mar 2026
Abstract
Traditional mean–variance portfolio optimization proves inadequate for cryptocurrency markets, where extreme volatility, fat-tailed return distributions, and unstable correlation structures undermine the validity of variance as a comprehensive risk measure. To address these limitations, this paper proposes a unified entropy-based portfolio optimization framework grounded in the Maximum Entropy Principle (MaxEnt). Within this setting, Shannon entropy, Tsallis entropy, and Weighted Shannon Entropy (WSE) are formally derived as particular specifications of a common constrained optimization problem solved via the method of Lagrange multipliers, ensuring analytical coherence and mathematical transparency. Moreover, the proposed MaxEnt formulation provides an information-theoretic interpretation of portfolio diversification as an inference problem under uncertainty, where optimal allocations correspond to the least informative distributions consistent with prescribed moment constraints. In this perspective, entropy acts as a structural regularizer that governs the geometry of diversification rather than as a direct proxy for risk. This interpretation strengthens the conceptual link between entropy, uncertainty quantification, and decision-making in complex financial systems, offering a robust and distribution-free alternative to classical variance-based portfolio optimization. The proposed framework is empirically illustrated using a portfolio composed of major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB)—based on weekly return data. The results reveal systematic differences in the diversification behavior induced by each entropy measure: Shannon entropy favors near-uniform allocations, Tsallis entropy imposes stronger penalties on concentration and enhances robustness to tail risk, while WSE enables the incorporation of asset-specific informational weights reflecting heterogeneous market characteristics. From a theoretical perspective, the paper contributes a coherent MaxEnt formulation that unifies several entropy measures within a single information-theoretic optimization framework, clarifying the role of entropy as a structural regularizer of diversification. From an applied standpoint, the results indicate that entropy-based criteria yield stable and interpretable allocations across turbulent market regimes, offering a flexible alternative to classical risk-based portfolio construction. The framework naturally extends to dynamic multi-period settings and alternative entropy formulations, providing a foundation for future research on robust portfolio optimization under uncertainty. Full article
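A minimal sketch of the MaxEnt idea described above, assuming hypothetical weekly return figures for the four assets. The paper's Lagrange-multiplier derivation is analytical; here the same constrained entropy maximization is solved numerically with SciPy:

```python
# Sketch: maximize Shannon entropy of portfolio weights subject to a return
# floor. The return figures and floor are illustrative assumptions, not data
# from the article.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.012, 0.010, 0.015, 0.011])  # hypothetical weekly mean returns (BTC, ETH, SOL, BNB)
target = 0.0125                              # hypothetical portfolio return floor

def neg_shannon_entropy(w, eps=1e-12):
    # Weights are treated as a probability distribution over assets.
    return np.sum(w * np.log(w + eps))

res = minimize(
    neg_shannon_entropy,
    x0=np.full(4, 0.25),                     # start from the uniform portfolio
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 4,
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},     # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - target},  # return floor
    ],
)
print(res.x, res.x @ mu)  # near-uniform weights, tilted just enough to meet the floor
```

Swapping the Shannon objective for a Tsallis or weighted-entropy expression changes only neg_shannon_entropy, which mirrors how the paper unifies the three measures under one constrained problem.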
34 pages, 13258 KB  
Article
A Robust Image Encryption Framework Using Deep Feature Extraction and AES Key Optimization
by Sahara A. S. Almola, Hameed A. Younis and Raidah S. Khudeyer
Cryptography 2026, 10(2), 16; https://doi.org/10.3390/cryptography10020016 - 2 Mar 2026
Abstract
This article presents a novel framework for encrypting color images to enhance digital data security using deep learning and artificial intelligence techniques. The system employs a two-model neural architecture: the first, a Convolutional Neural Network (CNN), verifies sender authenticity during user authentication, while the second extracts unique fingerprint features. These features are converted into high-entropy encryption keys using Particle Swarm Optimization (PSO), minimizing key similarity and ensuring that no key is reused or transmitted. Keys are generated in real time simultaneously at both the sender and receiver ends, preventing interception or leakage and providing maximum confidentiality. Encrypted images are secured using the Advanced Encryption Standard (AES-256) with keys uniquely bound to each user’s biometric identity, ensuring personalized privacy. Evaluation using security and encryption metrics yielded strong results: entropy of 7.9991, correlation coefficient below 0.00001, NPCR of 99.66%, UACI of 33.9069%, and key space of 2^256. Although the final encryption employs an AES-256 key (key space of 2^256), this key is derived from a much larger deep-key space of 2^8192 generated by multi-layer neural feature extraction and optimized via PSO, thereby significantly enhancing the overall cryptographic strength. The system also demonstrated robustness against common attacks, including noise and cropping, while maintaining recoverable original content. Furthermore, the neural models achieved classification accuracy exceeding 99.83% with an error rate below 0.05%, confirming the framework’s reliability and practical applicability. This approach provides a secure, dynamic, and efficient image encryption paradigm, combining biometric authentication and AI-based feature extraction for advanced cybersecurity applications. Full article
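The NPCR and UACI figures quoted above follow the standard definitions from the image-encryption literature; the sketch below computes them for two stand-in cipher images (random placeholders, not outputs of the paper's AES/PSO pipeline):

```python
# Standard diffusion metrics for 8-bit cipher images.
import numpy as np

def npcr(c1: np.ndarray, c2: np.ndarray) -> float:
    # Number of Pixel Change Rate: % of positions where the two ciphers differ.
    return float(np.mean(c1 != c2) * 100.0)

def uaci(c1: np.ndarray, c2: np.ndarray) -> float:
    # Unified Average Changing Intensity for 8-bit images.
    return float(np.mean(np.abs(c1.astype(np.int16) - c2.astype(np.int16)) / 255.0) * 100.0)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stand-in cipher image 1
b = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stand-in cipher image 2
print(npcr(a, b), uaci(a, b))  # ideal ciphers approach ~99.6% and ~33.46%
```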
31 pages, 2317 KB  
Article
Convergent Multi-Algorithm Feature Selection for Single-Lead ECG Classification: Optimizing Accuracy–Complexity Trade-Offs in Wearable Applications
by Monica Fira, Hariton-Nicolae Costin and Liviu Goras
Eng 2026, 7(3), 117; https://doi.org/10.3390/eng7030117 - 2 Mar 2026
Abstract
The development of portable electrocardiographic analysis systems necessitates identifying an optimal balance between diagnostic precision and computational efficiency. This research addresses the challenge of optimal feature selection for automated cardiac arrhythmia classification in resource-constrained portable applications. We present a comparative investigation of three distinct feature selection strategies for ECG classification: the MRMR (Minimum Redundancy Maximum Relevance) method, which maximizes relevance while minimizing feature interdependencies; the ReliefF technique, which evaluates discriminative power through proximity analysis in the feature space; and permutation-based importance analysis implemented with neural networks. Utilizing the Large-Scale 12-Lead Electrocardiogram Database for Arrhythmia Study, we construct a hybrid feature space integrating 12 conventional time- and frequency-domain parameters (previously validated and included in the database’s official documentation) with 26 advanced nonlinear descriptors, including the Hurst exponent, DFA scaling parameter, log-absolute correlation measures, mean standard increment from the Poincaré plot, and wavelet entropy. The experimental results demonstrate remarkable convergence among the three paradigms in selecting optimal feature subsets, achieving classification accuracies of 87–89% for four arrhythmia classes using compact configurations of 7–10 features, and 93.57% with an extended 12-parameter set. The 7-feature configuration achieves an 82% complexity reduction compared to the full 38-feature set. Multi-algorithmic analysis confirms the consistent discriminative contribution of the proposed nonlinear descriptors, demonstrating that MRMR, ReliefF, and permutation analyses yield convergent rankings of critical parameters for automated cardiac pathology diagnosis. Full article
(This article belongs to the Section Electrical and Electronic Engineering)
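Of the three selection strategies compared, permutation importance is the easiest to sketch. The snippet below uses a synthetic 38-feature dataset and a random forest as a stand-in scorer (the paper pairs permutation analysis with neural networks):

```python
# Sketch: rank features by permutation importance, then keep a compact subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=38, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:10]  # compact 7-10 feature configurations
print(top)
```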
14 pages, 887 KB  
Article
On Maximum Entropy Density Estimation with Relaxed Moment Constraints
by Thi Lich Nghiem and Pierre Maréchal
Entropy 2026, 28(3), 282; https://doi.org/10.3390/e28030282 - 2 Mar 2026
Abstract
We study Maximum Entropy density estimation on continuous domains under finitely many moment constraints, formulated as the minimization of the Kullback–Leibler divergence with respect to a reference measure. To model uncertainty in empirical moments, constraints are relaxed through convex penalty functions, leading to an infinite-dimensional convex optimization problem over probability densities. The main contribution of this work is a rigorous convex-analytic treatment of such relaxed Maximum Entropy problems in a functional setting, without discretization or smoothness assumptions on the density. Using convex integral functionals and an extension of Fenchel duality, we show that, under mild and explicit qualification conditions, the infinite-dimensional primal problem admits a dual formulation involving only finitely many variables. This reduction can be interpreted as a continuous-domain instance of partially finite convex programming. The resulting dual problem yields explicit primal–dual optimality conditions and characterizes Maximum Entropy solutions in exponential form. The proposed framework unifies exact and relaxed moment constraints, including box and quadratic relaxations, within a single variational formulation, and provides a mathematically sound foundation for relaxed Maximum Entropy methods previously studied mainly in finite or discrete settings. A brief numerical illustration demonstrates the practical tractability of the approach. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
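A numerical illustration of the dual reduction, under assumptions chosen for brevity: a uniform reference on [0, 1], two moment functions, and a quadratic relaxation of strength eps. The infinite-dimensional primal collapses to two dual variables, and the recovered density has the exponential form the paper derives:

```python
# Sketch: relaxed MaxEnt via its finite-dimensional dual. Grid quadrature
# replaces exact integration; moments and eps are illustrative.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 2001)   # continuous domain, discretized only for quadrature
dx = x[1] - x[0]
q = np.ones_like(x)               # reference density: uniform on [0, 1]
phi = np.vstack([x, x**2])        # two moment functions
b = np.array([0.30, 0.15])        # empirical moments (illustrative)
eps = 1e-2                        # quadratic relaxation strength

def dual(lam):
    # Dual objective log Z(lam) - <lam, b> + (eps/2)||lam||^2, over just 2 variables.
    w = q * np.exp(lam @ phi)
    return np.log(w.sum() * dx) - lam @ b + 0.5 * eps * lam @ lam

lam = minimize(dual, np.zeros(2), method="BFGS").x
p = q * np.exp(lam @ phi)
p /= p.sum() * dx                       # MaxEnt density in exponential form
print(lam, (phi * p).sum(axis=1) * dx)  # recovered moments sit near b, softened by eps
```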
19 pages, 3211 KB  
Article
Can Ecological Niche Modeling of Prickly Juniper (Juniperus oxycedrus L.) Predict Future Forest Distribution Limits in Central Anatolia?
by Derya Gülçin, Javier Velázquez, Gamze Tuttu, Daniel Sánchez-Mata, Ebru Ersoy Tonyaloğlu, Kerim Çiçek, Sezgin Ayan, Mehmet Sezgin, Ahmet Varlı and Ali Uğur Özcan
Plants 2026, 15(5), 743; https://doi.org/10.3390/plants15050743 - 28 Feb 2026
Viewed by 128
Abstract
Climate change is expected to alter the distribution limits of woody species in Mediterranean and semi-arid regions, especially near forest–steppe transition zones. In this study, ecological niche modeling (ENM) was applied to examine the current and future habitat suitability of prickly juniper (Juniperus oxycedrus L.) in Türkiye under three Shared Socioeconomic Pathways (SSP1-2.6, SSP3-7.0, and SSP5-8.5) for the periods 2011–2040, 2041–2070, and 2071–2100. Species–environment relationships were quantified using the Maximum Entropy (MaxEnt) algorithm. From 48 candidate MaxEnt models, the optimal model was selected based on statistical performance and showed a high mean training AUC (AUC = 0.869, SD = 0.017). Null model testing confirmed that predictive performance exceeded random expectations (AUC_null = 0.593, SD = 0.011; Z = 28.294, p < 0.00001). Among all predictors, precipitation of the driest month (bio14) and slope showed the highest contributions, accounting for 24.9% and 24.3%, respectively. Present-day suitability reveals that J. oxycedrus has a wide distribution in the interior Anatolian and Mediterranean uplands. Future projections indicate limited habitat loss during the early projection period, followed by substantial reductions toward the end of the century, particularly under high-emission scenarios. Late-century projections suggest that suitable habitats become increasingly restricted to mountainous areas, including the Taurus range and selected highland regions of Central and northern Türkiye. Overall, the findings underline that climate adaptation is closely linked to how biome boundaries are managed in relation to ecological thresholds. Expanding forest cover beyond natural environmental limits may not represent an effective adaptation strategy. Full article
(This article belongs to the Collection Forest Environment and Ecology)
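A simplified sketch of the null-model test reported above: compare the observed AUC against AUCs obtained under randomized occurrences and express the gap as a Z-score. Proper MaxEnt null models resample occurrence points across the study area; permuting labels on synthetic scores is a stand-in here:

```python
# Sketch: observed AUC vs. a permutation null distribution.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
suitability = rng.random(1000)                         # model scores at presence/background points
labels = (suitability + 0.3 * rng.random(1000)) > 0.8  # synthetic presences correlated with scores

auc_obs = roc_auc_score(labels, suitability)
null = [roc_auc_score(rng.permutation(labels), suitability) for _ in range(500)]
z = (auc_obs - np.mean(null)) / np.std(null)
print(f"AUC={auc_obs:.3f}, AUC_null={np.mean(null):.3f}, Z={z:.2f}")
```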
30 pages, 1153 KB  
Article
Some Additional Principles of Living Systems Functioning and Their Application for Expanding the Theory of a Possible Typology of National Food Systems Strategies
by Pavel Brazhnikov
Systems 2026, 14(3), 230; https://doi.org/10.3390/systems14030230 - 25 Feb 2026
Viewed by 188
Abstract
This article describes the basic principles of the functioning of living systems, which distinguish them from other systems. The concept of dividing living systems’ resources into matter and energy has been expanded by describing their contribution to systems’ entropy. Within social systems, human individuals serve as the functional equivalent of energy in ordinary living systems, acting as the driving and redistributive force with respect to matter. Furthermore, additional characteristics of system resources that impact the strategies of living systems regarding their resources have been introduced. Additionally, the maximum rate of development of living systems under ideal conditions has been demonstrated. Based on the above, this article presents the most natural sequence of changes of living systems in relation to their sources of matter and energy. Moreover, such a sequence of strategy changes is also considered for national food systems in which infrastructure elements and workers represent matter and energy. This article can provide a valuable initial insight into the degree of correspondence between the general structural organization of state food systems and the operational conditions under which they function. Full article
21 pages, 2986 KB  
Article
AC Series Arc Fault Detection Method Based on Composite Multiscale Entropy and MRMR-RF
by Bo Wang, Haihua Tang, Shuiwang Li and Yufang Lu
Appl. Sci. 2026, 16(5), 2190; https://doi.org/10.3390/app16052190 - 24 Feb 2026
Viewed by 144
Abstract
Series arc faults often occur in aging or faulty electrical systems due to insulation degradation, poor contact, or corrosion. These faults typically generate low current signatures, which are difficult to detect with traditional overcurrent protection methods. To address this measurement challenge, this paper proposes a systematic fault detection framework that combines discriminative feature extraction, statistical validation, and optimized classification. To comprehensively characterize arc fault signals, a diverse set of time- and frequency-domain features is extracted, and composite multiscale entropy is introduced to quantify nonlinear and transient fault dynamics more effectively. The MRMR (Maximum Relevance Minimum Redundancy) algorithm is applied to select features with high information content and low redundancy, thereby improving model generalization. A random search algorithm is used to adaptively optimize the random forest hyperparameters, establishing a high-accuracy fault diagnosis model. The experimental setup was established based on the UL1699B standard using a 115 V/400 Hz arc fault platform, and 1800 sets of data under nine different load types were collected for training and validation. Experimental results show that the proposed method outperforms five mainstream machine learning algorithms in terms of fault detection accuracy and performance. The results confirm its metrological robustness and its potential for deployment in waveform-based electrical fault monitoring systems. Full article
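Composite multiscale entropy, the key nonlinear feature above, can be sketched compactly: at each scale the coarse-graining is repeated at every offset and the resulting sample entropies are averaged. Parameters m and r follow common defaults; the signal is a random stand-in for an arc-current window, not data from the paper:

```python
# Sketch: composite multiscale sample entropy (CMSE).
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # Sample entropy with tolerance r given as a fraction of the series std.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def pairs(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)[: len(x) - m]
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        return ((d <= tol).sum() - len(emb)) / 2   # matched pairs, self-matches excluded
    b, a = pairs(m), pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def cmse(x, scale, m=2, r=0.2):
    # Composite MSE: average sample entropy over all coarse-graining offsets.
    vals = []
    for k in range(scale):
        n = (len(x) - k) // scale
        coarse = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(sample_entropy(coarse, m, r))
    return float(np.mean(vals))

sig = np.random.default_rng(0).standard_normal(1200)  # stand-in for an arc-current window
print([round(cmse(sig, s), 3) for s in (1, 2, 3)])
```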
22 pages, 1439 KB  
Article
A Thermodynamic Closure Model for Titan’s Surface Temperature: Its Long-Term Stability Anchored to Methane’s Triple Point
by Hsien-Wang Ou
Geosciences 2026, 16(2), 90; https://doi.org/10.3390/geosciences16020090 - 22 Feb 2026
Viewed by 146
Abstract
We develop a minimal thermodynamic model to predict Titan’s surface temperature based on radiative–convective equilibrium and the principle of maximum entropy production (MEP). The model retains only the essential atmospheric constituents: gaseous methane, which absorbs both longwave and near-infrared radiation, and stratospheric haze, which scatters and absorbs solar flux. Subject to Clausius–Clapeyron scaling of methane vapor pressure together with energy balances at the surface, tropopause, and stratopause, the model links the convective flux to the surface temperature, which exhibits a pronounced maximum due to competing radiative effects of tropospheric methane. As the surface warms, enhanced greenhouse effect would strengthen the convection, whereas the rising anti-greenhouse effect would suppress convection. The resulting convective peak corresponds to MEP, which thus selects a surface temperature slightly above methane’s triple point. To assess its long-term evolution, we consider a 20% dimmer early Sun and a hypothetical 20% enrichment of the oceanic methane. Even in combination, they only cool the surface by ~2 K, in sharp contrast to the ~20 K cooling inferred in studies that prescribe haze abundance. This study suggests a critical role of self-adjusting haze in providing the internal degree of freedom necessary for MEP closure, thereby stabilizing Titan’s temperature. Full article
(This article belongs to the Section Climate and Environment)
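A toy illustration of the mechanism, not the paper's radiative model: methane vapor pressure follows Clausius–Clapeyron scaling anchored at the triple point (constants approximate), and an assumed convective-flux curve with a logarithmic greenhouse gain and a linear anti-greenhouse penalty peaks, per MEP, at a temperature slightly above the triple point:

```python
# Sketch: Clausius-Clapeyron scaling plus MEP as "pick the T where convective
# flux peaks". The flux shape is an invented toy; constants are approximate.
import numpy as np

R = 8.314                    # J mol^-1 K^-1
L = 8.2e3                    # J mol^-1, approximate latent heat of methane vaporization
T_TR, P_TR = 90.69, 11.7e3   # methane triple point (K, Pa)

def p_sat(T):
    # Clausius-Clapeyron vapor pressure anchored at the triple point.
    return P_TR * np.exp(-(L / R) * (1.0 / T - 1.0 / T_TR))

T = np.linspace(85.0, 105.0, 4001)
x = p_sat(T) / P_TR
flux = 1.5 * np.log(x) - (x - 1.0)  # toy competition: greenhouse gain vs. anti-greenhouse penalty

print(f"MEP-selected surface temperature ~ {T[np.argmax(flux)]:.1f} K "
      f"(slightly above the {T_TR} K triple point)")
```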
11 pages, 292 KB  
Article
On the Unitarity of the Stueckelberg Wave Equation and Measurement as Bayesian Update from Maximum Entropy Prior Distribution
by Jussi Lindgren
Quantum Rep. 2026, 8(1), 18; https://doi.org/10.3390/quantum8010018 - 22 Feb 2026
Viewed by 398
Abstract
The Stueckelberg wave equation is transformed into a quantum telegraph equation and a set of stationary states is obtained as unitary solutions. As it has been shown previously that this PDE relates to the Dirac operator, and on the other hand it is a linearized Hamilton–Jacobi–Bellman PDE, from which the Schrödinger equation can be deduced in a nonrelativistic limit, it is clear that it is the key equation in relativistic quantum mechanics. We give a Bayesian interpretation for the measurement problem. The stationary solution is understood as a maximum entropy prior distribution and measurement is understood as a Bayesian update. We discuss the interpretation of the single electron experiments in the light of finite speed propagation of the transition probability field and how it relates to the interpretation of quantum mechanics more broadly. Full article
(This article belongs to the Special Issue 100 Years of Quantum Mechanics)
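The measurement-as-update reading lends itself to a simple grid computation: take a maximum entropy (uniform) prior over a bounded region and condition on a measurement outcome via Bayes' rule. The Gaussian likelihood below is an assumption made purely for illustration:

```python
# Sketch: Bayesian update from a maximum entropy (uniform) prior on a grid.
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
prior = np.ones_like(x) / x.size               # MaxEnt prior on a bounded grid: uniform

def update(prior, observed, sigma=0.5):
    # Posterior ~ prior x likelihood (Bayes' rule on the grid).
    likelihood = np.exp(-0.5 * ((x - observed) / sigma) ** 2)
    post = prior * likelihood
    return post / post.sum()

posterior = update(prior, observed=1.2)
print(x[np.argmax(posterior)])                 # mass collapses onto the measured region
```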
20 pages, 309 KB  
Article
A Comparison of Algorithms to Achieve the Maximum Entropy in the Theory of Evidence
by Joaquín Abellán, Aina López-Gay, Maria Isabel A. Benítez and Francisco Javier G. Castellano
Entropy 2026, 28(2), 247; https://doi.org/10.3390/e28020247 - 21 Feb 2026
Viewed by 178
Abstract
Within the framework of evidence theory, maximum entropy is regarded as a measure of total uncertainty that satisfies a comprehensive set of mathematical properties and behavioral requirements. However, its practical applicability is severely questioned due to the high computational complexity of its calculation, which involves the manipulation of the power set of the frame of discernment. In the literature, attempts have been made to reduce this complexity by restricting the computation to singleton elements, leading to a formulation based on reachable probability intervals. Although this approach relies on a less specific representation of evidential information, it has been shown to provide an equivalent maximum entropy value under certain conditions. In this paper, we present an experimental comparative study of two algorithms for calculating maximum entropy in evidence theory: the classical algorithm, which operates directly on belief functions, and an alternative algorithm based on reachable probability intervals. Through numerical experiments, we demonstrate that the differences between these approaches are less pronounced than previously suggested in the literature. Depending on the type of information representations to which it is applied, the original algorithm based on belief functions can be more efficient than the one using the reachable probability interval approach. This is an interesting result, and a reason for choosing one algorithm over the other depending on the situation. Full article
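The interval-based route admits a compact sketch: over reachable probability intervals [l_i, u_i], the entropy maximizer is the "most uniform" feasible distribution, found by clipping a common level c into each interval and bisecting on c until the masses sum to one. The classical belief-function algorithm operating on the power set is more involved and is not shown here:

```python
# Sketch: maximum entropy over reachable probability intervals by bisection.
import numpy as np

def maxent_over_intervals(lo, hi, iters=100):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    a, b = lo.min(), hi.max()              # bracket the common level c
    for _ in range(iters):                 # total clipped mass is monotone in c
        c = 0.5 * (a + b)
        total = np.clip(c, lo, hi).sum()
        a, b = (c, b) if total < 1.0 else (a, c)
    return np.clip(c, lo, hi)

p = maxent_over_intervals([0.1, 0.2, 0.0], [0.5, 0.6, 0.4])
print(p, p.sum(), -(p * np.log(p)).sum())  # as uniform as the intervals allow
```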
18 pages, 608 KB  
Article
TDI-SF: Trustworthy Dynamic Inference via Uncertainty-Gated Retrieval and Similarity-Gated Strict Fallback
by Yiyi Xu, Siyuan Li, Zhouxiang Yu, Jiahao Hu and Pengfei Liu
Appl. Sci. 2026, 16(4), 2023; https://doi.org/10.3390/app16042023 - 18 Feb 2026
Viewed by 105
Abstract
Retrieval-time augmentation can correct hard test samples but may also introduce harmful interference when retrieved neighbors are unreliable. We propose TDI-SF (trustworthy dynamic inference via similarity-gated strict fallback), a safety-oriented dynamic inference strategy that intervenes only when needed and falls back to a frozen baseline when retrieval quality is insufficient. Uncertainty-gated selective retrieval triggers on a hard subset, defined by high entropy or low margin predictions (q=0.3), and similarity-gated fusion weights neighbor evidence by maximum similarity with a strict fallback threshold (alpha-mode=maxsim, min_maxsim). We evaluate on ImageNet-100 (ResNet-50) and CICIDS2017 (MLP) and report overall accuracy, hard-subset accuracy, calibration, negative flips, and risk–coverage behavior alongside efficiency. Comprehensive evaluation under both clean and degraded retrieval conditions demonstrates the value of each component. On ImageNet-100, TDI-SF improves hard-subset accuracy by 0.92% and overall accuracy by 0.30%, applying retrieval to only 32.6% of samples with 1.38 ms overhead per triggered sample. On CICIDS2017, the same mechanism yields +1.30% hard-subset gains with only 0.43 ms/hard overhead. These results show a simple, auditable recipe for safer retrieval-augmented inference across heterogeneous domains. Full article
(This article belongs to the Special Issue Latest Research on Computer Vision and Its Application)
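A schematic sketch of the two gates described above. Threshold values and the linear fusion rule are assumptions; the name MIN_MAXSIM mirrors the abstract's min_maxsim parameter but its value is illustrative:

```python
# Sketch: uncertainty-gated retrieval with similarity-gated strict fallback.
import numpy as np

MIN_MAXSIM = 0.6                                   # strict-fallback threshold (illustrative)

def tdi_sf(probs, neighbor_probs, neighbor_sims, ent_thr=1.2, mar_thr=0.2):
    # probs: (N, C) baseline softmax; neighbor_probs: (N, C) aggregated neighbor
    # evidence; neighbor_sims: (N, K) similarities of retrieved neighbors.
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    hard = (ent > ent_thr) | (margin < mar_thr)    # uncertainty gate: intervene only here
    out = probs.copy()
    for i in np.where(hard)[0]:
        s = neighbor_sims[i].max()
        if s >= MIN_MAXSIM:                        # similarity gate; else strict fallback
            fused = (1.0 - s) * probs[i] + s * neighbor_probs[i]
            out[i] = fused / fused.sum()
    return out, float(hard.mean())

rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(5), size=8)           # toy baseline predictions
neigh = rng.dirichlet(np.ones(5), size=8)          # toy aggregated neighbor evidence
sims = rng.random((8, 4))                          # toy neighbor similarities
fused, trigger_rate = tdi_sf(base, neigh, sims)
print(trigger_rate)                                # fraction routed through retrieval
```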
20 pages, 2304 KB  
Article
Genomic Insights into Adaptation of Lagerstroemia suprareticulata to Limestone Karst Habitats
by Shuo Zhang, Yi Li, Ying Xie, Xiaomei Deng and Ye Sun
Plants 2026, 15(4), 629; https://doi.org/10.3390/plants15040629 - 16 Feb 2026
Viewed by 401
Abstract
Lagerstroemia suprareticulata, an endemic ornamental species in limestone karst ecosystems of Guangxi—a global biodiversity hotspot—holds significant ecological value. However, habitat degradation and anthropogenic pressures have driven this species to the brink of extinction, leading to its classification as Endangered (EN) on the China Biodiversity Red List. To address this crisis, we conducted whole-genome resequencing to generate single-nucleotide polymorphisms (SNPs) for comprehensive analyses of genetic diversity, population structure, demographic history, and adaptive variation. Our results reveal four distinct genetic clusters in L. suprareticulata, all of which share a history of population expansion followed by contraction. Maximum entropy modeling (MaxEnt) projects a severe contraction in the range under high-carbon-emission scenarios. Selective sweep analysis identified genomic regions under positive selection, including those associated with protein homeostasis, metabolism, signal transduction, and developmental regulation. Genotype–environment association (GEA) analysis further identified adaptive SNPs linked to temperature and precipitation, which were enriched in genes regulating transmembrane transport, stress response, and the immune system. Additionally, risk of non-adaptedness (RONA) analysis identified high-risk populations. By integrating genomic data with advanced analytical approaches, this study enhances our understanding of the adaptive mechanisms of L. suprareticulata to limestone karst habitats and provides critical insights for its conservation. Full article
(This article belongs to the Section Plant Systematics, Taxonomy, Nomenclature and Classification)
26 pages, 2084 KB  
Article
Adversarial Distributed Multi-Task Meta-Inverse Reinforcement Learning with Theory of Mind and Mean-Field Method
by Li Song, Kun Yang and Chao Chen
Mathematics 2026, 14(4), 691; https://doi.org/10.3390/math14040691 - 15 Feb 2026
Viewed by 246
Abstract
Maximum entropy adversarial inverse reinforcement learning (ME-AIRL) has garnered widespread attention for its ability to learn rewards and optimize policies from expert demonstrations. In complex multi-task environments, applying meta-learning ME-AIRL to acquire rewards requires a substantial volume of homogeneous expert demonstrations across all tasks, which is often impractical in real-world scenarios. Moreover, interference between tasks further escalates computational complexity. To address these challenges, this paper proposes a distributed multi-task meta ME-AIRL framework based on theory of mind and mean field, referred to as TMMF-MTAIRL. In TMMF-MTAIRL, the theory of mind is used to capture the relationships and representational information among multiple tasks. Furthermore, TMMF-MTAIRL integrates mean-field theory to transform interactions between complex tasks into interactions between the main task and the average of the remaining tasks. In addition, extra latent variables are introduced to enhance adaptation to novel tasks. We evaluate the proposed TMMF-MTAIRL on point-maze benchmarks and a real-world rolling bearing fault diagnosis dataset using metrics such as classification accuracy and mean or cumulative rewards. TMMF-MTAIRL achieves the best performance across all tasks, with an average improvement of 0.16 in fault classification accuracy over the strongest baseline. Full article
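The ME-AIRL building block that the framework extends can be stated in a few lines: the AIRL discriminator D = exp(f) / (exp(f) + pi) is a sigmoid of f - log pi, and the reward recovered from it, log D - log(1 - D), reduces to f - log pi. The arrays below are stand-ins, not a learned model:

```python
# Sketch: the AIRL discriminator and its recovered reward.
import numpy as np

f = np.array([0.5, -1.0, 2.0])         # learned reward/advantage estimate f_theta(s, a)
log_pi = np.array([-1.2, -0.3, -2.5])  # current policy log-probabilities log pi(a|s)

logits = f - log_pi                    # log D - log(1 - D)
D = 1.0 / (1.0 + np.exp(-logits))      # probability the sample is an expert transition
reward = np.log(D) - np.log1p(-D)      # AIRL reward, equal to f - log pi
print(D, reward)
```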
29 pages, 3196 KB  
Review
The Remote Sensing Geostatistical Paradigm: A Review of Key Technologies and Applications
by Junyu He
Remote Sens. 2026, 18(4), 600; https://doi.org/10.3390/rs18040600 - 14 Feb 2026
Viewed by 189
Abstract
Advancements in earth observation technologies are ushering in the big data era, yet this potential is compromised by intrinsic challenges: inherent uncertainty, spatiotemporal heterogeneity, multi-scale character, and pervasive data gaps. Traditional methods often fail to address these issues within a single, coherent system. The main contributions of this review are to systematically establish the Remote Sensing Geostatistical Paradigm (RSGP) as a comprehensive, unified framework. Powered by its core theory, Bayesian Maximum Entropy (BME), RSGP is a broadly designed epistemic framework that transcends a mere conceptual reorganization of established methods. It addresses the above challenges by highlighting two pivotal concepts within a spatiotemporal random field: (1) uncertainty quantification via probabilistic soft data, which redefines observations as probability density functions, representing a fundamental epistemological shift from deterministic scalars to probabilistic entities, and provides a universal interface for rigorous assimilation of heterogeneous remote sensing or in situ observations and synergy with other computational models, such as machine learning; and (2) spatiotemporal structure exploitation, which integrates the underlying structure embedded in remote sensing data of natural attributes, moving beyond mere optical properties to incorporate a broader range of available spatiotemporal information, for robust estimation and mapping purposes. Furthermore, the evolution of key technologies is illustrated by using real-world application cases, guiding how to implement RSGP in terms of different scenarios. Finally, the paradigm’s features and limitations are discussed. This synthesis provides the remote sensing community with a robust foundation for uncertainty-aware analysis and multi-source integration, bridging geostatistical logic with next-generation AI-driven Earth observation. Full article
(This article belongs to the Section Remote Sensing for Geospatial Science)
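A one-point cartoon of the "soft data as PDFs" idea at the heart of BME, with every numeric choice an assumption: a Gaussian prior at the estimation point is conditioned on an interval-type soft datum, and the estimate and its uncertainty are read off the posterior. Full BME also carries space–time covariance, omitted here:

```python
# Sketch: fusing a prior PDF with an interval-type soft datum on a value grid.
import numpy as np

z = np.linspace(0.0, 10.0, 2001)                   # candidate attribute values
dz = z[1] - z[0]
prior = np.exp(-0.5 * ((z - 4.0) / 1.5) ** 2)      # prior at the estimation point: N(4, 1.5^2)
prior /= prior.sum() * dz
soft = ((z >= 4.5) & (z <= 6.5)).astype(float)     # soft datum: "value lies in [4.5, 6.5]"

post = prior * soft                                # Bayesian-style conditioning on soft data
post /= post.sum() * dz
mean = (z * post).sum() * dz
var = ((z - mean) ** 2 * post).sum() * dz
print(mean, var)                                   # estimate and its remaining uncertainty
```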
17 pages, 4839 KB  
Article
Maxent Modeling of Habitat Suitability for Alpine Musk Deer (Moschus chrysogaster) and Blue Sheep (Pseudois nayaur) in the Typical Canyons of the Sanjiangyuan Region
by Le Niu, Ping Li, Zhenzhen Hao and Junyong Ma
Sustainability 2026, 18(4), 1976; https://doi.org/10.3390/su18041976 - 14 Feb 2026
Viewed by 164
Abstract
Habitat degradation and fragmentation driven by climate change and human activities are major threats to wildlife, particularly in the ecologically sensitive Sanjiangyuan region on the Qinghai–Tibet Plateau. Alpine musk deer (Moschus chrysogaster) and blue sheep (Pseudois nayaur), two key ungulate species, face severe habitat challenges due to these environmental pressures. Understanding their habitat requirements and distribution patterns is critical for developing effective conservation strategies. This study applied the Maximum Entropy (MaxEnt) model to predict the habitat suitability of alpine musk deer and blue sheep in the characteristic canyons of the Sanjiangyuan region. Data from 55 infrared camera traps and 26 environmental variables, including climate, topography, land use, and human disturbance, were analyzed. The results indicated that annual mean temperature, altitude, temperature annual range, and distance to water were the most influential factors for both species. The suitable habitats for alpine musk deer and blue sheep were limited, covering only 9.61% and 10.84% of the study area, respectively. These areas were primarily distributed along the main stream of the Yellow River and its primary tributary canyons. The limited availability of high-quality habitats underscores the vulnerability of these species to ongoing habitat degradation and fragmentation. To effectively protect ungulate populations, we suggest continuously monitoring the trends of critical habitats, strengthening the protection of existing habitats, and improving the current conservation systems. The findings provide critical insights for conservation planning and management in the Sanjiangyuan region. Full article
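The suitable-area percentages quoted above are typically read off a thresholded MaxEnt output surface; a minimal sketch with an illustrative raster and cutoff:

```python
# Sketch: percent suitable area from a thresholded suitability raster.
import numpy as np

rng = np.random.default_rng(0)
suitability = rng.beta(2.0, 8.0, size=(500, 500))  # stand-in cloglog suitability raster
threshold = 0.5                                    # illustrative cutoff (often chosen by
                                                   # sensitivity-specificity criteria)
pct = 100.0 * (suitability > threshold).mean()
print(f"{pct:.2f}% of the study area classified as suitable")
```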