Search Results (4,518)

Search Parameters:
Keywords = entropy generation

16 pages, 1572 KB  
Article
Task-Aware Decoupled State-Space Model for Multi-Task Satellite Internet Evaluation
by Erlong Wei, Peixuan (Nolan) Kang, Yihong Wen and Kejian Song
Electronics 2026, 15(7), 1369; https://doi.org/10.3390/electronics15071369 - 25 Mar 2026
Abstract
Multi-task learning (MTL) is essential for satellite internet systems requiring simultaneous optimization of beam management, interference mitigation, resource allocation, and traffic prediction. However, existing evaluation methods rely predominantly on external performance metrics, neglecting internal dynamics governing task interactions. We propose TDS-Mamba (Task-Aware Decoupled State-Space Model), integrating selective state-space models with task-specific modulation for satellite networks. Our contributions include: (1) Task-Aware Decoupled S6 (TA-DS6) with hypernetwork-generated task-conditioned projection matrices; (2) Shared–Private State Decomposition disentangling cross-task representations from task-specific features; (3) Value-at-Risk (VaR) Gating for risk-sensitive optimization under varying orbital conditions; and (4) an internal diagnostic framework with Task-Specific Entropy and Interference Coefficient metrics. Experiments on LEO satellite constellation benchmarks show consistent improvements over the selected baselines and provide enhanced interpretability of multi-task dynamics via internal diagnostics.
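The hypernetwork idea in contribution (1) can be made concrete with a short sketch. The module below is a hypothetical simplification, not the paper's TA-DS6 block: the class name `TaskConditionedProjection`, the dimensions, and the two-layer hypernetwork are all assumptions.

```python
# Minimal sketch of hypernetwork-generated, task-conditioned projections
# (an illustration only; the actual TA-DS6 block is more involved).
import torch
import torch.nn as nn

class TaskConditionedProjection(nn.Module):
    def __init__(self, num_tasks: int, d_model: int, d_state: int, d_task: int = 16):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, d_task)
        # Hypernetwork: maps a task embedding to a flattened projection matrix.
        self.hyper = nn.Sequential(
            nn.Linear(d_task, 64), nn.ReLU(),
            nn.Linear(64, d_model * d_state),
        )
        self.d_model, self.d_state = d_model, d_state

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); task_id: (batch,)
        W = self.hyper(self.task_embed(task_id))      # (batch, d_model * d_state)
        W = W.view(-1, self.d_model, self.d_state)    # one projection per task
        return torch.einsum("bsd,bdn->bsn", x, W)     # task-conditioned features

proj = TaskConditionedProjection(num_tasks=4, d_model=32, d_state=8)
out = proj(torch.randn(2, 10, 32), torch.tensor([0, 3]))  # -> (2, 10, 8)
```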

25 pages, 29137 KB  
Article
An Empirical Study on Enhancing Large Language Models for Long-Term Conversations in Korean
by Hongjin Kim, Jeonghyun Kang, Yeajin Jang, Yujin Sim and Harksoo Kim
Appl. Sci. 2026, 16(7), 3175; https://doi.org/10.3390/app16073175 - 25 Mar 2026
Abstract
Large language models (LLMs) have shown strong performance in open-domain dialogue, yet they continue to struggle with long-term multi-session conversations (MSC), particularly in non-English languages such as Korean. In this work, we present a comprehensive empirical study on enhancing Korean MSC capabilities of LLMs through dataset construction, memory modeling, and parameter-efficient fine-tuning. We introduce an extended Korean MSC dataset that explicitly distinguishes between persona memory (long-term user attributes) and episode memory (short-term, event-driven information), enabling more effective memory management across sessions. Using this dataset, we evaluate LLM performance on three core MSC tasks: session summarization, memory update, and response generation. Our experiments reveal that Korean MSC is intrinsically more challenging than English MSC and that memory update and response generation require substantial reasoning ability. To address these challenges, we compare LoRA, DPO, MoE, CPT, Layer Tuning, and neuron-level tuning methods. Results consistently show that neuron tuning, guided by a novel language-specific neuron identification method based on activation scores and entropy, achieves superior performance and robustness, particularly in continual learning settings. Overall, our findings highlight neuron-level adaptation as an effective and interpretable approach for improving long-term conversational ability in low-resource languages.
(This article belongs to the Special Issue The Advanced Trends in Natural Language Processing)
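The neuron identification step can be illustrated with a minimal sketch. The scoring rule below (peak activation strength times how far the cross-language activation entropy falls below its maximum) is an assumed stand-in for the paper's activation-score-and-entropy method, with invented array shapes:

```python
# Sketch of ranking "language-specific" neurons from activation statistics:
# neurons that fire strongly for one language with low entropy across
# languages score highest. This scoring rule is an assumption, not the
# paper's exact formula.
import numpy as np

def language_neuron_scores(acts: dict) -> np.ndarray:
    # acts maps language -> (num_samples, num_neurons) activation matrix.
    means = np.stack([a.mean(axis=0) for a in acts.values()])   # (L, N)
    p = np.abs(means) / (np.abs(means).sum(axis=0, keepdims=True) + 1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0)              # (N,), low = specific
    max_act = np.abs(means).max(axis=0)
    return max_act * (np.log(len(acts)) - entropy)              # strong AND specific

rng = np.random.default_rng(0)
acts = {"ko": rng.random((100, 512)), "en": rng.random((100, 512))}
top = np.argsort(language_neuron_scores(acts))[::-1][:10]       # neurons to tune
```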

36 pages, 5862 KB  
Article
Reliability Analysis of Aerospace Blade Manufacturing Equipment: A Multi-Source Uncertainty FMECA Method for Five-Axis CNC Machine Tool Spindle Systems
by Muhao Han, Yufei Li, Hailong Tian, Yuzhi Sun, Zixuan Ni, Yunshenghao Qiu and Haoyuan Li
Machines 2026, 14(4), 360; https://doi.org/10.3390/machines14040360 - 25 Mar 2026
Abstract
Five-axis Computerized Numerical Control (CNC) machine tools play a pivotal role in the precision manufacturing of aeroengine turbine blades, where ultra-high reliability and accuracy are essential. Failure Mode, Effects and Criticality Analysis (FMECA) has been widely applied in the reliability assessment of such advanced machining systems due to its systematic evaluation of potential failure modes. However, traditional FMECA approaches often overlook the ambiguity of human cognition and the interdependence among expert evaluations, limiting their effectiveness in complex aerospace manufacturing environments. To address these issues, this paper proposes a novel FMECA framework based on generalized intuitionistic linguistic theory. A new Generalized Intuitionistic Linguistic Weighted Geometric Average (GILWGA) operator is introduced to couple multi-source expert information and quantify the fuzziness inherent in subjective assessments. Additionally, an intuitionistic linguistic entropy-based weighting scheme is developed to dynamically evaluate key risk factors, including severity, occurrence, detectability, and controllability. The proposed framework is applied to a case study involving the spindle system of a five-axis CNC machine tool used in aeroengine blade production. The results demonstrate that the proposed method offers more robust and consistent failure mode prioritization, providing effective decision support for reliability-centered maintenance in aerospace equipment manufacturing.
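A heavily simplified stand-in can illustrate the shape of the entropy-weighted aggregation pipeline. The real GILWGA operator acts on generalized intuitionistic linguistic values; the sketch below uses plain numeric expert ratings and a textbook entropy-weighting step purely for illustration:

```python
# Simplified stand-in for entropy-weighted geometric aggregation of risk
# ratings (the actual GILWGA operator on intuitionistic linguistic values
# is not reproduced here).
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    # X: (failure_modes, criteria), positive scores.
    P = X / X.sum(axis=0, keepdims=True)
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))  # entropy per criterion
    d = 1.0 - e                                                # divergence degree
    return d / d.sum()

def weighted_geometric(scores: np.ndarray, w: np.ndarray) -> np.ndarray:
    return np.prod(scores ** w, axis=1)        # aggregate per failure mode

# Rows: failure modes; columns: severity, occurrence, detectability, controllability.
ratings = np.array([[7, 4, 6, 5], [9, 2, 8, 6], [5, 6, 3, 4]], dtype=float)
priority = weighted_geometric(ratings, entropy_weights(ratings))
print(np.argsort(priority)[::-1])              # prioritized failure modes
```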

42 pages, 438 KB  
Article
An Approach to Fisher-Rao Metric for Infinite Dimensional Non-Parametric Information Geometry
by Bing Cheng and Howell Tong
Entropy 2026, 28(4), 374; https://doi.org/10.3390/e28040374 - 25 Mar 2026
Abstract
Non-parametric information geometry has long faced an “intractability barrier”: in the infinite-dimensional setting, the Fisher–Rao metric is a weak Riemannian metric functional that lacks a bounded inverse, rendering classical optimization and estimation techniques computationally inaccessible. This paper resolves this barrier by building the statistical manifold on the Orlicz space $L^{\Phi}_0(P_f)$ (the Pistone–Sempi manifold), which provides the necessary exponential integrability for score functions and a rigorous Fréchet differentiability for the Kullback–Leibler divergence. We introduce a novel Structural Decomposition of the Tangent Space ($T_f\mathcal{M} = S \oplus S^{\perp}$), where the infinite-dimensional space is split into a finite-dimensional covariate subspace ($S$)—representing the observable system—and its orthogonal complement ($S^{\perp}$). Through this decomposition, we derive the Covariate Fisher Information Matrix (cFIM), denoted $G_f$, which acts as the computable “Hilbertian slice” of the otherwise intractable metric functional. Key theoretical contributions include proving the Trace Theorem ($H_G(f) = \operatorname{Tr}(G_f)$) to identify G-entropy as a fundamental geometric invariant; demonstrating the Geometric Invariance of the cFIM as a covariant $(0,2)$-tensor under reparameterization; establishing the cFIM as the local Hessian of the KL-divergence; and characterizing the Efficiency Standard through a generalized Cramér–Rao Lower Bound for semi-parametric inference within the Orlicz manifold. Furthermore, we demonstrate that this framework provides a formal mathematical justification for the Manifold Hypothesis, as the structural decomposition naturally identifies the low-dimensional subspace where information is concentrated. By shifting the focus from the intractable global manifold to the tractable covariate geometry, this framework proves that statistical information is not a property of data alone, but an active geometric interaction between the environment (data), the system (covariate subspace), and the mechanism (Fisher–Rao connection).
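In coordinates, the quantities named above take the familiar Fisher-information form. The display below is a schematic reconstruction from the abstract's definitions, not a quotation of the paper:

```latex
% Schematic form of the covariate Fisher information matrix (cFIM):
% scores restricted to a finite basis {v_1, ..., v_k} of the covariate
% subspace S inside the tangent space T_f M = S \oplus S^{\perp}.
\[
  (G_f)_{ij} \;=\; \mathbb{E}_{P_f}\!\left[\, v_i \, v_j \,\right],
  \qquad v_i \in S \subset T_f\mathcal{M},
\]
\[
  H_G(f) \;=\; \operatorname{Tr}(G_f),
  \qquad
  \mathrm{KL}\bigl(P_f \,\|\, P_{f+\delta}\bigr)
    \;=\; \tfrac{1}{2}\,\delta^{\top} G_f\, \delta + o(\|\delta\|^2),
\]
% i.e., the cFIM is the local Hessian of the KL-divergence, and its trace
% is the G-entropy invariant named in the Trace Theorem.
```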
22 pages, 2673 KB  
Article
Autoencoder-Enhanced Hierarchical Mondrian Anonymization via Latent Representations
by Junpeng Hu, Tao Hu, Zhenwu Xu, Jinan Shen and Minghui Zheng
Entropy 2026, 28(4), 372; https://doi.org/10.3390/e28040372 - 25 Mar 2026
Abstract
Releasing structured microdata requires balancing utility and privacy under group-based disclosure risks. We propose AE-LRHMA, a hybrid anonymization framework that performs Mondrian-style hierarchical partitioning in an autoencoder-learned latent space and integrates local (k,e)-microaggregation. To explicitly control sensitive-value concentration and diversity within each equivalence class, we introduce a tunable constraint set consisting of k, a maximum sensitive proportion threshold, and an optional sensitive-entropy threshold (used as a hard gate when enabled and otherwise as a soft term in split scoring). The anonymized output is generated via standard interval/set generalization in the original space. Experiments on Adult and Bank Marketing demonstrate that AE-LRHMA yields lower information loss and more stable group structures than representative baselines under comparable settings. We further report linkage-attack-oriented risk metrics to empirically characterize relative disclosure trends without claiming formal guarantees, such as differential privacy.
(This article belongs to the Section Information Theory, Probability and Statistics)
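The latent-space Mondrian step can be sketched compactly. The recursive median split with a k-anonymity stopping rule below is a minimal illustration; the sensitive-proportion and entropy gates and the (k,e)-microaggregation of the full method are omitted:

```python
# Minimal sketch of Mondrian-style recursive partitioning in an
# autoencoder latent space with a k-anonymity stopping rule.
import numpy as np

def mondrian_partition(Z: np.ndarray, idx: np.ndarray, k: int, groups: list):
    # Z: (n, d) latent codes; idx: row indices of the current node.
    if len(idx) < 2 * k:                 # cannot split and keep both halves >= k
        groups.append(idx)
        return
    dim = np.argmax(Z[idx].max(axis=0) - Z[idx].min(axis=0))  # widest latent dim
    order = idx[np.argsort(Z[idx, dim])]
    mid = len(order) // 2                # median split on that dimension
    mondrian_partition(Z, order[:mid], k, groups)
    mondrian_partition(Z, order[mid:], k, groups)

Z = np.random.default_rng(1).normal(size=(1000, 8))  # stand-in for encoder output
groups = []
mondrian_partition(Z, np.arange(len(Z)), k=10, groups=groups)
assert all(len(g) >= 10 for g in groups)  # every equivalence class satisfies k
```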

24 pages, 2600 KB  
Article
A Normalized Shannon Entropy–CV Framework for Diagnosing Short-Term Surface Water Quality Instability from High-Frequency WQI Data in Southwest China
by Junran Kuang, Yu Zhang, Qingdong Liu, Jing Hu and Shaoqi Zhou
Sustainability 2026, 18(7), 3216; https://doi.org/10.3390/su18073216 - 25 Mar 2026
Abstract
High-frequency water quality monitoring generates large volumes of sub-daily observations, but concise and scalable indicators for diagnosing short-term instability remain limited. Using four-hourly records from 336 national automatic monitoring stations in Southwest China (November 2022–September 2024), we constructed a nine-parameter water quality index (WQI) and developed a normalized Shannon entropy–coefficient of variation (h–CV) framework to characterize short-term instability in fixed three-day windows. A composite separation index combining the Kolmogorov–Smirnov distance of pollution-event counts and the effect size of entropy distributions, together with bootstrap resampling, identified CV ≈ 0.10 as an operational threshold for high-fluctuation windows. The joint h–CV distribution revealed four typical short-term dynamic patterns and showed good consistency across three-, five-, and seven-day windows. At the station scale, instability hotspots were concentrated in southern Yunnan–Guizhou–Guangxi, the southeastern margins of the Sichuan Basin, and several mid-lower mainstream reaches, whereas alpine headwaters and upstream segments remained relatively stable. Overall, the proposed framework provides an interpretable and generalizable tool for short-term water-quality diagnosis, with practical value for risk zoning, early warning, and monitoring network optimization.
(This article belongs to the Section Environmental Sustainability and Applications)
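The two window-level indicators are straightforward to compute. The sketch below assumes 4-hourly values in a three-day window and a 10-bin histogram for the normalized entropy; the bin count is an assumption, while the CV ≈ 0.10 flag follows the abstract:

```python
# Window-level instability indicators: coefficient of variation (CV) and
# normalized Shannon entropy of WQI values in a fixed three-day window.
import numpy as np

def window_indicators(wqi: np.ndarray, n_bins: int = 10):
    cv = wqi.std() / wqi.mean()
    counts, _ = np.histogram(wqi, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    h = -(p * np.log(p)).sum() / np.log(n_bins)   # normalized to [0, 1]
    return h, cv

rng = np.random.default_rng(0)
window = 70 + 5 * rng.standard_normal(18)   # 3 days x 6 four-hourly observations
h, cv = window_indicators(window)
flag = cv > 0.10                            # high-fluctuation window per the threshold
```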

22 pages, 2100 KB  
Article
Oil Production, Net Energy, and Capital Dynamics: A System-Coupled Lotka–Volterra Approach
by Shunsuke Nakaya and Jun Matsushima
Energies 2026, 19(7), 1607; https://doi.org/10.3390/en19071607 - 25 Mar 2026
Abstract
Net energy—defined as the energy remaining after accounting for the energy required for resource extraction and processing—shapes the fundamental physical constraints of energy systems. Although the extended Energy Return on Investment (EROIext) incorporates extraction, refining, transportation, and end-use infrastructure, its long-term structural dynamics remain underexplored. This study applies a Single-Cycle Lotka–Volterra (SCLV) model to examine interactions between resource stock, capital accumulation, and EROIext in the global petroleum system. The model is calibrated using historical data from 1965 to 2012 to explore structural trajectories under simplified assumptions. Results indicate that production peaks endogenously around 2041 within the model framework, while EROIext declines and falls below unity by 2081 under the assumed structural relationships. These years represent model-derived structural outcomes rather than deterministic forecasts. Capital stock reaches its maximum at the same energetic threshold (EROIext = 1), marking an internally generated transition in the resource–capital system. An entropy-based indicator is introduced as a thermodynamic proxy mirroring the decline in energetic efficiency within the modeled subsystem. These findings show how energetic reinvestment constraints generate endogenous peak and threshold behavior in resource-dependent systems. The analysis offers a structural perspective on interactions between depletion, capital accumulation, and net energy under simplified thermodynamic assumptions. These results provide insights into long-term structural constraints of the oil system, which may inform energy planning and policy discussions under conditions of declining net energy availability.
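The predator-prey structure of the resource-capital coupling can be sketched as a two-equation ODE system. The coupling form and parameter values below are invented for illustration and are not the paper's calibrated SCLV model:

```python
# Illustrative resource-capital Lotka-Volterra system: capital grows on
# extracted energy and decays otherwise; the resource is depleted by the
# deployed capital. Parameters are arbitrary stand-ins.
import numpy as np
from scipy.integrate import solve_ivp

B, C, D = 0.002, 0.04, 0.001   # extraction, capital decay, reinvestment rates

def sclv(t, y):
    R, K = y                                  # resource stock, extraction capital
    return [-B * R * K, D * R * K - C * K]

sol = solve_ivp(sclv, (0, 200), [100.0, 1.0], dense_output=True)
t = np.linspace(0, 200, 400)
R, K = sol.sol(t)
production = B * R * K                        # extraction rate
peak_t = t[np.argmax(production)]             # endogenous production peak
```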

19 pages, 642 KB  
Article
Enhancing Type 1 Diabetes Polygenic Risk Prediction Through Neural Networks and Entropy-Derived Insights
by Antonio Nadal-Martínez, Guillermo Pérez-Solero, Sandra Ferreiro López, Jorge Blom-Dahl, Eduard Montanya, Marta Alonso-Bernáldez, Moises Shabot, Christian Binsch, Lukasz Szczerbinski, Adam Kretowski, Julián Nevado, Pablo Lapunzina, Robert Wagner and Jair Tenorio-Castano
Int. J. Mol. Sci. 2026, 27(7), 2966; https://doi.org/10.3390/ijms27072966 - 25 Mar 2026
Abstract
Type 1 diabetes (T1D) is an autoimmune disease with a strong genetic component (~70% heritability). Early identification of individuals at risk is crucial for early intervention or risk assessment. Although polygenic risk scores (PRS) have shown promise in risk assessment, most current approaches remain constrained by linear assumptions and limited generalizability. We aimed to develop a neural network-driven classifier using T1D-associated single nucleotide polymorphisms (SNPs). In addition, we explored the inclusion of an entropy-derived feature as a complementary variable, representing the degree of genetic variability within an individual’s genotype profile across the 67 T1D-associated SNPs, to evaluate its potential additive contribution to the model performance. We analyzed genotype data from 11,909 individuals in the UK Biobank (546 T1D cases and 11,363 controls). Sixty-seven well-known SNPs associated with T1D were utilized as inputs to the model, using two distinct allele-encoding strategies. A feed-forward neural network was evaluated under varying case–control ratios through five-fold cross-validation. Performance was assessed using the area under the receiver operating characteristic curve (AUC) on a held-out test set and on an external European cohort as a validation cohort. Across five-fold cross-validation, the best configuration achieved a median AUC of 0.903. On the held-out UK Biobank test set, the model generalized well, with an AUC of 0.8889 (95% CI: 0.8516–0.9262). A probability-based risk framework, constructed using five risk groups (“very low”, “low”, “intermediate”, “high”, and “very high” risk), yielded a negative predictive value (NPV) of 98.9% for the “very low” risk group and a positive predictive value (PPV) of 61.9% with a specificity of 97.3% for the “very high” risk group, assuming a 10% T1D prevalence. External validation in the German Diabetes Study reproduced clear case–control separation; for individuals with recent onset diabetes and glutamic acid decarboxylase antibodies (GADA+) vs. controls, specificity reached 91.9% in the “high” risk group (PPV of 94.3%) and 97.6% in the “very high” risk group (PPV of 95.7%). The proposed neural network reliably predicts T1D genetic risk using a compact SNP panel of 67 SNPs and maintains accuracy in both internal and external European cohorts. Its probabilistic output enables clinically interpretable risk thresholds, while entropy features contributed modestly to performance. These results demonstrate that a neural network-based approach achieves discriminative performance that is comparable to established T1D genetic risk models, while offering flexible probability-based risk stratification and architectural extensibility for future integration of additional features.
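The entropy-derived feature can be illustrated directly. Treating the 0/1/2 genotype-category frequencies of a 67-SNP profile as a distribution, as below, is one plausible construction, not necessarily the paper's exact recipe:

```python
# Sketch of an entropy-derived input feature: Shannon entropy of an
# individual's genotype profile over 67 SNPs coded as allele counts.
import numpy as np

def genotype_entropy(genotypes: np.ndarray) -> float:
    # genotypes: length-67 vector of allele counts in {0, 1, 2}.
    counts = np.bincount(genotypes, minlength=3)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())     # in bits, at most log2(3)

rng = np.random.default_rng(42)
profile = rng.integers(0, 3, size=67)
x = np.concatenate([profile, [genotype_entropy(profile)]])  # 68-dim model input
```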

105 pages, 2483 KB  
Article
Thermodynamics à la Souriau on Kähler Non-Compact Symmetric Spaces for Cartan Neural Networks
by Pietro G. Fré, Alexander S. Sorin and Mario Trigiante
Entropy 2026, 28(4), 365; https://doi.org/10.3390/e28040365 - 24 Mar 2026
Abstract
In this paper, we clarify several issues concerning the abstract geometrical formulation of thermodynamics on non-compact symmetric spaces U/H that are the mathematical model of hidden layers in the new paradigm of Cartan Neural Networks. We introduce a clear-cut distinction between the generalized thermodynamics associated with Integrable Dynamical Systems and the challenging proposal of Gibbs probability distributions on U/H provided by generalized thermodynamics à la Souriau. Our main result is the proof that the U/H supporting such Gibbs distributions are only the Kähler ones. Furthermore, for the latter, we solve the problem of determining the space of temperatures, namely, of Lie algebra elements for which the partition function converges. The space of generalized temperatures is the orbit, under the adjoint action of U, of a positivity domain in the Cartan subalgebra $\mathcal{C} \subset \mathbb{H}$ of the maximal compact subalgebra $\mathbb{H} \subset \mathbb{U}$. We illustrate how our explicit constructions for the Poincaré and Siegel planes might be extended to the whole class of Calabi–Vesentini manifolds utilizing Paint Group symmetry. Furthermore, we claim that Rao’s, Chentsov’s, and Amari’s Information Geometry and the thermodynamical geometry of Ruppeiner and Lychagin are the very same thing. In particular, we provide an explicit study of thermodynamical geometry for the Poincaré plane. The key feature of the Gibbs probability distributions in this setup is their covariance under the entire group of symmetries U. The partition function is invariant under U transformations, and the set of its arguments, namely the generalized temperatures, can always be reduced to a minimal set whose cardinality is equal to the rank of the compact denominator group $H \subset U$.
(This article belongs to the Collection Feature Papers in Information Theory)
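For orientation, Souriau's covariant Gibbs construction has the following schematic form (with momentum map J and generalized temperature β in the Lie algebra); convergence of Z(β) is precisely the question the paper settles for the Kähler U/H:

```latex
% Souriau's covariant Gibbs distribution on a manifold M (schematic):
\[
  \rho_{\beta}(m) \;=\; \frac{e^{-\langle J(m),\,\beta\rangle}}{Z(\beta)},
  \qquad
  Z(\beta) \;=\; \int_{M} e^{-\langle J(m),\,\beta\rangle}\, d\mu(m).
\]
% Covariance under the symmetry group U: for every g in U,
\[
  \rho_{\mathrm{Ad}_g\beta}\bigl(g\cdot m\bigr) \;=\; \rho_{\beta}(m),
  \qquad
  Z(\mathrm{Ad}_g\,\beta) \;=\; Z(\beta),
\]
% which is why the temperatures reduce to an adjoint orbit of a domain in
% the Cartan subalgebra of the maximal compact subalgebra.
```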
28 pages, 2584 KB  
Article
Improving Cross-Domain Generalization in Brain MRIs via Feature Space Stability Regularization
by Shawon Chakrabarty Kakon, Harishik Dev Singh Jamwal and Saurabh Singh
Mathematics 2026, 14(6), 1082; https://doi.org/10.3390/math14061082 - 23 Mar 2026
Abstract
Deep learning models for brain tumor classification from magnetic resonance imaging (MRI) often achieve high in-dataset accuracy but exhibit substantial performance degradation when evaluated on unseen clinical data due to domain shift arising from variations in imaging protocols and intensity distributions. Existing approaches largely rely on architectural scaling or parameter-level regularization, which do not explicitly constrain the stability of learned feature representations. This manuscript proposes Feature Space Stability Regularization (FSSR), a lightweight and model-agnostic training framework that enforces consistency in latent feature representations under realistic, MRI-safe intensity perturbations. FSSR introduces an auxiliary feature space loss that minimizes the $\ell_2$ distance between normalized embeddings extracted from the input MRI images and their intensity-perturbed counterparts, alongside standard cross-entropy supervision. This manuscript evaluated FSSR across three convolutional backbones, ResNet-18, ResNet-34, and DenseNet-121, trained exclusively on the Kaggle Brain MRI dataset. Feature space analysis demonstrates that FSSR consistently reduces mean feature deviation and variance across architectures, indicating more stable internal representations. Generalization is assessed via zero-shot evaluation on the fully unseen BRISC-2025 dataset without retraining or fine-tuning. On the source domain, the best-performing configuration achieves 97.71% accuracy and 97.55% macro-F1. Under domain shift, FSSR improves external accuracy by up to 8.20 percentage points and the macro-F1 by up to 12.50 percentage points, with DenseNet-121 achieving 96.70% accuracy and 96.87% macro-F1 at a domain gap of only 0.94%. Confusion matrix analysis further reveals reduced class confusion and more stable recall across challenging tumor categories, demonstrating that feature-level stability is a key factor for robust brain MRI classification under domain shift.
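The FSSR objective is easy to state in code. The sketch below assumes a backbone that returns (embedding, logits), a multiplicative intensity jitter as the perturbation, and λ = 0.1; all three are illustrative choices, not the paper's exact settings:

```python
# Minimal sketch of the FSSR objective: cross-entropy plus a stability term
# pulling normalized embeddings of an image and its intensity-perturbed
# copy together.
import torch
import torch.nn.functional as F

def fssr_loss(model, x, y, lam: float = 0.1):
    x_pert = torch.clamp(x * (1 + 0.05 * torch.randn_like(x)), 0, 1)  # intensity jitter
    feats, logits = model(x)              # model returns (embedding, class logits)
    feats_pert, _ = model(x_pert)
    ce = F.cross_entropy(logits, y)
    stability = (F.normalize(feats, dim=1)
                 - F.normalize(feats_pert, dim=1)).pow(2).sum(dim=1).mean()
    return ce + lam * stability           # enforce feature-space consistency

class Net(torch.nn.Module):               # toy stand-in backbone for the demo
    def __init__(self):
        super().__init__()
        self.body = torch.nn.Sequential(torch.nn.Flatten(),
                                        torch.nn.Linear(3 * 8 * 8, 32))
        self.head = torch.nn.Linear(32, 4)
    def forward(self, x):
        f = self.body(x)
        return f, self.head(f)

loss = fssr_loss(Net(), torch.rand(2, 3, 8, 8), torch.tensor([0, 1]))
```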

34 pages, 701 KB  
Article
Developing a Composite Sustainable Smart City Performance Assessment Index: A Novel Indexing Model and Cross-Country Application
by Mert Unal and Mehtap Dursun
Systems 2026, 14(3), 330; https://doi.org/10.3390/systems14030330 - 23 Mar 2026
Abstract
Cities are increasingly expected to address digital transformation and sustainability challenges at the same time. However, existing urban indices generally approach smart city and sustainable city perspectives separately, which limits their ability to capture the integrated nature of contemporary urban development. In addition, many index-based studies rely on similar methodological choices. This study develops a composite Sustainable Smart City (SSC) index supported by a systematic scoring framework that brings smartness and sustainability together. The proposed framework follows a step-by-step procedure covering data preparation, normalization, weighting, aggregation, and final scoring. To address information overlap among indicators, a Redundancy-Penalized Entropy Weighting (RPEW) approach is applied. Then, overall SSC scores are calculated using a soft non-compensatory aggregation to emphasize balanced performance across dimensions. The framework is empirically illustrated through a cross-country case study including 38 OECD (Organization for Economic Co-Operation and Development) countries. A machine-learning-based polynomial forecasting approach is used for a limited number of indicators to deal with data gaps, allowing the assessment to reflect more up-to-date conditions. The results highlight clear differences in SSC performance and show that strong outcomes in a single dimension are not sufficient to achieve high overall SSC scores. Instead, balanced progress across economic, digital, environmental, governance, mobility, and social dimensions plays an important role. In addition, the proposed framework provides a practical basis for comparative analysis, benchmarking, and policy-oriented evaluation of smart and sustainable urban development.
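One plausible reading of redundancy-penalized entropy weighting is sketched below: classical entropy weights damped by each indicator's mean absolute correlation with the others. The multiplicative penalty is an assumption, since the abstract does not spell out the RPEW formula:

```python
# Sketch of redundancy-penalized entropy weighting: information content
# from the entropy method, damped by pairwise indicator correlation.
import numpy as np

def rpew_weights(X: np.ndarray) -> np.ndarray:
    # X: (countries, indicators), min-max normalized to (0, 1].
    P = X / X.sum(axis=0, keepdims=True)
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))  # entropy per indicator
    info = 1.0 - e                                             # information content
    corr = np.abs(np.corrcoef(X, rowvar=False))
    redundancy = (corr.sum(axis=0) - 1.0) / (X.shape[1] - 1)   # mean |corr| with others
    w = info * (1.0 - redundancy)                              # penalize overlap
    return w / w.sum()

X = np.random.default_rng(3).random((38, 12)) * 0.99 + 0.01    # 38 countries, 12 indicators
w = rpew_weights(X)
```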

20 pages, 2033 KB  
Article
On the Predictability of Green Finance Markets: An Assessment Based on Fractal and Shannon Entropy
by Sonia Benghiat and Salim Lahmiri
Fractal Fract. 2026, 10(3), 205; https://doi.org/10.3390/fractalfract10030205 - 22 Mar 2026
Abstract
Econophysics is an interdisciplinary field that applies physics concepts to economic and financial systems. By utilizing tools such as statistical physics, including fractal analysis and entropy measures, econophysics helps model the complex and non-linear dynamics of equity markets. This paper examines the intrinsic dynamics and regularity in information content in green finance markets (carbon, clean energy, and sustainability markets) by means of rescaled range analysis (R/S), detrended fluctuation analysis (DFA), the fractionally integrated generalized auto-regressive conditionally heteroskedastic (FIGARCH) process, and Shannon entropy (SE). The empirical results can be summarized as follows. First, prices in all markets are persistent; however, returns are likely random as estimated Hurst exponents are close to 0.5. Second, the FIGARCH process shows that volatility series in carbon and sustainability markets are persistent, whilst volatility in clean energy is anti-persistent. Third, in carbon and sustainability markets, entropy is high in prices compared to returns and volatility series. On the contrary, the clean energy market shows lower entropy for prices than for returns and volatility. In sum, it is concluded that price and volatility series are predictable, whilst return series are not. Finally, based on a rolling window framework, it is concluded that the COVID-19 pandemic and the Russia–Ukraine war have altered long memory and randomness in all three green finance markets.
(This article belongs to the Special Issue Fractal Approaches and Machine Learning in Financial Markets)
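Two of the diagnostics can be sketched in a few lines: a rescaled range (R/S) estimate of the Hurst exponent via regression over several window sizes, and Shannon entropy of a discretized series. The window sizes and bin count below are illustrative:

```python
# Rescaled range (R/S) Hurst estimate and Shannon entropy of a series.
import numpy as np

def rs_hurst(x: np.ndarray, sizes=(16, 32, 64, 128)) -> float:
    rs, ns = [], []
    for n in sizes:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = z.max(axis=1) - z.min(axis=1)          # range of cumulative deviations
        s = chunks.std(axis=1) + 1e-12             # standard deviation per chunk
        rs.append(np.log((r / s).mean()))
        ns.append(np.log(n))
    return float(np.polyfit(ns, rs, 1)[0])         # slope ~ Hurst exponent

def shannon_entropy(x: np.ndarray, bins: int = 20) -> float:
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

returns = np.random.default_rng(7).standard_normal(2048)
print(rs_hurst(returns), shannon_entropy(returns))  # H near 0.5 for random returns
```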

19 pages, 1184 KB  
Article
Hardware-Accelerated Cryptographic Random Engine for Simulation-Oriented Systems
by Meera Gladis Kurian and Yuhua Chen
Electronics 2026, 15(6), 1297; https://doi.org/10.3390/electronics15061297 - 20 Mar 2026
Abstract
Modern computing platforms increasingly rely on random number generators (RNGs) for modeling probabilistic processes in simulation, probabilistic computing, and system validation. They are also essential for cryptographic operations such as key generation, authenticated encryption, and digital signatures. Deterministic Random Bit Generators (DRBGs), as specified in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-90A, provide a standardized method for expanding entropy into cryptographically strong pseudorandom sequences. This work presents the design and Field Programmable Gate Array (FPGA) implementation of a hash-based DRBG using Ascon-Hash256, a lightweight, quantum-resistant hash function from the NIST-standardized Ascon cryptographic suite. It implements hash-based derivation, instantiation, generation, and reseeding of the generator via iterative hash invocations and state updates. Leveraging Ascon’s sponge-based structure, the design achieves efficient entropy absorption and diffusion while maintaining an area-efficient FPGA architecture, making it well suited for resource-constrained platforms. The diffusion properties of the proposed DRBG are evaluated through avalanche and reproducibility analyses, confirming strong sensitivity to input variations and secure, repeatable operation. Moreover, Monte Carlo and stochastic-diffusion evaluation of the generated bitstreams demonstrates correct convergence and statistically consistent behavior. These results confirm that the proposed hash-based DRBG provides reproducible, hardware-efficient, and cryptographically secure random numbers suitable for next-generation neuromorphic systems, probabilistic computing systems, and Internet of Things (IoT) devices.
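The instantiate/generate/reseed structure of a hash-based DRBG can be sketched as follows. This is a structural illustration only, not an SP 800-90A-compliant implementation, and SHA-256 stands in for Ascon-Hash256, which is not in the Python standard library:

```python
# Structural sketch of a hash-based DRBG: instantiation, iterative hash
# invocations for output generation, and state update / reseeding.
import hashlib
import os

class HashDRBG:
    def __init__(self, entropy: bytes):
        self.V = hashlib.sha256(b"instantiate" + entropy).digest()   # internal state

    def reseed(self, entropy: bytes) -> None:
        self.V = hashlib.sha256(b"reseed" + self.V + entropy).digest()

    def generate(self, n_bytes: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n_bytes:                    # iterative hash invocations
            counter += 1
            out += hashlib.sha256(self.V + counter.to_bytes(4, "big")).digest()
        self.V = hashlib.sha256(b"update" + self.V).digest()          # state update
        return out[:n_bytes]

drbg = HashDRBG(os.urandom(32))
bits = drbg.generate(64)     # 512 pseudorandom bits
```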

23 pages, 10822 KB  
Article
Off-Road Autonomous Vehicle Semantic Segmentation and Spatial Overlay Video Assembly
by Itai Dror, Omer Aviv and Ofer Hadar
Sensors 2026, 26(6), 1944; https://doi.org/10.3390/s26061944 - 19 Mar 2026
Abstract
Autonomous systems are expanding rapidly, driving a demand for robust perception technologies capable of navigating challenging, unstructured environments. While urban autonomy has made significant progress, off-road environments pose unique challenges, including dynamic terrain and limited communication infrastructure. This research addresses these challenges by introducing a novel three-part solution for off-road autonomous vehicles. First, we present a large-scale off-road dataset curated to capture the visual complexity and variability of unstructured environments, providing a realistic training ground that supports improved model generalization. Second, we propose a Confusion-Aware Loss (CAL) that dynamically penalizes systematic misclassifications based on class-level confusion statistics. When combined with cross-entropy, CAL improves segmentation mean Intersection over Union (mIoU) on the off-road test set from 68.66% to 70.06% and achieves cross-domain gains of up to ~0.49% mIoU on the Cityscapes dataset. Third, leveraging semantic segmentation as an intermediate representation, we introduce a spatial overlay video encoding scheme that preserves high-fidelity RGB information in semantically critical regions while compressing non-essential background regions. Experimental results demonstrate Peak Signal-to-Noise Ratio (PSNR) improvements of up to +5 dB and Video Multi-Method Assessment Fusion (VMAF) gains of up to +40 points under lossy compression, enabling efficient and reliable off-road autonomous operation. This integrated approach provides a robust framework for real-time remote operation in bandwidth-constrained environments.
(This article belongs to the Special Issue Machine Learning in Image/Video Processing and Sensing)
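One way to realize a confusion-aware penalty on top of cross-entropy is sketched below: probability mass that a sample places on the classes its true class is habitually confused with gets penalized. This additive form is an assumption, not the paper's exact CAL definition:

```python
# Sketch of a confusion-aware penalty added to cross-entropy: class pairs
# that are systematically confused (per running confusion statistics)
# receive extra weight.
import torch
import torch.nn.functional as F

def cal_loss(logits, target, confusion, lam: float = 0.5):
    # confusion: (C, C) row-normalized confusion statistics (rows = true class).
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)                        # (B, C)
    pair_weight = confusion[target].clone()                 # confusion row per sample
    pair_weight[torch.arange(len(target)), target] = 0.0    # ignore the correct class
    penalty = (pair_weight * probs).sum(dim=1).mean()       # mass on habitual confusions
    return ce + lam * penalty

C = 5
confusion = torch.full((C, C), 1.0 / C)                     # stand-in statistics
logits = torch.randn(8, C, requires_grad=True)
target = torch.randint(0, C, (8,))
loss = cal_loss(logits, target, confusion)
loss.backward()
```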

23 pages, 1806 KB  
Article
Harnessing the Industrial Digitalization for Carbon Productivity: New Insights from China
by Xiaochong Cui, Yuan Zhang and Feier Yan
Sustainability 2026, 18(6), 3032; https://doi.org/10.3390/su18063032 - 19 Mar 2026
Abstract
Industrial digitalization reshapes production processes and can potentially improve carbon productivity by optimizing factor allocation and energy efficiency. Using panel data for 30 Chinese provinces from 2012 to 2022, this study constructs a comprehensive industrial digitalization index with four dimensions and 13 indicators [...] Read more.
Industrial digitalization reshapes production processes and can potentially improve carbon productivity by optimizing factor allocation and energy efficiency. Using panel data for 30 Chinese provinces from 2012 to 2022, this study constructs a comprehensive industrial digitalization index with four dimensions and 13 indicators using the entropy method and examines its impact on carbon productivity (GDP per unit of CO2 emissions). We employ the Dagum Gini coefficient and kernel density estimation to describe regional disparities and their evolution, a dynamic panel threshold model to test the nonlinear role of industrial transformation and upgrading, and a spatial Durbin model to identify spatial spillover effects. The results indicate that industrial digitalization has risen nationwide but remains uneven; industrial digitalization significantly enhances carbon productivity, with stronger effects in the eastern and western regions and in plain areas; the effect exhibits a double-threshold pattern with respect to industrial transformation and upgrading, implying a U-shaped relationship; and industrial digitalization generates positive spatial spillovers. These findings suggest that policy should coordinate digital infrastructure investment with industrial upgrading and regional collaboration to accelerate low-carbon, high-efficiency growth. Full article
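The regional-disparity measurement starts from the Gini coefficient. The sketch below computes only the overall coefficient across provinces; the Dagum decomposition into within-region, between-region, and transvariation components is omitted:

```python
# Overall Gini coefficient of carbon productivity across provinces
# (the Dagum within/between/transvariation decomposition is not shown).
import numpy as np

def gini(y: np.ndarray) -> float:
    y = np.asarray(y, dtype=float)
    diff_sum = np.abs(y[:, None] - y[None, :]).sum()   # all pairwise gaps
    return diff_sum / (2 * len(y) ** 2 * y.mean())

cp = np.random.default_rng(5).lognormal(mean=0.0, sigma=0.4, size=30)  # 30 provinces
print(round(gini(cp), 3))
```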
