Search Results (1,243)

Search Parameters:
Keywords = standard entropy

30 pages, 1867 KB  
Article
Asymmetric and Time-Varying Lag Structures in Bitcoin’s Kimchi Premium: Rolling-Window Evidence from Granger Causality and Transfer Entropy
by Insu Choi
Mathematics 2026, 14(9), 1501; https://doi.org/10.3390/math14091501 - 29 Apr 2026
Abstract
The Kimchi Premium—the persistent price wedge between Bitcoin on Korean and global exchanges—has resisted standard no-arbitrage explanations for over seven years, raising the question of how macro-financial shocks transmit into this segmented market. Prior work relies on static, linear estimators applied over short lag horizons, leaving the timing, nonlinearity, and regime dependence of the Premium’s adjustment largely untested. We address this gap using daily data from December 2017 to December 2025 (T=2105) and rolling windows of 60, 120, and 240 trading days. Linear dynamics are tested with Granger causality (GC) and—because our return series are strongly leptokurtic (Bitcoin excess kurtosis =7.29, S&P 500 =14.77)—complemented by the Kraskov k-nearest-neighbor transfer entropy (TE) estimator, which captures conditional dependence in higher moments. Inference rests on a stationary bootstrap with pre-specified lag grids to avoid optimistic argmax bias, block-permutation tests for window-level detection rates, and Benjamini–Hochberg and Storey q-value corrections for multiple testing. Robustness is examined through conditional GC and conditional TE controlling for USD/KRW as a common factor, Toda–Yamamoto tests on price levels, a percentage-premium specification, a U.S. trading-day shift to address asynchrony, and winsorization sensitivity. We deliberately adopt a conservative inference stance at the panel level: window-level detection rates and pointwise bootstrap p-values are reported, but claims of “causality” are reserved for the within-window descriptive ranking of channels and horizons, with the panel-level null assessed by block-permutation and Benjamini–Hochberg corrections. Four empirical patterns emerge under this framing. First, Johansen tests identify a single cointegrating vector between Korean and global Bitcoin prices (trace =91.99 vs. 
5% critical value 15.49), establishing the Premium as a stationary deviation from long-run parity, while none of the four macro indicators cointegrate with global Bitcoin. Second, GC detection rates for the Premium concentrate at the 240-day horizon: Gold → Premium reaches 23.3% (95% Wilson CI [19.3%,27.8%]), KOSPI 200 → Premium 16.3%, and USD/KRW → Premium 16.8%. Third, the Kraskov TE reveals an asymmetry invisible to linear tests: for Gold → Premium at w=240, the median optimal lag is five days, against one day for Gold → Bitcoin (chi-square p=0.017). Percentage-premium detection rates are substantially higher (e.g., 59.1% for Gold → Premium at w=240), indicating that the dollar-wedge specification understates causal strength. Fourth, block-permutation tests do not reject the global null of no window-level excess rejection, and Benjamini–Hochberg rejects no pair at α=0.05; we therefore read the detection-rate evidence as descriptive of localized, crisis-dependent transmission episodes rather than as panel-level rejection of pointwise non-causality, and the paper’s contribution is accordingly positioned at the level of channel ranking, lag structure, and regime decomposition rather than at the level of blanket causality claims. Crisis decomposition shows transmission is concentrated in the ETF-Halving regime (29.8% mean GC detection) and below 4% during the Terra–Luna (2.9%) and Russia–Ukraine (3.6%) episodes. The findings situate the Premium as a stationary error-correction term whose adjustment is dominated by exchange-rate and commodity channels rather than U.S. equities, with implications for arbitrage models, regulatory monitoring, and information-flow analyses of segmented crypto markets. Full article
(This article belongs to the Special Issue Advances in Data-Driven Modeling: Theory and Applications)
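The transfer-entropy channel analysis described above can be illustrated with a toy estimator. The sketch below is a minimal binned plug-in estimator at lag 1 — the paper itself uses the Kraskov k-nearest-neighbor estimator, which avoids binning entirely — showing the quantity T(X→Y) = Σ p(y⁺, y, x) log[p(y⁺|y, x) / p(y⁺|y)] being estimated; all names are illustrative.

```python
from collections import Counter

import numpy as np

def binned_transfer_entropy(x, y, bins=4):
    """Crude plug-in estimate of T(X -> Y) at lag 1 via quantile bins.

    Illustrative only: the paper uses the Kraskov kNN estimator,
    which works directly on continuous data.
    """
    def disc(z):
        # Discretize into quantile bins: labels 0 .. bins-1
        edges = np.quantile(z, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(z, edges)

    xd, yd = disc(x), disc(y)
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    n = len(y_next)
    # Counts needed for the conditional probabilities
    c_yy = Counter(zip(y_next, y_now))
    c_y = Counter(y_now)
    c_yx = Counter(zip(y_now, x_now))
    triples, counts = np.unique(
        np.stack([y_next, y_now, x_now]), axis=1, return_counts=True)
    te = 0.0
    for (a, b, c), k in zip(triples.T, counts):
        p_abc = k / n
        p_cond_full = k / c_yx[(b, c)]        # p(y+ | y, x)
        p_cond_part = c_yy[(a, b)] / c_y[b]   # p(y+ | y)
        te += p_abc * np.log(p_cond_full / p_cond_part)
    return float(te)
```

As the empirical conditional mutual information I(Y⁺; X | Y), the estimate is nonnegative and should be visibly larger in the direction of actual information flow.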
24 pages, 9473 KB  
Article
Delineation of High-Standard Farmland Based on Urban Expansion Probability and Compactness: A Case Study of Guangzhou
by Zilin Fan, Xiaxue Weng, Lisiren Cao and Jinyao Lin
Agriculture 2026, 16(9), 970; https://doi.org/10.3390/agriculture16090970 - 28 Apr 2026
Abstract
Protecting high-standard farmland is pivotal for sustainable land utilization and long-term regional food security. However, delineating high-standard farmland in metropolitan areas often neglects the influences of future urban expansion and farmland morphology, critical factors for enhancing farmland productivity. To address this, this study established a systematic evaluation framework for high-standard farmland delineation. It employed the patch-generating land use simulation model to forecast the probability of future urban expansion while employing the analytic hierarchy process and entropy weight method to calculate combined weights for evaluating farmland suitability. An ant colony optimization algorithm was implemented to improve farmland suitability and morphological compactness, thereby scientifically delineating high-standard farmland. Results from Guangzhou reveal that farmland area decreased between 2000 and 2020, primarily driven by urban expansion. The delineated high-standard farmland covers 682.18 km², achieving dual optimization of farmland suitability and compactness. The results are predominantly located within permanent basic farmland and grain production functional zones. This finding aligns with previous studies and existing plans, demonstrating the methodology’s superiority. Furthermore, this study categorizes Guangzhou’s high-standard farmland into four grades and proposes targeted policy recommendations. In summary, this study presents a new and scientific approach for high-standard farmland delineation, offering valuable policy support for sustainable farmland management. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
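The entropy weight method used here for the combined suitability weights admits a compact sketch: indicators whose values vary more across samples carry more information and receive larger weights. This minimal version assumes a pre-normalized indicator matrix and is not the paper's exact implementation.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators with more cross-sample
    variation receive larger weights.

    X is (samples x indicators), assumed already min-max normalized
    so every entry lies in [0, 1] with larger = better.
    """
    n, m = X.shape
    P = X / X.sum(axis=0, keepdims=True)      # column-wise shares
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)        # entropy of each indicator
    d = 1.0 - e                               # degree of divergence
    return d / d.sum()                        # normalized weights
```

A constant indicator has maximal entropy (no information) and gets weight ≈ 0; all weight flows to the indicators that actually discriminate between samples.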
14 pages, 679 KB  
Article
Post-Quantum Entropy as a Service for Embedded Systems
by Javier Blanco-Romero, Yuri Melissa Garcia-Niño, Florina Almenares Mendoza, Daniel Díaz-Sánchez, Carlos García-Rubio and Celeste Campo
Sensors 2026, 26(9), 2737; https://doi.org/10.3390/s26092737 - 28 Apr 2026
Abstract
Embedded cryptography stands or falls on entropy quality, yet small devices have few trustworthy sources and little tolerance for heavyweight protocols. We build a Quantum Entropy as a Service (QEaaS) system that moves QRNG-derived entropy from a Quantis device to ESP32-class clients over post-quantum-secured channels. On the server side, the design exposes two paths: direct quantum entropy through a custom OpenSSL provider and mixed entropy through the Linux system pool. On the client side, we extend libcoap’s Zephyr support, integrate wolfSSL-based DTLS 1.3 into the CoAP stack, and add a BLAKE2s entropy pool that preserves the standard Zephyr extraction interface while introducing an injection API for server-provided entropy. Benchmarks on ESP32 hardware, targeting 100 iterations per configuration, show that ML-KEM-512 completes a DTLS 1.3 handshake in 313 ms on average without certificate verification, 35% faster than ECDHE P-256. Pairing ML-KEM-512 with ML-DSA-44 lowers the mean to 225 ms. Certificate verification adds roughly 194 ms for ECDSA but only 17 ms for ML-DSA-44, so the fully post-quantum configuration remains 63% faster than classical ECDHE P-256 with ECDSA even under full verification. Local BLAKE2s pool operations stay below 0.1 ms combined. On this platform, post-quantum key exchange and authentication are not only feasible; they are faster than the classical baseline. Full article
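The BLAKE2s entropy pool with an injection API can be sketched in a few lines. This is an illustrative Python analogue of the design described — a hash-based pool, injection of server-provided entropy, ratcheting extraction — not the paper's Zephyr/C implementation; the class and method names are assumptions.

```python
import hashlib

class Blake2sPool:
    """Toy hash-based entropy pool with an injection API for
    server-supplied (e.g. QRNG) entropy. Illustrative only."""

    def __init__(self, seed: bytes = b""):
        self._state = hashlib.blake2s(seed).digest()

    def inject(self, entropy: bytes) -> None:
        # Fold external entropy into the pool state
        self._state = hashlib.blake2s(self._state + entropy).digest()

    def extract(self, n: int) -> bytes:
        # Derive n output bytes and ratchet the state forward so
        # past outputs cannot be recomputed from the current state
        out = b""
        while len(out) < n:
            out += hashlib.blake2s(self._state + b"out").digest()
            self._state = hashlib.blake2s(self._state + b"ratchet").digest()
        return out[:n]
```

A client would seed the pool locally, fold in QEaaS bytes as they arrive with `pool.inject(server_entropy)`, and then call `pool.extract(32)` whenever key material is needed.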
23 pages, 3889 KB  
Article
Clinical Correlation and Postoperative Findings of Thigh-Based Electrocardiography in Aortic Stenosis
by Aline dos Santos Silva, Miguel Velhote Correia, Andreia Gonçalves da Costa, Rui J. Cerqueira and Hugo Plácido da Silva
J. Sens. Actuator Netw. 2026, 15(3), 35; https://doi.org/10.3390/jsan15030035 - 28 Apr 2026
Abstract
Previous studies on healthy controls suggest the added value of thigh-based Electrocardiography (ECG), which collects data using sensors embedded in a toilet seat for unobtrusive signal acquisition. However, further evidence regarding its clinical feasibility is needed; with this work, we investigated three complementary aspects: signal quality, morphological correlation with standard ECG leads, and the system’s potential for heart rate variability (HRV) analysis in patients undergoing aortic valve replacement. This work was divided into two main phases. In the first, 32 healthy volunteers underwent simultaneous ECG recordings using both a standard 12-lead ECG system and the thigh-based system. Signal Quality Index (SQI) analysis revealed that 56.25% of the experimental signals were classified as excellent, and over 62.5% of recordings showed a strong correlation with Lead I of the clinical ECG. These findings extend the state of the art by further characterising the quality and relevance of the captured signals. In the second phase, two patients with severe aortic stenosis were monitored before and after surgical valve replacement. HRV metrics derived from the thigh-based ECG captured distinct autonomic responses: one patient showed significant postoperative improvement in global and parasympathetic modulation (increased SDNN, RMSSD, and Sample Entropy), while the other exhibited reduced variability and complexity, potentially indicating impaired autonomic recovery. These results highlight the feasibility of thigh-based ECG data acquisition for passive, longitudinal cardiac health monitoring in everyday environments and its applicability for pre- and postoperative autonomic assessment. Full article
(This article belongs to the Section Actuators, Sensors and Devices)
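The HRV metrics named above have standard textbook definitions that are easy to state in code. The sketch below computes SDNN and RMSSD from NN intervals in milliseconds; it is not tied to the paper's processing pipeline, and Sample Entropy is omitted.

```python
import numpy as np

def sdnn(nn_ms):
    """SDNN: sample standard deviation of NN (beat-to-beat)
    intervals, in ms. A global HRV measure."""
    return float(np.std(nn_ms, ddof=1))

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN-interval
    differences, in ms. Reflects parasympathetic modulation."""
    d = np.diff(nn_ms)
    return float(np.sqrt(np.mean(d ** 2)))
```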
39 pages, 1037 KB  
Article
IoT-Oriented Digital Signature Defense Against Single-Trace Belief Propagation Attacks in Post-Quantum Cryptography
by Maksim Iavich and Nursulu Kapalova
J. Cybersecur. Priv. 2026, 6(3), 77; https://doi.org/10.3390/jcp6030077 - 27 Apr 2026
Abstract
Post-quantum cryptographic implementations in Internet-of-Things (IoT) devices are significantly threatened by physical side-channel attacks, where practical attack risks are increased by physical accessibility and resource limitations. In particular, recent work has shown that belief propagation-based attacks can recover secret keys from lattice-based digital signatures using only a single side-channel trace of the Number Theoretic Transform (NTT). This work introduces the Quantum-Randomized Number Theoretic Transform (QR-NTT), an implementation-level defense mechanism that integrates quantum-derived entropy directly into the execution flow of lattice-based signature algorithms. Rather than treating randomness as a static input, QR-NTT uses quantum entropy to introduce controlled variability in execution ordering, arithmetic factor usage, and memory access behavior while preserving mathematical correctness and constant-time execution. The proposed framework is designed for embedded platforms and remains compatible with existing post-quantum cryptographic standards and IoT communication protocols. A complete implementation on an ARM Cortex-M4 platform, coupled with commercial quantum random number generator (QRNG) hardware, demonstrates that QR-NTT significantly degrades the effectiveness of template matching and belief propagation attacks. Experimental evaluation shows a reduction in single-trace attack success rates from over 90% to below 3% and an increase of approximately two orders of magnitude in the number of traces required for successful key recovery. These security gains are achieved with moderate overheads of 18.3% in execution time and 1.8 KB of additional memory while remaining well within practical IoT constraints. The results indicate that quantum-derived entropy can be leveraged as a practical implementation-level defense against physical attacks, complementing algorithmic post-quantum security. 
QR-NTT demonstrates a viable path toward strengthening the real-world resilience of post-quantum IoT systems without sacrificing deployability. Full article
(This article belongs to the Section Cryptography and Cryptology)
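The core idea of QR-NTT — randomizing the execution of an NTT without changing its output — can be conveyed with a toy example. The sketch below uses a naive O(N²) transform over Z₁₇ with a shuffled output-evaluation order; a real defense randomizes far more (twiddle-factor usage, memory access) under constant-time constraints and draws its randomness from a QRNG, none of which this illustration attempts.

```python
import random

Q, N, ROOT = 17, 8, 2   # toy parameters: 2 has multiplicative order 8 mod 17

def ntt(a, root=ROOT, rng=None):
    """Naive O(N^2) number theoretic transform whose per-output
    evaluation order is shuffled: the side-channel trace ordering
    varies run to run, but the result is always the same."""
    rng = rng or random.Random()
    out = [0] * N
    order = list(range(N))
    rng.shuffle(order)              # randomized execution order
    for k in order:
        acc = 0
        for j in range(N):
            acc = (acc + a[j] * pow(root, j * k, Q)) % Q
        out[k] = acc
    return out

def intt(a):
    """Inverse NTT: transform with root^-1, then scale by N^-1."""
    inv_root = pow(ROOT, Q - 2, Q)  # 2^-1 mod 17
    inv_n = pow(N, Q - 2, Q)        # 8^-1 mod 17
    return [(x * inv_n) % Q for x in ntt(a, root=inv_root)]
```

The key property is that correctness is independent of the shuffled order, so the countermeasure costs only the shuffle itself.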
49 pages, 499 KB  
Article
Brauer-Type Configurations Associated with the Boolean Geometry of the Grassmann Algebra
by Agustín Moreno Cañadas and Andrés Sarrazola Alzate
Symmetry 2026, 18(5), 744; https://doi.org/10.3390/sym18050744 - 26 Apr 2026
Abstract
We construct and analyze a family of support-defined Brauer-type configurations canonically associated with the Boolean geometry underlying the Grassmann algebra. The construction is governed by an x-support map on monomial labels, which identifies the vertex set with the Boolean lattice P([n]). This identification yields a Boolean support quiver isomorphic to the directed Hasse diagram of P([n]), equivalently, to an oriented hypercube. We then equip the family with a canonical cyclic ordering at each vertex and obtain a genuine connected reduced Brauer configuration in the standard sense, together with its associated Brauer configuration algebra and its standard Brauer quiver. A ghost-variable mechanism is introduced to obtain a connected realization without altering any support-controlled invariants. We prove that polygon membership, valencies, multiplicities, Boolean stratification, and the support quiver are invariant under support-preserving ghost relabelings. We also give an explicit description of the standard Brauer quiver and show that it is different from the Boolean support quiver. On the algebraic side, we derive closed formulas for the center dimension, the algebra dimension, and the normalization constant of the induced weighted distribution. On the probabilistic side, we distinguish the vertex entropy from the layer entropy, establish an exact decomposition of the former by Hamming layers, and show that the layer distribution is asymptotically concentrated on the middle layers, while extremal vertices and any fixed maximal path contribute a negligible fraction of the total weight. As a consequence, the layer entropy satisfies a logarithmic asymptotic law. We also investigate geometric consequences of the Boolean model transported through the support identification. 
Coordinate projections produce a rigidity phenomenon for antipodal pairs, providing a combinatorial analogue of Greenberger–Horne–Zeilinger (GHZ)-type fragility, whereas the first Boolean layer exhibits a persistence property analogous to W-type robustness. Together, these results exhibit a concrete bridge between Grassmann combinatorics, Brauer configuration theory, hypercube geometry, and entropy asymptotics. Full article
(This article belongs to the Special Issue Symmetries in Algebraic Combinatorics and Their Applications)
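The Hamming-layer concentration statement has a simple quantitative shadow under the uniform vertex measure (the paper works with an induced weighted distribution, so this is only an analogy): layer k of P([n]) carries probability C(n,k)/2ⁿ, i.e. a Binomial(n, 1/2) law that concentrates on the middle layers and whose entropy grows logarithmically in n.

```python
import math

def layer_distribution(n):
    """Probability of each Hamming layer of the Boolean lattice
    P([n]) under the uniform vertex measure: p_k = C(n, k) / 2^n."""
    return [math.comb(n, k) / 2 ** n for k in range(n + 1)]

def layer_entropy(n):
    """Shannon entropy (nats) of the layer distribution. For large n
    this behaves like the Binomial(n, 1/2) entropy,
    ~ (1/2) ln(pi * e * n / 2), i.e. a logarithmic law in n."""
    p = layer_distribution(n)
    return -sum(q * math.log(q) for q in p if q > 0)
```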
24 pages, 3261 KB  
Article
Adaptive Exploration Proximal Policy Optimization for Efficient Robotic Continuous Control
by Jiajian Li, Mingrui Li and Hanshen Li
Symmetry 2026, 18(5), 717; https://doi.org/10.3390/sym18050717 - 24 Apr 2026
Abstract
Proximal Policy Optimization (PPO) is widely adopted for robotic continuous control, yet it can suffer from insufficient exploration and unstable policy updates in high-dimensional action spaces. This paper proposes Adaptive Exploration Proximal Policy Optimization (AE-PPO), an enhanced PPO framework that integrates (i) adaptive clipping, which adjusts the clipping range according to the observed magnitude of policy updates to better balance stability and learning progress, and (ii) adaptive entropy regularization, which schedules the entropy weight across training to maintain effective exploration while avoiding excessive randomness. AE-PPO is evaluated on standard MuJoCo continuous control benchmarks (e.g., Walker2d, HalfCheetah, and Humanoid) and compared with PPO and representative baselines such as Trust Region Policy Optimization (TRPO) and Soft Actor Critic (SAC). The results show that AE-PPO achieves faster convergence and an improved final performance with reduced training variance, demonstrating more stable and efficient learning in challenging high-dimensional tasks. Full article
(This article belongs to the Section Computer)
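The two adaptive mechanisms can be sketched as small update rules. The specific thresholds, multipliers, and linear schedule below are illustrative guesses, not AE-PPO's published formulas.

```python
def adapt_clip(clip, kl, target_kl=0.01, lo=0.05, hi=0.3):
    """Shrink the PPO clip range when the observed policy shift
    (approximate KL) overshoots a target, widen it when updates are
    timid; constants are illustrative."""
    if kl > 1.5 * target_kl:
        clip *= 0.8
    elif kl < 0.5 * target_kl:
        clip *= 1.25
    return min(max(clip, lo), hi)   # keep within sane bounds

def entropy_coef(step, total_steps, start=0.02, end=0.001):
    """Linearly anneal the entropy bonus so exploration stays high
    early in training and decays late."""
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)
```

In a training loop, `adapt_clip` would be called once per policy update with the measured KL, and `entropy_coef` evaluated per step to weight the entropy term of the PPO objective.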
17 pages, 1912 KB  
Article
Spatiotemporal Patterns and Drivers of High-Quality Development in China’s Rural Tourism
by Haotian Sui and Jiaqi Yan
Systems 2026, 14(5), 460; https://doi.org/10.3390/systems14050460 - 23 Apr 2026
Abstract
With the rapid expansion of rural tourism in China, high-quality development has become a key concern for academics and policymakers. Existing studies have focused primarily on economic and industrial growth, with limited attention paid to development quality from the perspective of resident well-being. Using panel data from 30 Chinese provinces from 2012 to 2022, this study establishes a multidimensional evaluation framework for high-quality rural tourism. We employed the entropy weight method, Theil index, and quadratic assignment procedure analysis to examine its level, regional differences, and driving factors. The findings revealed that: (1) the overall level of rural tourism development remained relatively low but rose steadily from 0.064 (2012) to 0.150 (2022) (134.38% cumulative growth), driven by supply-side improvements and demand-side expansion. (2) Pronounced regional inequalities existed: eastern provinces had higher overall levels but larger internal gaps, whereas central/western provinces had lower overall levels but smaller internal differences, with intra-regional disparities accounting for over 66% of the national inequality. (3) The tourism market and transportation were universal key drivers, but the underlying mechanisms differed: the ecological environment exerted greater influence in the east, while public services and living standards were more critical in the central/western regions. By incorporating resident well-being into a systemic analytical framework, this study reconceptualizes high-quality rural tourism as an adaptive socio-ecological system shaped by multilevel interactions among the economy, society, and the environment. The results provide empirical evidence and systemic governance insights for promoting balanced and sustainable rural tourism development. Full article
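The Theil index and its within/between decomposition used for the regional-inequality analysis can be sketched directly; the grouping and data below are illustrative, not the paper's provincial panel.

```python
import numpy as np

def theil(x):
    """Theil T index: 0 for perfect equality, larger = more unequal."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def theil_decompose(groups):
    """Split overall Theil T into within- and between-group parts
    for a dict of region -> array of positive values. The group
    share s_g = n_g * mu_g / (n * mu) weights each term."""
    allx = np.concatenate(list(groups.values()))
    mu, n = allx.mean(), len(allx)
    within = sum(len(g) * np.mean(g) / (n * mu) * theil(g)
                 for g in groups.values())
    between = sum(len(g) * np.mean(g) / (n * mu)
                  * np.log(np.mean(g) / mu) for g in groups.values())
    return within, between
```

The decomposition is exact: within + between equals the overall index, which is what lets the study attribute over 66% of national inequality to intra-regional disparities.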
27 pages, 13300 KB  
Article
Information-Entropic Deep Learning with Gaussian Process Regularisation for Uncertainty-Aware Quantitative Trading
by Feng Lin and Huaping Sun
Entropy 2026, 28(5), 485; https://doi.org/10.3390/e28050485 - 23 Apr 2026
Abstract
Quantitative trading systems require predictive models that simultaneously deliver accurate forecasts, calibrated uncertainty quantification, and actionable risk measures. This paper proposes an information-theoretic semiparametric regression framework combining a convolutional neural network–Transformer (CNN–Transformer) network for nonlinear temporal dependencies with a Gaussian process (GP) prior for residual autocorrelation and calibrated predictive distributions. Three theoretical results are established: an identifiability theorem guarantees joint recoverability of the nonparametric and GP components; a consistency theorem shows that the penalised maximum likelihood estimator converges at the rate n^(1/(2+d_eff)); and a coverage theorem proves asymptotic nominal coverage of the GP’s credible intervals. The framework enables an entropy-regulated trading module where predictive differential entropy informs position sizing via an uncertainty-penalised Kelly criterion, Kullback–Leibler divergence quantifies model uncertainty, and CVaR-constrained optimisation controls the tail risk. Simulations show the method outperforms the CNN, long short-term memory (LSTM), Transformer, XGBoost, random forest, least absolute shrinkage and selection operator (LASSO), and standard GP regression approaches. Backtesting on four Chinese A-share stocks yielded annualised returns of 15.9–22.4% with Sharpe ratios of 0.49–0.62, maximum drawdowns below 15%, and daily 95% CVaR reductions of 28–31% relative to a full-Kelly baseline, confirming both predictive accuracy and risk management effectiveness. Full article
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
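The uncertainty-penalised Kelly sizing can be sketched as follows. The exponential shrinkage in predictive entropy is a plausible guess at how "predictive differential entropy informs position sizing", not the paper's actual rule; recall that for a Gaussian predictive distribution the differential entropy is ½ ln(2πeσ²), so wider predictive intervals mean higher entropy and smaller positions.

```python
import numpy as np

def penalised_kelly(mu, sigma2, pred_entropy, lam=0.5, cap=1.0):
    """Fractional Kelly position whose size shrinks as predictive
    differential entropy grows. The shrink rule exp(-lam * H) and
    the position cap are illustrative assumptions."""
    kelly = mu / sigma2                       # classic Kelly fraction for Gaussian returns
    scale = np.exp(-lam * max(pred_entropy, 0.0))
    return float(np.clip(kelly * scale, -cap, cap))
```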
26 pages, 9631 KB  
Article
A Multi-Teacher Knowledge Distillation Framework for Enhancing the Robustness of Automated Sperm Morphology Assessment
by Osman Emre Tutay, Hamza Osman Ilhan, Hakkı Uzun, Merve Huner Yigit and Gorkem Serbes
Diagnostics 2026, 16(8), 1230; https://doi.org/10.3390/diagnostics16081230 - 20 Apr 2026
Abstract
Background/Objectives: The manual analysis of sperm morphology, crucial for male infertility diagnosis, is subjective and time-consuming. Automated methods using deep learning offer a promising alternative; however, standard deep models are prone to overfitting when applied to small, heavily unbalanced clinical datasets, limiting their generalization capability. This study proposes a knowledge distillation approach that functions as a strong regularizer, improving the robustness of automated sperm morphology analysis. Methods: We utilize soft distillation to transfer knowledge from a set of high-capacity teacher models to a smaller student model (SwinV2-base). The teacher architectures include SwinV2-large, EfficientNetV2-m, and ConvNeXtV2-large. To maximize performance, we investigated two distillation strategies: a single-teacher approach, where the student learns from one specific architecture, and a multi-teacher approach, where the student learns from an averaged response of multiple teachers. The models were trained on the imbalanced Hi-LabSpermMorpho dataset, which comprises 18 different sperm morphology categories derived from three differently stained (BesLab, Histoplus, GBL) sample sets. We adopted a cross-dataset training approach in which the teacher models were fine-tuned using the combination of two stained datasets, and the student model was trained on the third, distinct stained dataset. The global loss function combined cross-entropy loss with Kullback–Leibler divergence, employing the teacher’s soft probabilities to prevent the student from becoming over-confident. Results: The experimental results demonstrate that the student model trained in a multi-teacher setup with augmentation and soft distillation attains higher accuracies (70.94% on BesLab, 73.61% on Histoplus, 71.63% on GBL) than the baseline models.
Conclusions: This approach mitigates challenges associated with data scarcity and heavily unbalanced sperm morphology datasets, providing consistent improvements and offering a highly generalizable solution for clinical diagnostics. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
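The global loss described — cross-entropy on hard labels plus KL divergence to the teachers' averaged soft probabilities — can be sketched in NumPy. The temperature T, the weight alpha, and the plain averaging over teachers are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits_list, labels,
                 T=2.0, alpha=0.5):
    """Multi-teacher soft distillation: hard-label cross-entropy plus
    temperature-scaled KL to the averaged teacher distribution.
    The T*T factor keeps soft-target gradients on scale."""
    p_s = softmax(student_logits, T)
    p_t = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    kl = float(np.mean(np.sum(p_t * np.log(p_t / p_s), axis=-1))) * T * T
    hard = softmax(student_logits)
    ce = float(-np.mean(np.log(hard[np.arange(len(labels)), labels])))
    return alpha * ce + (1.0 - alpha) * kl
```

The KL term pulls the student toward the teachers' full output distribution, which is what discourages the over-confidence the abstract mentions.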
22 pages, 2068 KB  
Article
Conditional Agglomeration in China’s Northeast Rust Belt: Density, Structural Orientation, and Ownership-Mixing Entropy
by Omar Abu Risha, Jifan Ren, Mohammed Ismail Alhussam and Mohamad Ali Alhussam
Entropy 2026, 28(4), 471; https://doi.org/10.3390/e28040471 - 20 Apr 2026
Abstract
Northeast China’s rust-belt cities have faced persistent concerns about stagnating labor productivity amid structural change. This paper examines how the productivity payoff to urban density depends on local economic structure and ownership composition using an annual panel of prefecture-level cities. We estimate two-way fixed-effects models with city and year effects and city-clustered standard errors, complemented by dynamic specifications and additional robustness checks. The results show a robust positive within-city association between population density and labor productivity. This density premium is structure-conditioned: the productivity payoff to density is significantly larger in city-years that are more industry-oriented. Information-theoretic measures further show that sectoral and ownership composition matter in distinct ways. A normalized entropy measure based on 19 all-city sectoral employment categories is positively associated with labor productivity, while its interaction with density is negative and significant, indicating that the density premium is weaker in more sectorally balanced city-years. A normalized four-category ownership entropy measure, constructed from SOE, private/self-employed, collective, and other employment shares, is positively associated with labor productivity and interacts positively with density, indicating a stronger density–productivity association in city-years with a more balanced ownership composition. Collectively, the findings suggest that urban density is not a uniform engine of productivity: its payoff depends on whether dense city economies are organized around productive sectoral linkages and a sufficiently balanced ownership environment. Overall, the evidence supports a conditional agglomeration view in which productivity dynamics in Northeast China reflect the interaction of density, structural orientation, sectoral dispersion, and ownership mixing. Full article
(This article belongs to the Special Issue Complexity in Urban Systems)
26 pages, 4343 KB  
Article
A Multi-Task Deep Learning Approach for Precipitation Retrieval from Spaceborne Microwave Imagers
by Xingyu Xiang, Leilei Kou, Jian Shang, Yanqing Xie and Liguo Zhang
Remote Sens. 2026, 18(8), 1242; https://doi.org/10.3390/rs18081242 - 19 Apr 2026
Abstract
Spaceborne microwave imagers are vital for monitoring global precipitation due to their wide swath and high sensitivity. This study proposes a deep learning approach that integrates a U-Net with a multi-task learning (MTL) framework. The model was separately trained over land and ocean using GPM Microwave Imager (GMI) brightness temperatures, with collocated precipitation rates and types from the Dual-frequency Precipitation Radar (DPR) as labels. This combines the accuracy of radars with the coverage of imagers to produce high-precision, wide-swath precipitation estimates. In the MTL setup, near-surface precipitation rate retrieval is the main task, and precipitation type classification is the auxiliary task. A composite loss (weighted MSE and quantile regression) is used for the main task, and weighted cross-entropy for the auxiliary task. Residual blocks and an attention mechanism are incorporated to improve physical representation and generalization, thereby significantly enhancing the model’s capability to retrieve heavy precipitation. The model was trained on 2015–2024 GPM data and evaluated on an independent six-month 2025 GMI dataset. Compared to a standard U-Net, the MTL model achieved significant gains: the Pearson Correlation Coefficient (PCC) increased by 9.7% (ocean) and 13.7% (land), and the Critical Success Index (CSI) by 10.7% (ocean) and 10.8% (land). The method was also applied to the FY-3G Microwave Radiation Imager (MWRI-RM). In case studies, it outperformed the official product, achieving average increases of 20.1% in PCC and 15.7% in CSI. Validation against the FY-3G Precipitation Measurement Radar (June–August 2024) yielded PCC = 0.757, RMSE = 1.588 mm h⁻¹, and MAE = 0.355 mm h⁻¹ over ocean, and PCC = 0.691, RMSE = 2.007 mm h⁻¹, and MAE = 0.692 mm h⁻¹ over land. The study demonstrates that the MTL-enhanced U-Net significantly improves the accuracy of spaceborne microwave imager rainfall retrieval and shows robust practical applicability. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Remote Sensing for Weather and Climate)
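The composite loss described in the abstract (weighted MSE plus quantile regression for the rate task, weighted cross-entropy for the type task) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the task weights, quantile level, heavy-rain threshold, and up-weighting factor are all assumed values.

```python
import numpy as np

# Hedged sketch of an MTL composite loss. All weights, the quantile level
# tau, and the heavy-rain threshold are illustrative assumptions.

def weighted_mse(y_true, y_pred, heavy_weight=5.0, threshold=10.0):
    """MSE that up-weights heavy-precipitation samples above `threshold` mm/h."""
    w = np.where(y_true > threshold, heavy_weight, 1.0)
    return np.mean(w * (y_true - y_pred) ** 2)

def quantile_loss(y_true, y_pred, tau=0.9):
    """Pinball loss: with tau > 0.5, under-prediction is penalized more."""
    e = y_true - y_pred
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

def weighted_cross_entropy(p_true, p_pred, class_weights, eps=1e-12):
    """Per-class weighted CE for the precipitation-type auxiliary task."""
    return -np.mean(np.sum(class_weights * p_true * np.log(p_pred + eps), axis=-1))

def mtl_loss(rate_true, rate_pred, type_true, type_pred, class_weights,
             w_rate=1.0, w_quant=0.5, w_type=0.3):
    """Main-task regression terms plus the auxiliary classification term."""
    return (w_rate * weighted_mse(rate_true, rate_pred)
            + w_quant * quantile_loss(rate_true, rate_pred)
            + w_type * weighted_cross_entropy(type_true, type_pred, class_weights))
```

The asymmetric pinball term is what biases the retrieval toward not missing heavy rain, complementing the sample-weighted MSE.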

23 pages, 8136 KB  
Article
Fault Prediction Method of Boost Converter Based on Multi-Modal Components and Temporal Convolutional Networks
by Jiaying Li, Chengye Zhu, Yuhang Dong and Min Xia
Energies 2026, 19(8), 1974; https://doi.org/10.3390/en19081974 - 19 Apr 2026
Abstract
During long-term operation, power electronic converters are jointly affected by component degradation and operational disturbances, leading to pronounced nonstationary and multi-scale characteristics in output-voltage signals, which pose challenges for fault prediction. To address the degradation forecasting problem of Boost converter output voltage, this paper proposes a multi-scale temporal modeling method that integrates multivariate variational mode decomposition, distribution entropy-based complexity features, and a temporal convolutional network. Multivariate variational mode decomposition is employed to achieve frequency-aligned decomposition of the voltage signal, enabling effective separation of dynamic components at different scales. Distribution entropy is then introduced to characterize the evolution of local structural complexity in each mode, and multi-channel complexity feature sequences are constructed accordingly. Based on these features, a temporal convolutional network is used to perform unified modeling of short-term fluctuations and long-term degradation trends. Experimental results demonstrate that the proposed approach achieves consistently high accuracy across multiple independent runs, with average RMSE ranging from 0.0111 to 0.0179 and average MAPE from 1.15% to 1.84%. The low standard deviations further confirm its robustness for degradation trend prediction under varying operating conditions. Full article
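The distribution entropy feature used above can be illustrated with a minimal NumPy sketch: embed the signal, take the empirical distribution of pairwise inter-vector distances, and compute its normalized Shannon entropy. The embedding dimension and bin count here are assumed parameters, and this is a generic DistEn estimator rather than the authors' exact implementation.

```python
import numpy as np

# Illustrative distribution entropy (DistEn) of a 1-D signal; m and n_bins
# are assumed parameters, not values from the paper.

def distribution_entropy(x, m=2, n_bins=64):
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    # Delay-embed the signal into n overlapping m-length vectors.
    emb = np.stack([x[i:i + n] for i in range(m)], axis=1)
    # Pairwise Chebyshev (max-abs) distances between embedded vectors.
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    d = d[np.triu_indices(n, k=1)]
    # Empirical probability distribution of the distances.
    hist, _ = np.histogram(d, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    # Shannon entropy, normalized to [0, 1] by the maximum log2(n_bins).
    return -np.sum(p * np.log2(p)) / np.log2(n_bins)
```

A constant signal yields zero entropy, while an irregular signal spreads its distance distribution across bins and scores higher; sliding this estimator over each decomposed mode produces the multi-channel complexity sequences fed to the temporal convolutional network.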

21 pages, 2238 KB  
Article
Game-Theoretic Cost-Sensitive Adversarial Training for Robust Cloud Intrusion Detection Against GAN-Based Evasion Attacks
by Jianbo Ding, Zijian Shen and Wenhe Liu
Appl. Sci. 2026, 16(8), 3944; https://doi.org/10.3390/app16083944 - 18 Apr 2026
Abstract
Cloud-based intrusion detection systems (IDSs) increasingly rely on deep learning classifiers to identify malicious traffic; however, this reliance exposes them to adversarial evasion attacks in which adversaries craft near-imperceptible perturbations to bypass detection. Existing defenses based on conventional adversarial training often recover robustness against known perturbation patterns at the cost of degraded detection accuracy on canonical attack categories—a robustness–accuracy trade-off that remains an open challenge in the field. In this paper, we propose GT-CSAT (Game-Theoretic Cost-Sensitive Adversarial Training), a novel defense framework tailored for cloud security environments. GT-CSAT couples an improved Wasserstein GAN with Gradient Penalty (WGAN-GP) threat generator—conditioned on attack semantics to simulate functionally consistent and highly covert traffic variants—with a minimax adversarial training loop governed by a game-theoretic cost-sensitive loss function. The proposed loss function assigns asymmetric misclassification penalties derived from a two-player zero-sum payoff matrix, enabling the detector to maintain vigilance over both novel adversarial variants and well-characterized conventional threats simultaneously. Specifically, misclassifying an adversarially perturbed attack as benign incurs a strictly higher penalty than the symmetric cross-entropy baseline, while the cost weights are dynamically adapted via a Nash equilibrium-inspired update rule during training. We conduct comprehensive experiments on the Cloud Vulnerabilities Dataset (CVD), CICIDS-2017, and UNSW-NB15, which encompass diverse cloud-specific attack scenarios including denial-of-service, port scanning, brute-force, and SQL injection traffic. 
Under six representative evasion strategies—FGSM, PGD, C&W, BIM, DeepFool, and IDSGAN-style black-box perturbations—GT-CSAT achieves an average robust accuracy of 94.3%, surpassing standard adversarial training by 6.8 percentage points and the undefended baseline by 21.4 percentage points, while preserving clean-traffic detection at 97.1%. These results confirm that the game-theoretic cost structure effectively decouples robustness from accuracy, yielding a Pareto-superior detection profile relative to competing baselines across all evaluated threat models. The source code and experimental configurations have been publicly released to facilitate reproducibility. Full article
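The asymmetric-penalty idea behind the loss can be sketched as a cost-weighted cross-entropy driven by a misclassification cost matrix, with a multiplicative update that raises the attack-miss penalty as the miss rate grows. The cost values and the update rule below are illustrative assumptions in the spirit of the paper, not the published GT-CSAT formulation.

```python
import numpy as np

# cost[i, j] = penalty for predicting class j when the true class is i
# (0 = benign, 1 = attack). Missing an attack costs more than a false
# alarm; the value 3.0 is an assumed illustration.
COST = np.array([[0.0, 1.0],
                 [3.0, 0.0]])

def cost_sensitive_ce(y_true, probs, cost=COST, eps=1e-12):
    """Cross-entropy where each sample is scaled by the penalty of
    flipping its label, so missed attacks dominate the loss."""
    w = cost[y_true, 1 - y_true]                      # per-sample penalty
    p_correct = probs[np.arange(len(y_true)), y_true]
    return np.mean(-w * np.log(p_correct + eps))

def update_costs(cost, attack_miss_rate, lr=0.5):
    """Illustrative adaptive update: raise the attack-miss penalty when
    the detector's miss rate on adversarial attacks grows."""
    new = cost.copy()
    new[1, 0] *= (1.0 + lr * attack_miss_rate)
    return new
```

With this weighting, an adversarial attack misclassified as benign contributes three times the loss of the symmetric case, which is what keeps the detector from trading conventional-attack accuracy for adversarial robustness.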

32 pages, 12782 KB  
Article
Aerodynamic Optimization of Relay Nozzle Using a Chebyshev KAN Surrogate Model Integration and an Improved Multi-Objective Red-Billed Blue Magpie Optimizer
by Min Shen, Ziqing Zhang, Guanxing Qin, Dahongnian Zhou, Lizhen Du and Lianqing Yu
Biomimetics 2026, 11(4), 282; https://doi.org/10.3390/biomimetics11040282 - 18 Apr 2026
Abstract
In air jet looms, relay nozzles are critical components in governing airflow velocity and air consumption during the weft insertion process. Although computational fluid dynamics (CFD) offers high-fidelity simulation for aerodynamic analysis, its computational burden hinders its practicality in iterative aerodynamic design of relay nozzles. To address this challenge, this study proposes a data-driven framework integrating a Chebyshev polynomial Kolmogorov–Arnold Network (Chebyshev KAN) surrogate model with an Improved Multi-objective Red-billed Blue Magpie Optimizer (IMORBMO). The accuracy of the Chebyshev KAN model was benchmarked against conventional multilayer perceptrons (MLP), convolutional neural networks (CNN), and the standard Kolmogorov–Arnold Network (KAN). Experimental results demonstrate that the Chebyshev KAN model achieves the lowest mean absolute error (MAE): 0.103 for airflow velocity and 0.115 for air consumption. Building upon non-dominated sorting and crowding distance strategies, IMORBMO incorporates an information entropy-based adaptive mutation mechanism to improve the convergence, diversity, and uniformity of the Pareto-optimal solutions. Comprehensive evaluations on the ZDT and WFG benchmark suites confirm that IMORBMO consistently attains the best or highly competitive performance, yielding the lowest generational distance (GD) and inverted generational distance (IGD) values and the highest hypervolume (HV). Applied to the aerodynamic optimization of a relay nozzle, the proposed framework delivers an optimal aerodynamic design that increases airflow velocity by 10.5% while reducing air consumption by 15.4%, as verified by CFD simulation. The steady-state flow field was simulated by solving the Reynolds-averaged Navier–Stokes equations with the k–ω turbulence model in Fluent 2025.R2, applying no-slip conditions on the relay nozzle walls and pressure boundary conditions at the inlet and outlet.
This work establishes a computationally efficient and accurate optimization paradigm that holds significant promise for aerodynamic design and other complex real-world engineering applications. Full article
(This article belongs to the Section Biological Optimisation and Management)
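The core idea of a Chebyshev KAN layer can be sketched compactly: each input-output edge carries a learnable Chebyshev polynomial expansion instead of a fixed activation, with inputs squashed into [-1, 1] and the polynomials built by the standard recurrence. The degree, initialization scale, and class shape below are assumptions for illustration, not the surrogate model's actual configuration.

```python
import numpy as np

# Sketch of one Chebyshev KAN layer; degree and init scale are assumed.
class ChebyshevKANLayer:
    def __init__(self, in_dim, out_dim, degree=4, seed=0):
        rng = np.random.default_rng(seed)
        # coeffs[i, o, k]: weight of T_k on the edge from input i to output o.
        self.coeffs = rng.normal(0.0, 1.0 / (in_dim * (degree + 1)),
                                 size=(in_dim, out_dim, degree + 1))
        self.degree = degree

    def __call__(self, x):
        # Squash inputs into [-1, 1], the domain of Chebyshev polynomials.
        x = np.tanh(x)                              # (batch, in_dim)
        T = [np.ones_like(x), x]                    # T_0 = 1, T_1 = x
        for _ in range(2, self.degree + 1):
            T.append(2.0 * x * T[-1] - T[-2])       # T_k = 2x T_{k-1} - T_{k-2}
        T = np.stack(T, axis=-1)                    # (batch, in_dim, degree+1)
        # Sum each edge's polynomial expansion into its output unit.
        return np.einsum('bik,iok->bo', T, self.coeffs)
```

Stacking two or three such layers gives a smooth, cheaply evaluable surrogate that the multi-objective optimizer can query thousands of times in place of full CFD runs.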
