Search Results (3,267)

Search Parameters:
Keywords = complex-valued information

21 pages, 4191 KB  
Article
Classifying Protein-DNA/RNA Interactions Using Interpolation-Based Encoding and Highlighting Physicochemical Properties via Machine Learning
by Jesús Guadalupe Cabello-Lima, Patricio Adrián Zapata-Morín and Juan Horacio Espinoza-Rodríguez
Information 2025, 16(11), 947; https://doi.org/10.3390/info16110947 - 1 Nov 2025
Abstract
Protein–DNA and protein–RNA interactions are central to gene regulation and genetic disease, yet experimental identification remains costly and complex. Machine learning (ML) offers an efficient alternative, though challenges persist in representing protein sequences due to residue variability, dimensionality issues, and the risk of losing biological context. Traditional approaches such as k-mer counting or neural network encodings provide standardized sequence representations but often demand high computational resources and may obscure functional information. To address these limitations, a novel encoding method based on interpolation of physicochemical properties (PCPs) is introduced. Discrete PCP values are transformed into continuous functions using logarithmic enhancement, highlighting residues that contribute most to nucleic acid interactions while preserving biological relevance across variable sequence lengths. Statistical features extracted from the resulting spectra via Tsfresh are then used for binary classification of DNA- and RNA-binding proteins. Six classifiers were evaluated, and the proposed method achieved up to 99% accuracy, precision, recall, and F1 score when amino acid highlighting was applied, compared with 66% without highlighting. Benchmarking against k-mer and neural network approaches confirmed superior efficiency and reliability, underscoring the potential of this method for protein interaction prediction. Our framework may be extended to multiclass problems and applied to the study of protein variants, offering a scalable tool for broader protein interaction prediction. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
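A minimal sketch of the encoding idea as described in the abstract, not the authors' implementation: discrete per-residue physicochemical property values are interpolated into a continuous profile, log-enhanced to highlight high-contribution residues, summarized by a few statistical features (standing in for the tsfresh feature set), and passed to one of the evaluated classifier types. The property table, feature list, and labels below are illustrative assumptions.

```python
# Illustrative PCP-interpolation encoding (assumed details, not the paper's code).
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.ensemble import RandomForestClassifier

# Hypothetical physicochemical property value per amino acid (placeholder numbers).
PCP = dict(zip("ACDEFGHIKLMNPQRSTVWY", np.linspace(-4.5, 4.5, 20)))

def encode(seq, n_points=128):
    """Interpolate discrete PCP values into a fixed-length continuous profile,
    then log-enhance so residues with large values dominate the features."""
    y = np.array([PCP[aa] for aa in seq])
    x = np.linspace(0, 1, len(y))
    profile = CubicSpline(x, y)(np.linspace(0, 1, n_points))
    profile = np.sign(profile) * np.log1p(np.abs(profile))  # "highlighting" step
    return np.array([profile.mean(), profile.std(), profile.min(),
                     profile.max(), np.abs(np.diff(profile)).mean()])

rng = np.random.default_rng(0)
aas = list("ACDEFGHIKLMNPQRSTVWY")
X = np.array([encode("".join(rng.choice(aas, rng.integers(50, 300))))
              for _ in range(40)])
y = rng.integers(0, 2, 40)  # placeholder DNA- vs RNA-binding labels
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```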
55 pages, 28544 KB  
Article
Spatial Flows of Information Entropy as Indicators of Climate Variability and Extremes
by Bernard Twaróg
Entropy 2025, 27(11), 1132; https://doi.org/10.3390/e27111132 - 31 Oct 2025
Abstract
The objective of this study is to analyze spatial entropy flows that reveal the directional dynamics of climate change—patterns that remain obscured in traditional statistical analyses. This approach enables the identification of pathways for “climate information transport”, highlights associations with atmospheric circulation types, and allows for the localization of both sources and “informational voids”—regions where entropy is dissipated. The analytical framework is grounded in a quantitative assessment of long-term climate variability across Europe over the period 1901–2010, utilizing Shannon entropy as a measure of atmospheric system uncertainty and variability. The underlying assumption is that the variability of temperature and precipitation reflects the inherently dynamic character of climate as a nonlinear system prone to fluctuations. The study focuses on calculating entropy estimated within a 70-year moving window for each calendar month, using bivariate distributions of temperature and precipitation modeled with copula functions. Marginal distributions were selected based on the Akaike Information Criterion (AIC). To improve the accuracy of the estimation, a block bootstrap resampling technique was applied, along with numerical integration to compute the Shannon entropy values at each of the 4165 grid points with a spatial resolution of 0.5° × 0.5°. The results indicate that entropy and its derivative are complementary indicators of atmospheric system instability—entropy proving effective in long-term diagnostics, while its derivative provides insight into the short-term forecasting of abrupt changes. A lag analysis and Spearman rank correlation between entropy values and their potential supported the investigation of how circulation variability influences the occurrence of extreme precipitation events. Particularly noteworthy is the temporal derivative of entropy, which revealed strong nonlinear relationships between local dynamic conditions and climatic extremes. A spatial analysis of the information entropy field was also conducted, revealing distinct structures with varying degrees of climatic complexity on a continental scale. This field appears to be clearly structured, reflecting not only the directional patterns of change but also the potential sources of meteorological fluctuations. A field-theory-based spatial classification allows for the identification of transitional regions—areas with heightened susceptibility to shifts in local dynamics—as well as entropy source and sink regions. The study is embedded within the Fokker–Planck formalism, wherein the change in the stochastic distribution characterizes the rate of entropy production. In this context, regions of positive divergence are interpreted as active generators of variability, while sink regions function as stabilizing zones that dampen fluctuations. Full article
(This article belongs to the Special Issue 25 Years of Sample Entropy)
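The abstract's core quantity can be illustrated in a few lines: the Shannon (differential) entropy H = -∬ f ln f of a bivariate temperature-precipitation density assembled from a copula and two marginals. The paper selects copula families and marginals per grid cell by AIC and adds block-bootstrap resampling; the Gaussian copula, the marginals, and rho below are stand-in assumptions.

```python
# Entropy of a copula-based bivariate density by grid quadrature (illustrative).
import numpy as np
from scipy.stats import norm, gamma

rho = 0.4                      # assumed copula dependence parameter
f_T = norm(loc=10, scale=4)    # assumed temperature marginal
f_P = gamma(a=2, scale=30)     # assumed precipitation marginal

def joint_pdf(t, p):
    x, y = norm.ppf(f_T.cdf(t)), norm.ppf(f_P.cdf(p))
    # Gaussian copula density c(u, v) evaluated at the normal scores
    c = np.exp(-(rho**2 * (x**2 + y**2) - 2 * rho * x * y)
               / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)
    return c * f_T.pdf(t) * f_P.pdf(p)

t = np.linspace(-10, 30, 400)
p = np.linspace(0.1, 300, 400)
T, P = np.meshgrid(t, p)
f = joint_pdf(T, P)
flogf = np.zeros_like(f)
flogf[f > 0] = f[f > 0] * np.log(f[f > 0])
H = -flogf.sum() * (t[1] - t[0]) * (p[1] - p[0])   # H = -sum f ln f dA
print(f"Shannon entropy ≈ {H:.2f} nats")
```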
14 pages, 878 KB  
Article
Genome Selection for Fleece Traits in Inner Mongolia Cashmere Goats Based on GWAS Prior Marker Information
by Huanfeng Yao, Na Wang, Yu Li, Gang He, Jin Ning, Shuai Kang, Yongbin Liu, Jinquan Li, Qi Lv, Ruijun Wang, Yanjun Zhang, Rui Su and Zhiying Wang
Animals 2025, 15(21), 3184; https://doi.org/10.3390/ani15213184 - 31 Oct 2025
Abstract
The Inner Mongolia Cashmere goat (IMCG) industry is a major contributor to global cashmere production, with fleece traits serving as key economic indicators that directly impact both income and the long-term sustainability of the industry. When genome-wide SNPs are used to estimate kinship matrices, the traditional animal model implicitly assumes that all SNPs have the same effect-size distribution. However, in practice, there are differences in the genetic mechanisms and complexity of different traits. We conducted a genome-wide association study (GWAS) on 2299 IMCGs genotyped with 67,021 SNPs, which were obtained after imputation. The traits measured included cashmere yield (CY), wool length (WL), cashmere length (CL), and cashmere diameter (CD), with a total of 33,564 records collected. The top 5% to 20% of the significant SNPs from the GWAS were used as biological prior information. We then assigned proportional weights based on their contribution to the overall genetic variance and further integrated them with the remaining loci to construct a kinship relationship matrix for estimating genetic parameters and genomic breeding value. By incorporating prior marker information from the GWAS, it was found that the heritability estimates for CY, WL, CL, and CD were 0.26, 0.37, 0.09, and 0.35, respectively. For CY and CL, integrating the top 5% of prior SNP markers yielded the highest genomic prediction accuracies of 0.742 and 0.673, representing improvements of 16.67% and 19.75% over models that did not utilize prior information. In contrast, for WL and CD, the highest accuracies of 0.851 and 0.780 were achieved by integrating the top 10% of prior SNP markers, reflecting improvements of 9.81% and 10.14%, respectively. Compared with the conventional GBLUP method, this method of integrating GWAS-derived prior markers for genomic genetic evaluation can significantly improve the accuracy of genomic prediction for fleece traits in IMCGs. This approach facilitates accurate selection for fleece traits in IMCGs, enabling accelerated genetic progress through long-term breeding programs. Full article
(This article belongs to the Section Animal Genetics and Genomics)
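The key construction here, a kinship matrix that up-weights GWAS-significant SNPs instead of treating all markers equally, can be sketched as a weighted VanRaden-style genomic relationship matrix. The weighting factor and the genotypes below are illustrative; the paper assigns weights proportional to each marker's contribution to genetic variance.

```python
# GWAS-weighted genomic relationship matrix, VanRaden-style (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_snps = 200, 5000
M = rng.integers(0, 3, (n_animals, n_snps)).astype(float)  # 0/1/2 genotype codes
p = M.mean(axis=0) / 2                                     # allele frequencies
Z = M - 2 * p                                              # centered genotypes

pvals = rng.uniform(size=n_snps)                 # placeholder GWAS p-values
top = np.argsort(pvals)[: int(0.05 * n_snps)]    # top 5% prior markers
w = np.ones(n_snps)
w[top] = 5.0                                     # assumed up-weighting factor
w *= n_snps / w.sum()                            # keep the mean weight at 1

G = (Z * w) @ Z.T / (2 * np.sum(w * p * (1 - p)))  # weighted GRM for GBLUP
print(G.shape, np.round(np.diag(G)[:3], 3))
```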
37 pages, 25662 KB  
Article
A Hyperspectral Remote Sensing Image Encryption Algorithm Based on a Novel Two-Dimensional Hyperchaotic Map
by Zongyue Bai, Qingzhan Zhao, Wenzhong Tian, Xuewen Wang, Jingyang Li and Yuzhen Wu
Entropy 2025, 27(11), 1117; https://doi.org/10.3390/e27111117 - 30 Oct 2025
Abstract
With the rapid advancement of hyperspectral remote sensing technology, the security of hyperspectral images (HSIs) has become a critical concern. However, traditional image encryption methods—designed primarily for grayscale or RGB images—fail to address the high dimensionality, large data volume, and spectral-domain characteristics inherent to HSIs. Existing chaotic encryption schemes often suffer from limited chaotic performance, narrow parameter ranges, and inadequate spectral protection, leaving HSIs vulnerable to spectral feature extraction and statistical attacks. To overcome these limitations, this paper proposes a novel hyperspectral image encryption algorithm based on a newly designed two-dimensional cross-coupled hyperchaotic map (2D-CSCM), which synergistically integrates Cubic, Sinusoidal, and Chebyshev maps. The 2D-CSCM exhibits superior hyperchaotic behavior, including a wider hyperchaotic parameter range, enhanced randomness, and higher complexity, as validated by Lyapunov exponents, sample entropy, and NIST tests. Building on this, a layered encryption framework is introduced: spectral-band scrambling to conceal spectral curves while preserving spatial structure, spatial pixel permutation to disrupt correlation, and a bit-level diffusion mechanism based on dynamic DNA encoding, specifically designed to secure high bit-depth digital number (DN) values (typically >8 bits). Experimental results on multiple HSI datasets demonstrate that the proposed algorithm achieves near-ideal information entropy (up to 15.8107 for 16-bit data), negligible adjacent-pixel correlation (below 0.01), and strong resistance to statistical, cropping, and differential attacks (NPCR ≈ 99.998%, UACI ≈ 33.30%). The algorithm not only ensures comprehensive encryption of both spectral and spatial information but also supports lossless decryption, offering a robust and practical solution for secure storage and transmission of hyperspectral remote sensing imagery. Full article
(This article belongs to the Section Signal and Data Analysis)
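The published 2D-CSCM couples Cubic, Sinusoidal, and Chebyshev maps and cannot be reconstructed from the abstract, so the sketch below substitutes a generic 2D coupled sine-logistic map just to show the overall scramble-then-diffuse pipeline for high bit-depth data: spectral-band scrambling, spatial pixel permutation, and 16-bit keystream diffusion (XOR standing in for the paper's dynamic DNA-coding diffusion). Map, key, and parameters are assumptions.

```python
# Generic chaotic scramble-then-diffuse sketch for a 16-bit hyperspectral cube.
import numpy as np

def chaotic_stream(n, x=0.41, y=0.73, a=0.9):
    """Iterate a simple 2D coupled sine-logistic map; returns n values in [0, 1]."""
    out = np.empty(n)
    for i in range(n):
        x = abs(np.sin(np.pi * (4 * a * x * (1 - x) + (1 - a) * np.sin(np.pi * y))))
        y = abs(np.sin(np.pi * (4 * a * y * (1 - y) + (1 - a) * np.sin(np.pi * x))))
        out[i] = x
    return out

def encrypt(cube, key=(0.41, 0.73)):
    b, h, w = cube.shape
    s = chaotic_stream(b + h * w + cube.size, *key)
    band_perm = np.argsort(s[:b])                    # spectral-band scrambling
    pix_perm = np.argsort(s[b:b + h * w])            # spatial pixel permutation
    ks = (s[b + h * w:] * 65535).astype(np.uint16)   # 16-bit keystream
    c = cube[band_perm].reshape(b, -1)[:, pix_perm]
    return (c.ravel() ^ ks).reshape(b, h, w)         # diffusion (XOR stand-in)

cube = np.random.default_rng(7).integers(0, 65536, (8, 32, 32), dtype=np.uint16)
enc = encrypt(cube)
print(enc.shape, enc.dtype)
```

Every step is keyed and invertible, consistent with the lossless decryption the paper reports.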
24 pages, 10593 KB  
Article
From Simulation to Implementation: Validating Flood Resilience Strategies in High-Density Coastal Cities—A Case Study of Macau
by Rui Zhang, Yangli Li, Chengfei Li and Tian Chen
Water 2025, 17(21), 3110; https://doi.org/10.3390/w17213110 - 30 Oct 2025
Abstract
Urban coastal areas are increasingly vulnerable to compound flooding due to the convergence of extreme rainfall, storm surges, and infrastructure aging, especially in high-density settings. This study proposes and empirically validates a multi-scale strategy for enhancing urban flood resilience in the Macau Peninsula, a densely built coastal city with complex flood exposure patterns. Building on a previously developed network-based resilience assessment framework, the study integrates hydrodynamic simulation and complex network analysis to evaluate the effectiveness of targeted interventions, including segmented storm surge defense barriers, drainage infrastructure upgrades, and spatially optimized low-impact development (LID) measures. The Macau Peninsula was partitioned into multiple shoreline defense zones, each guided by context-specific design principles and functional zoning. Based on our previously developed flood simulation framework covering extreme rainfall, storm surge, and compound events in high-density coastal zones, this study validates resilience strategies that achieve significant reductions in inundation extent, water depth, and recession time. Additionally, the network-based resilience index showed marked improvement in system connectivity and recovery efficiency, particularly under compound hazard conditions. The findings highlight the value of integrating spatial planning, ecological infrastructure, and systemic modeling to inform adaptive flood resilience strategies in compact coastal cities. The framework developed offers transferable insights for other urban regions confronting escalating hydrometeorological risks under climate change. Full article
(This article belongs to the Section Urban Water Management)
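The abstract does not spell out the resilience index, so purely as a generic illustration: one common network-based proxy scores a street network's connectivity by global efficiency and measures how much of it survives when inundated nodes are disabled. The toy grid and flooded zone below are assumptions, not the Macau model.

```python
# Generic connectivity-loss proxy for flood resilience (illustrative only).
import networkx as nx

G = nx.grid_2d_graph(10, 10)                 # toy street network
base = nx.global_efficiency(G)

flooded = [(r, c) for r in range(3) for c in range(3)]  # assumed inundated nodes
H = G.copy()
H.remove_nodes_from(flooded)
print(f"connectivity retained: {nx.global_efficiency(H) / base:.2%}")
```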
16 pages, 579 KB  
Article
IGSMNet: Ingredient-Guided Semantic Modeling Network for Food Nutrition Estimation
by Donglin Zhang, Weixiang Shi, Boyuan Ma, Weiqing Min and Xiao-Jun Wu
Foods 2025, 14(21), 3697; https://doi.org/10.3390/foods14213697 - 30 Oct 2025
Abstract
In recent years, food nutrition estimation has received growing attention due to its critical role in dietary analysis and public health. Traditional nutrition assessment methods often rely on manual measurements and expert knowledge, which are time-consuming and not easily scalable. With the advancement of computer vision, RGB-based methods have been proposed, and more recently, RGB-D-based approaches have further improved performance by incorporating depth information to capture spatial cues. While these methods have shown promising results, they still face challenges in complex food scenes, such as limited ability to distinguish visually similar items with different ingredients and insufficient modeling of spatial or semantic relationships. To solve these issues, we propose an Ingredient-Guided Semantic Modeling Network (IGSMNet) for food nutrition estimation. The method introduces an ingredient-guided module that encodes ingredient information using a pre-trained language model and aligns it with visual features via cross-modal attention. At the same time, an internal semantic modeling component is designed to enhance structural understanding through dynamic positional encoding and localized attention, allowing for fine-grained relational reasoning. On the Nutrition5k dataset, our method achieves PMAE values of 12.2% for Calories, 9.4% for Mass, 19.1% for Fat, 18.3% for Carb, and 16.0% for Protein. These results demonstrate that our IGSMNet consistently outperforms existing baselines, validating its effectiveness. Full article
(This article belongs to the Section Food Nutrition)
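The ingredient-guided module's central operation, aligning language-model ingredient embeddings with visual features via cross-modal attention, can be sketched with a standard attention layer. Dimensions, pooling, and the five-output regression head are illustrative assumptions, not the IGSMNet architecture itself.

```python
# Cross-modal attention sketch: ingredient tokens query visual patch features.
import torch
import torch.nn as nn

d = 256
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)
head = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 5))  # calories/mass/fat/carb/protein

visual = torch.randn(2, 196, d)     # e.g., 14x14 patch features from an RGB-D backbone
ingredient = torch.randn(2, 12, d)  # e.g., pre-trained LM embeddings of ingredients

fused, _ = cross_attn(query=ingredient, key=visual, value=visual)
pred = head(fused.mean(dim=1))      # pooled, ingredient-aware nutrition estimates
print(pred.shape)                   # torch.Size([2, 5])
```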
21 pages, 8124 KB  
Article
Design of Miniaturized Cooled Medium-Wave Infrared Curved Bionic Compound-Eye Optical System
by Fu Wang, Yinghao Chi, Linhan Li, Nengbin Cai, Yimin Zhang, Yang Yu, Sili Gao and Kaijun Ma
Photonics 2025, 12(11), 1071; https://doi.org/10.3390/photonics12111071 - 29 Oct 2025
Abstract
To address the issues of insufficient detector target size and high system complexity in infrared bionic compound-eye systems, this paper designs a miniaturized cooled medium-wave infrared curved bionic compound-eye optical system specifically for large target surface detectors and develops a proof-of-concept prototype for [...] Read more.
To address the issues of insufficient detector target size and high system complexity in infrared bionic compound-eye systems, this paper designs a miniaturized cooled medium-wave infrared curved bionic compound-eye optical system specifically for large target surface detectors and develops a proof-of-concept prototype for verification. The system comprises three components: (1) a curved multi-aperture array, which consists of 61 sub-apertures with an entrance pupil diameter of 5 mm and a focal length of 10 mm; (2) a cooled planar detector; and (3) a relay imaging system, which adopts secondary imaging technology and achieves the matching between the array and detector with only six infrared lenses. The fill factor is introduced to analyze light energy utilization efficiency, providing a theoretical basis for improving the system’s signal-to-noise ratio and spatial information collection capability; meanwhile, the focal length distribution and pupil matching are analyzed to ensure the system’s optical performance. The system operates within the 3.7–4.8 μm wavelength band, with a total focal length of 3.08 mm, F-number of 2, and field of view reaching 108°. Simulations demonstrate that all sub-aperture imaging channels have MTF values greater than 0.47 at 33.3 lp/mm, with distortion less than 3%. Imaging test results verify that the system possesses excellent imaging performance. Full article
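A quick first-order check of the stated parameters: each sub-aperture has a 5 mm entrance pupil and a 10 mm focal length, so its F-number is f/D = 2, matching the stated system F-number. The fill-factor estimate below treats the 61 sub-apertures as circles over an assumed flat array footprint; the footprint radius is not given in the abstract and is a placeholder.

```python
import math

f_sub, D_sub = 10.0, 5.0                 # mm, sub-aperture values from the abstract
print("sub-aperture F-number:", f_sub / D_sub)   # -> 2.0

n_apertures = 61
array_radius = 25.0                      # mm, assumed footprint radius (not stated)
fill = n_apertures * math.pi * (D_sub / 2) ** 2 / (math.pi * array_radius ** 2)
print(f"fill factor ≈ {fill:.2f}")       # fraction of the footprint collecting light
```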
31 pages, 2485 KB  
Article
DCBAN: A Dynamic Confidence Bayesian Adaptive Network for Reconstructing Visual Images from fMRI Signals
by Wenju Wang, Yuyang Cai, Renwei Zhang, Jiaqi Li, Zinuo Ye and Zhen Wang
Brain Sci. 2025, 15(11), 1166; https://doi.org/10.3390/brainsci15111166 - 29 Oct 2025
Abstract
Background: Current fMRI (functional magnetic resonance imaging)-driven brain information decoding for visual image reconstruction techniques faces issues such as poor structural fidelity, inadequate model generalization, and unnatural visual image reconstruction in complex scenarios. Methods: To address these challenges, this study proposes a Dynamic Confidence Bayesian Adaptive Network (DCBAN). In this network model, deep nested Singular Value Decomposition is introduced to embed low-rank constraints into the deep learning model layers for fine-grained feature extraction, thus improving structural fidelity. The proposed Bayesian Adaptive Fractional Ridge Regression module, based on singular value space, dynamically adjusts the regularization parameters, significantly enhancing the decoder’s generalization ability under complex stimulus conditions. The constructed Dynamic Confidence Adaptive Diffusion Model module incorporates a confidence network and time decay strategy, dynamically adjusting the semantic injection strength during the generation phase, further enhancing the details and naturalness of the generated images. Results: The proposed DCBAN method is applied to the NSD, outperforming state-of-the-art methods by 8.41%, 0.6%, and 4.8% in PixCorr (0.361), Incep (96.0%), and CLIP (97.8%), respectively, achieving the current best performance in both structural and semantic fMRI visual image reconstruction. Conclusions: The DCBAN proposed in this study offers a novel solution for reconstructing visual images from fMRI signals, significantly enhancing the robustness and generative quality of the reconstructed images. Full article
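The Bayesian Adaptive Fractional Ridge Regression module works in singular-value space; its underlying mechanics can be sketched with plain ridge regression solved through one SVD, which makes sweeping (or adaptively choosing) the regularization strength cheap. Data and the lambda grid are synthetic placeholders.

```python
# Ridge regression in singular-value space: one SVD serves all lambdas.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))                  # e.g., stimulus features
y = X @ rng.standard_normal(100) + rng.standard_normal(500)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uty = U.T @ y

def ridge_coef(lam):
    # beta(lam) = V diag(s / (s^2 + lam)) U^T y  (shrinkage per singular value)
    return Vt.T @ (s / (s**2 + lam) * Uty)

for lam in (0.1, 10.0, 1000.0):
    beta = ridge_coef(lam)
    print(lam, round(float(np.linalg.norm(y - X @ beta)), 2))
```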
28 pages, 1976 KB  
Article
ECG Signal Analysis and Abnormality Detection Application
by Ales Jandera, Yuliia Petryk, Martin Muzelak and Tomas Skovranek
Algorithms 2025, 18(11), 689; https://doi.org/10.3390/a18110689 - 29 Oct 2025
Abstract
The electrocardiogram (ECG) signal carries information crucial for health assessment, but its analysis can be challenging due to noise and signal variability; therefore, automated processing focused on noise removal and detection of key features is necessary. This paper introduces an ECG signal analysis and abnormality detection application developed to process single-lead ECG signals. In this study, the Lobachevsky University database (LUDB) was used as the source of ECG signals, as it includes annotated recordings using a multi-class, multi-label taxonomy that covers several diagnostic categories, each with specific diagnoses that reflect clinical ECG interpretation practices. The main aim of the paper is to provide a tool that efficiently filters noisy ECG data, accurately detects the QRS complex, PQ and QT intervals, calculates heart rate, and compares these values with normal ranges based on age and gender. Additionally, a multi-class, multi-label SVM-based model was developed and integrated into the application for heart abnormality diagnostics, i.e., assigning one or several diagnoses from various diagnostic categories. The MATLAB-based application is capable of processing raw ECG signals, allowing the use of ECG records not only from LUDB but also from other databases. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis: 2nd Edition)
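A minimal single-lead pipeline in the spirit the abstract describes: band-pass filtering to suppress baseline wander and noise, R-peak detection, and heart rate from R-R intervals. The 5–15 Hz band and the thresholds are common textbook choices, not the application's exact settings, and the test signal is synthetic.

```python
# Band-pass filter + peak picking for R-peak detection (illustrative pipeline).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 500                                             # Hz (LUDB sampling rate)
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)              # baseline-like component
ecg[(t * 1.2) % 1.0 < 0.02] += 1.0                   # crude R-spike train (~72 bpm)

b, a = butter(3, [5, 15], btype="bandpass", fs=fs)   # typical QRS energy band
filtered = filtfilt(b, a, ecg)

peaks, _ = find_peaks(filtered, height=0.3 * filtered.max(), distance=0.25 * fs)
hr = 60 * fs / np.median(np.diff(peaks))
print(f"{len(peaks)} R peaks, heart rate ≈ {hr:.0f} bpm")
```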
28 pages, 1624 KB  
Article
Domain-Constrained Stacking Framework for Credit Default Prediction
by Ming-Liang Ding, Yu-Liang Ma and Fu-Qiang You
Mathematics 2025, 13(21), 3451; https://doi.org/10.3390/math13213451 - 29 Oct 2025
Abstract
Accurate and reliable credit risk classification is fundamental to the stability of financial systems and the efficient allocation of capital. However, with the rapid expansion of customer information in both volume and complexity, traditional rule-based or purely statistical approaches have become increasingly inadequate. Motivated by these challenges, this study introduces a domain-constrained stacking ensemble framework that systematically integrates business knowledge with advanced machine learning techniques. First, domain heuristics are embedded at multiple stages of the pipeline: threshold-based outlier removal improves data quality, target variable redefinition ensures consistency with industry practice, and feature discretization with monotonicity verification enhances interpretability. Then, each variable is transformed through Weight-of-Evidence (WOE) encoding and evaluated via Information Value (IV), which enables robust feature selection and effective dimensionality reduction. Next, on this transformed feature space, we train logistic regression (LR), random forest (RF), extreme gradient boosting (XGBoost), and a two-layer stacking ensemble. Finally, the ensemble aggregates cross-validated out-of-fold predictions from LR, RF and XGBoost as meta-features, which are fused by a meta-level logistic regression, thereby capturing both linear and nonlinear relationships while mitigating overfitting. Experimental results across two credit datasets demonstrate that the proposed framework achieves superior predictive performance compared with single models, highlighting its potential as a practical solution for credit risk assessment in real-world financial applications. Full article
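The WOE/IV step the pipeline relies on is compact enough to show directly: each bin of a feature is encoded as ln(%good / %bad), and the feature's Information Value sums (%good − %bad) · WOE over bins, which then drives feature selection. The toy data are illustrative.

```python
# Weight-of-Evidence encoding and Information Value for one binned feature.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({"income_bin": rng.choice(["low", "mid", "high"], 1000),
                   "default": rng.integers(0, 2, 1000)})

grp = df.groupby("income_bin")["default"]
bad = grp.sum()
good = grp.count() - bad
pct_good, pct_bad = good / good.sum(), bad / bad.sum()
woe = np.log(pct_good / pct_bad)
iv = ((pct_good - pct_bad) * woe).sum()
print(woe.round(3).to_dict(), f"IV = {iv:.4f}")

df["income_woe"] = df["income_bin"].map(woe)  # WOE feature fed to LR/RF/XGBoost
```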
23 pages, 1456 KB  
Article
Progressive Prompt Generative Graph Convolutional Network for Aspect-Based Sentiment Quadruple Prediction
by Yun Feng and Mingwei Tang
Electronics 2025, 14(21), 4229; https://doi.org/10.3390/electronics14214229 - 29 Oct 2025
Abstract
Aspect-based sentiment quadruple prediction has important application value in the current information age. There are often implicit expressions and multi-level semantic relationships in sentences, making accurate prediction for existing methods still a complex and challenging task. To address the above problems, this paper proposes the Progressive Prompt-Driven Generative Graph Convolutional Network for Aspect-Based Sentiment Quadruple Prediction (ProPGCN). Firstly, a progressive prompt module is proposed. The module uses progressive prompt templates to generate paradigm expressions of corresponding orders and introduces third-order element prompt templates to associate high-order semantics in sentences, providing a bridge for modeling the final global semantics. Secondly, a graph convolutional relation-enhanced reasoning module is designed, which can make full use of contextual dependency information to enhance the recognition of implicit aspects and implicit opinions. In addition, a graph convolutional aggregation strategy is constructed. The strategy uses graph convolutional networks to aggregate adjacent node information and correct conflicting implicit logical relationships. Finally, experimental results show that the ProPGCN model can achieve state-of-the-art performance. Specifically, our ProPGCN model achieves overall F1 scores of 65.04% and 47.89% on the Restaurant and Laptop datasets, respectively, which represent improvements of +0.83% and +0.61% over the previous strongest generative baseline. Full article
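The aggregation strategy's core operation, pooling information from adjacent nodes, is the standard GCN update H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W); a bare numpy version is below. Graph, features, and weights are random placeholders rather than ProPGCN's actual modules.

```python
# One GCN layer: normalized adjacency aggregation over neighboring nodes.
import numpy as np

rng = np.random.default_rng(5)
n, d_in, d_out = 6, 16, 8
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1); A = A + A.T                  # symmetric dependency graph
A_hat = A + np.eye(n)                           # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
H = rng.standard_normal((n, d_in))              # node (token) features
W = rng.standard_normal((d_in, d_out))

H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU(Â H W)
print(H_next.shape)
```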
22 pages, 1270 KB  
Article
A Novel Family of CDF Estimators Under PPS Sampling: Computational, Theoretical, and Applied Perspectives
by Salman Shah, Eisa Mahmoudi, Hasnain Iftikhar, Paulo Canas Rodrigues, Ronny Ivan Gonzales Medina and Javier Linkolk López-Gonzales
Axioms 2025, 14(11), 796; https://doi.org/10.3390/axioms14110796 - 29 Oct 2025
Abstract
Accurate estimation of population distribution characteristics is a fundamental task in survey sampling and statistical inference. This paper introduces a new family of estimators for the cumulative distribution function (CDF) under probability proportional to size (PPS) sampling, incorporating auxiliary information to enhance efficiency. The proposed approach employs dual auxiliary variables in the estimation phase, while the sampling design relies on a single auxiliary variable. Theoretical properties, including bias and mean squared error (MSE), are rigorously derived to establish the efficiency of the new class. An extensive empirical evaluation using three distinct populations—fisheries data, wine chemistry data, and demographic records—demonstrates the superiority of the proposed estimators. In terms of accuracy, the best-performing proposed estimator achieves an MSE of 0.0012, compared to 0.0127 for the widely used GK estimator. Percentage relative efficiency (PRE) values further underscore these improvements, with gains ranging from 123% to over 328% across the three populations. Graphical comparisons confirm these trends, illustrating that the proposed estimators consistently dominate conventional approaches. Overall, the findings highlight both the theoretical soundness and practical utility of the proposed family, offering robust and computationally efficient improvements for CDF estimation in complex survey designs. Full article
(This article belongs to the Special Issue Computational Statistics and Its Applications, 2nd Edition)
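For orientation, the baseline such estimators improve on can be written down directly: under PPS with-replacement sampling, a Hansen–Hurwitz-type CDF estimator is F̂(t) = (1/(nN)) Σ I(y_i ≤ t)/p_i, with selection probabilities p_i proportional to an auxiliary size variable. The proposed family then folds additional auxiliary information into the estimation stage. The population below is synthetic.

```python
# Hansen-Hurwitz-type CDF estimation under PPS with replacement (illustrative).
import numpy as np

rng = np.random.default_rng(11)
N, n = 2000, 100
size = rng.gamma(2.0, 2.0, N)           # auxiliary size variable
y = 3 * size + rng.normal(0, 2, N)      # study variable correlated with size
p = size / size.sum()                   # PPS selection probabilities

idx = rng.choice(N, n, replace=True, p=p)
t = np.median(y)
F_hat = np.mean((y[idx] <= t) / (N * p[idx]))
print(f"estimate {F_hat:.3f} vs true {np.mean(y <= t):.3f}")
```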
23 pages, 3485 KB  
Article
MMA-Net: A Semantic Segmentation Network for High-Resolution Remote Sensing Images Based on Multimodal Fusion and Multi-Scale Multi-Attention Mechanisms
by Xuanxuan Huang, Xuejie Zhang, Longbao Wang, Dandan Yuan, Shufang Xu, Fengguang Zhou and Zhijun Zhou
Remote Sens. 2025, 17(21), 3572; https://doi.org/10.3390/rs17213572 - 28 Oct 2025
Abstract
Semantic segmentation of high-resolution remote sensing images is of great application value in fields like natural disaster monitoring. Current multimodal semantic segmentation methods have improved the model’s ability to recognize different ground objects and complex scenes by integrating multi-source remote sensing data. However, these methods still face challenges such as blurred boundary segmentation and insufficient perception of multi-scale ground objects when achieving high-precision classification. To address these issues, this paper proposes MMA-Net, a semantic segmentation network enhanced by two key modules: cross-layer multimodal fusion module and multi-scale multi-attention module. These modules effectively improve the model’s ability to capture detailed features and model multi-scale ground objects, thereby enhancing boundary segmentation accuracy, detail feature preservation, and consistency in multi-scale object segmentation. Specifically, the cross-layer multimodal fusion module adopts a staged fusion strategy to integrate detailed information and multimodal features, realizing detail preservation and modal synergy enhancement. The multi-scale multi-attention module combines cross-attention and self-attention to leverage long-range dependencies and inter-modal complementary relationships, strengthening the model’s feature representation for multi-scale ground objects. Experimental results show that MMA-Net outperforms state-of-the-art methods on the Potsdam and Vaihingen datasets. Its mIoU reaches 88.74% and 84.92% on the two datasets, respectively. Ablation experiments further verify that each proposed module contributes to the final performance. Full article
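The reported metric is mean Intersection-over-Union; for reference, the confusion-matrix computation commonly used for Potsdam/Vaihingen evaluation is sketched below with random stand-in predictions.

```python
# mIoU from a confusion matrix (predictions are random stand-ins).
import numpy as np

def mean_iou(pred, gt, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)     # row: truth, col: prediction
    inter = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - inter
    return np.mean(inter / np.maximum(union, 1))

rng = np.random.default_rng(2)
gt = rng.integers(0, 6, (256, 256))                  # e.g., 6 ISPRS classes
pred = gt.copy()
noise = rng.random(gt.shape) < 0.1
pred[noise] = rng.integers(0, 6, noise.sum())        # corrupt 10% of pixels
print(f"mIoU = {mean_iou(pred, gt, 6):.4f}")
```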
12 pages, 936 KB  
Protocol
Using Citizen Science to Address Out-of-Pocket Healthcare Expenditure with Aboriginal Communities in the Far West of South Australia: A Protocol
by Courtney Ryder, Ray Mahoney, Patrick Sharpe, Georga Sallows, Karla Canuto, Andrew Goodman, Julieann Coombes, Odette Pearson, Jaquelyne T. Hughes, Marlien Varnfield, Candice Oster, Jonathan Karnon, Claire Drummond, James A. Smith, Shanti Omodei-James, Lavender Otieno, Ali Soltani and Billie Bonevski
Int. J. Environ. Res. Public Health 2025, 22(11), 1640; https://doi.org/10.3390/ijerph22111640 - 28 Oct 2025
Abstract
Out-of-pocket health expenditure (OOPHE) significantly impacts people with chronic and complex diseases (CCDs) and injuries. Aboriginal communities experience a higher burden of CCDs and injury, along with greater OOPHE inequities. This project aims to develop and implement a social prescribing digital platform (Web App) to reduce OOPHE. It is grounded in citizen science approaches that value the lived experience and knowledge of Aboriginal people in shaping solutions. The project uses a citizen science methodology adapted for these communities, using knowledge interface methodology to weave together Indigenous and Western knowledges. Research methods (Indigenous, quantitative, qualitative) explore the relational nature of OOPHE risks and protective factors through co-design and workshops with Aboriginal participants to develop the Web App. A community-centric developmental evaluation guides the trial and refinement of the platform, allowing for ongoing learning and adaptation. Process measures inform a national scale-up and evaluation framework. Addressing OOPHE is essential to improving health and wellbeing for Aboriginal and Torres Strait Islander individuals and families living with or at risk of CCDs. This initiative aims to reduce the impact of OOPHE through digital social prescribing, thereby connecting people with essential community services to access healthcare, offering a scalable approach to addressing health inequities nationwide. Full article
(This article belongs to the Section Global Health)
27 pages, 8347 KB  
Article
Diversity Constraint and Adaptive Graph Multi-View Functional Matrix Completion
by Haiyan Gao and Youdi Bian
Axioms 2025, 14(11), 793; https://doi.org/10.3390/axioms14110793 - 28 Oct 2025
Abstract
The integrity of real-time monitoring data is paramount to the accuracy of scientific research and the reliability of decision-making. However, data incompleteness arising from transmission interruptions or extreme weather disrupting equipment operations severely compromises the validity of statistical analyses and the stability of modelling. From a mathematical viewpoint, real-time monitoring data may be regarded as continuous functions, exhibiting intricate correlations and mutual influences between different indicators. Leveraging their inherent smoothness and interdependencies enables high-precision data imputation. Within the functional data analysis framework, this paper proposes a Diversity Constraint and Adaptive Graph Multi-View Functional Matrix Completion (DCAGMFMC) method. Integrating multi-view learning with an adaptive graph strategy, this approach comprehensively accounts for complex correlations between data from different views while extracting differential information across views, thereby enhancing information utilization and imputation accuracy. Random simulation experiments demonstrate that the DCAGMFMC method exhibits significant imputation advantages over classical methods such as KNN, HFI, SFI, MVNFMC, and GRMFMC. Furthermore, practical applications on meteorological datasets reveal that, compared to these imputation methods, the root mean square error (RMSE), mean absolute error (MAE), and normalized root mean square error (NRMSE) of the DCAGMFMC method decreased by an average of 39.11% to 59.15%, 54.50% to 71.97%, and 43.96% to 63.70%, respectively. It also demonstrated stable imputation performance across various meteorological indicators and missing data rates, exhibiting good adaptability and practical value. Full article
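DCAGMFMC itself cannot be reproduced from the abstract; as a point of reference for what such completion methods improve upon, here is a minimal low-rank matrix completion baseline (SoftImpute-style iterative soft-thresholding of singular values). Data, threshold, and iteration count are illustrative.

```python
# SoftImpute-style matrix completion baseline (illustrative reference point).
import numpy as np

rng = np.random.default_rng(4)
true = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # rank-8 signal
mask = rng.random(true.shape) < 0.7                                 # 70% observed
X = np.where(mask, true, 0.0)

Z = X.copy()
for _ in range(200):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Z_low = U @ np.diag(np.maximum(s - 1.0, 0.0)) @ Vt  # soft-threshold singular values
    Z = np.where(mask, X, Z_low)                        # keep observed entries fixed

rmse = np.sqrt(np.mean((Z_low[~mask] - true[~mask]) ** 2))
print(f"held-out RMSE = {rmse:.3f}")
```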