Search Results (2,454)

Search Parameters:
Keywords = averaging aggregation

14 pages, 1123 KB  
Article
Relationship Between Mean Faecal Gastrointestinal Nematode Egg Excretion in Horses and Its Variability: Implications for Control
by Jacques Cabaret, Cristina Guerrero Molina, Cintli Martínez-Ortiz-de Montellano and Yazmin Alcala Canto
Pathogens 2026, 15(2), 156; https://doi.org/10.3390/pathogens15020156 (registering DOI) - 2 Feb 2026
Abstract
Faecal egg counts (FECs) are used to assess the intensity of gastrointestinal nematode (GIN) infections in herbivores. FEC distribution is aggregated, meaning that approximately 20% of animals harbour 80% of infections. In times of escalating anthelmintic resistance, it may be necessary to restrict treatment to the animals with the heaviest infections. This strategy is called targeted selective treatment (TST) and is relevant to GIN, for example. The difficulty lies in identifying which animals to treat. One solution is to select potentially at-risk animals based on age (for example, treating the young) or to perform individual faecal egg counts (though this is costly). We propose a solution for determining the suitability of selective treatment based on the level of FEC (200 or 500 eggs per gram of faeces). First, we demonstrated that the mean FEC in a group is strictly related to its variance (Taylor’s power law) using published data and our own unpublished data on horses from France, Poland, and Mexico. The study focused on small and large strongyles in horses. Taylor’s power law states that sample variance (Var) and the population mean are related by a simple equation: Var = a Mean^b or log(Var) = log(a) + b log(Mean). The influence of factors such as age, status (mare, stallion, yearling, etc.), day-to-day variability, and previous anthelmintic treatments did not alter this relationship. To reduce the number of FECs, we estimated the mean FEC on a composite faecal sample. We then calculated the variability and therefore the number of horses with an FEC above the chosen acceptable level. When the mean is high, the number of horses to be treated is also high and TST is not beneficial. When the FEC is average, TST may be worthwhile, either based on the FEC of individual horses or on the horse class at risk. Based on the percentage of horses with an FEC above the acceptable level, farmers can decide whether to treat all animals or establish a TST protocol. Caution should be exercised when using TST in the presence of large strongyles. Full article
(This article belongs to the Special Issue Parasitic Helminths and Control Strategies)
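Taylor's power law as quoted in the abstract can be checked with a short calculation. The sketch below is illustrative only: it uses made-up group means and variances rather than the authors' horse data, fits log(Var) = log(a) + b log(Mean) by least squares, and then, under a log-normal approximation for individual FECs (an assumption of this sketch, not a stated feature of the paper), estimates the fraction of horses exceeding a 200 eggs-per-gram threshold.

```python
# Minimal sketch of Taylor's power law fitting for faecal egg counts (FECs).
# Illustrative data only; not the datasets used in the article.
import numpy as np
from scipy import stats

# Hypothetical group means and variances of FECs (eggs per gram) for several herds.
group_means = np.array([50.0, 120.0, 300.0, 450.0, 800.0, 1200.0])
group_vars  = np.array([9.0e2, 6.5e3, 5.0e4, 1.1e5, 4.0e5, 9.5e5])

# Fit log(Var) = log(a) + b * log(Mean)  (Taylor's power law) by least squares.
slope_b, intercept, r, p, se = stats.linregress(np.log(group_means), np.log(group_vars))
a = np.exp(intercept)
print(f"Taylor's power law: Var ~ {a:.2f} * Mean^{slope_b:.2f} (r2 = {r**2:.3f})")

def fraction_above(mean_fec: float, threshold: float = 200.0) -> float:
    """Fraction of animals with FEC above `threshold`, assuming (for illustration only)
    a log-normal distribution of individual FECs with variance from the fitted law."""
    var = a * mean_fec ** slope_b
    sigma2 = np.log(1.0 + var / mean_fec**2)      # log-normal parameters from mean/variance
    mu = np.log(mean_fec) - 0.5 * sigma2
    return 1.0 - stats.lognorm.cdf(threshold, s=np.sqrt(sigma2), scale=np.exp(mu))

# If a composite sample gives a mean of 350 epg, roughly what share of horses exceed 200 epg?
print(f"Estimated fraction above 200 epg at mean 350 epg: {fraction_above(350.0):.1%}")
```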
14 pages, 278 KB  
Review
Cultivated Oral Mucosal Epithelial Transplantation for Limbal Stem Cell Deficiency: A Scoping Review of Indications, Platforms, Outcomes and Safety
by Konstantinos Papadopoulos, Mohamed Elalfy, Hasan Naveed, Sokratis Zormpas and Artemis Matsou
J. Clin. Med. 2026, 15(3), 1134; https://doi.org/10.3390/jcm15031134 (registering DOI) - 1 Feb 2026
Abstract
Background: Cultivated oral mucosal epithelial transplantation (COMET/CAOMECS) is an autologous, immunosuppression-sparing option for ocular surface reconstruction in limbal stem cell deficiency (LSCD). After two decades, indications, platforms and outcome definitions vary, and COMET’s position relative to limbal-derived epithelium remains uncertain. Methods: We conducted a PRISMA-ScR scoping review of human clinical studies (PubMed, 2000–30 December 2025) with hand-searching and regulatory sources. Eligible reports included COMET/CAOMECS series and comparative cohorts (CLET/ACLET, SLET, KLAL/CLAL). The primary outcome was anatomical success (stable epithelialised cornea without recurrent persistent epithelial defect, progressive conjunctivalisation or uncontrolled neovascularisation at last assessment). Given heterogeneity in definitions and analytic frames (fixed-time vs. Kaplan–Meier [KM]), results were synthesised narratively by indication and platform. Results: Twenty-five reports (893 eyes; 821 patients) were included. Aetiologies were predominantly burns and SJS/TEN. Across amniotic membrane-based mixed-aetiology series, 12-month anatomical success clustered around 55–70%. Aggregated descriptively across COMET eyes, 211/467 (45%) had a stable surface at last follow-up. Epithelialisation was generally rapid in quiet AM-based reconstructions and slower with severe adnexal disease or carrier-free platforms. Mean BCVA improved from 1.8 ± 0.7 to 1.4 ± 0.7 logMAR (471 eyes); ≥2-line gains occurred in 308/471 (65.4%). A matched comparison suggested better 12-month survival, less neovascularisation and better BCVA with substrate-free versus AM-carried COMET; a biomaterial-/feeder-free platform reconstructed most eyes but failed more often with four-quadrant symblepharon. Observational comparative cohorts suggested higher surface survival and average visual gain with limbal-derived epithelium, at the cost of systemic immunosuppression. Conclusions: In appropriately selected bilateral LSCD, COMET offers immunosuppression-sparing reconstruction with moderate, durable surface stability and clinically meaningful visual gains when performed on a quiet, optimised surface. Platform refinements—particularly substrate-free constructs—and prospective, indication-defined comparative studies with harmonised outcomes are needed to define COMET’s role relative to limbal-derived epithelium. Full article
17 pages, 2638 KB  
Article
Evaluation of Geotourism Potential Based on Spatial Pattern Analysis in Jiangxi Province, China
by Qiuxiang Cao, Haixia Deng, Lanshu Zheng, Qing Wang and Kai Xu
Sustainability 2026, 18(3), 1449; https://doi.org/10.3390/su18031449 (registering DOI) - 1 Feb 2026
Abstract
To provide essential information on geoheritage and geotourism potential in Jiangxi Province—a key region for geoheritage distribution in China—this study summarizes and categorizes the types, grades, and distribution characteristics of geoheritage within local communities. The primary analytical methods included average nearest neighbour analysis, kernel density estimation, and spatial autocorrelation to explore spatial distribution patterns. A total of 202 significant geoheritage sites were identified in Jiangxi Province. Furthermore, an evaluation index system was established using the entropy weight TOPSIS model to assess the geotourism potential of each city. The findings reveal the following: (1) Geoheritage sites in Jiangxi Province exhibit an overall aggregated spatial distribution, although clustering intensity varies among different geoheritage types and grades. (2) Considering both grade and category, the core distribution area of geoheritage is located in eastern Shangrao City, while global-level geoheritage sites are mainly concentrated in the Poyang Lake Plain. (3) Spatial autocorrelation analysis indicates that, except for global-level geoheritage sites, other geoheritage sites display significant spatial agglomeration with positive spatial correlation. Moreover, local-scale spatial association characteristics differ notably according to geoheritage type and grade. (4) The geotourism development potential across Jiangxi Province shows clear spatial differentiation, with higher potential concentrated in the eastern and southern regions. Full article
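Average nearest neighbour analysis, one of the methods named in the abstract, reduces to a ratio of observed to expected nearest-neighbour distances. The sketch below uses randomly generated coordinates and an assumed rectangular study area as stand-ins for the 202 geoheritage sites, so it only illustrates the statistic, not the paper's results.

```python
# Average nearest neighbour (ANN) ratio sketch: R < 1 suggests clustering,
# R ~ 1 randomness, R > 1 dispersion. Synthetic points, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(202, 2))    # stand-in site coordinates (km)
area = 100.0 * 100.0                               # study-area size (km^2), an assumption

# Observed mean nearest-neighbour distance.
diffs = points[:, None, :] - points[None, :, :]
dists = np.sqrt((diffs ** 2).sum(-1))
np.fill_diagonal(dists, np.inf)
d_obs = dists.min(axis=1).mean()

# Expected mean distance under complete spatial randomness: 0.5 / sqrt(n / A).
n = len(points)
d_exp = 0.5 / np.sqrt(n / area)

R = d_obs / d_exp
z = (d_obs - d_exp) / (0.26136 / np.sqrt(n ** 2 / area))   # standard ANN z-score
print(f"ANN ratio R = {R:.3f}, z = {z:.2f}  (R < 1 indicates an aggregated pattern)")
```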
26 pages, 10610 KB  
Article
Spatio-Temporal Dynamics, Driving Forces, and Location–Distance Attenuation Mechanisms of Beautiful Leisure Tourism Villages in China
by Xiaowei Wang, Jiaqi Mei, Zhu Mei, Hui Cheng, Wei Li, Linqiang Wang, Danling Chen, Yingying Wang and Zhongwen Gao
Land 2026, 15(2), 250; https://doi.org/10.3390/land15020250 (registering DOI) - 1 Feb 2026
Abstract
Beautiful Leisure Tourism Villages (BLTVs) represent an effective pathway for advancing high-quality rural industrial development and promoting comprehensive rural revitalization, and they are important for enriching new rural business formats and functions. The analysis is interpreted within an integrated location–distance attenuation framework. Using spatial clustering analysis, the geographical linkage rate, and geographically weighted regression, the spatio-temporal evolution of 1,982 BLTVs in China up to 2023 was examined to uncover the underlying driving mechanisms. The findings indicate that (1) the number of villages expanded in stages across China, with the most pronounced growth between 2014 and 2018, when an average of 124 new villages were added per year; the stage characteristics followed a clear "unipolar core-bipolar multi-core-bipolar network" development pattern; (2) the barycenters of the villages were all located in Nanyang City, Henan Province; they migrated from east to west and then back east, forming a push–pull migration trend; (3) the spatial distribution of villages was highly aggregated and showed marked regional heterogeneity along south–north and east–west gradients, with the highest concentration in the Jiangsu–Zhejiang (Jiangzhe) region and the lowest in the Ningxia Hui Autonomous Region; and (4) natural ecology, hydrological and climatic conditions, socioeconomic context, transportation accessibility, and resource endowment collectively shaped the spatial layout of villages, with pronounced spatial variation in the intensity of these driving factors. Overall, topography, social economy, traffic conditions, and precipitation had greater influences on the spatial distribution of villages in western China than in eastern China, whereas the effects of resource endowment and temperature were stronger in eastern China than in western China. These findings enhance the theoretical understanding of tourism-oriented rural development by integrating spatio-temporal evolution with a location–distance attenuation perspective and provide differentiated guidance for the sustainable development of BLTVs across regions. Full article
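Geographically weighted regression, one of the methods listed in the abstract, fits a separate weighted least-squares regression at each location so coefficients can vary over space. The sketch below is a bare-bones illustration on synthetic points with an arbitrary fixed Gaussian bandwidth; it is not the authors' model specification, driving-factor set, or bandwidth choice.

```python
# Minimal geographically weighted regression (GWR) sketch with a Gaussian kernel.
# Synthetic data and an assumed fixed bandwidth; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))          # village locations (stand-ins)
x = rng.normal(size=(n, 1))                        # one hypothetical driving factor
# The true coefficient drifts from west to east so GWR has spatial variation to recover.
beta_true = 0.5 + 0.02 * coords[:, 0]
y = 1.0 + beta_true * x[:, 0] + rng.normal(scale=0.2, size=n)

X = np.hstack([np.ones((n, 1)), x])                # intercept + predictor
bandwidth = 25.0                                   # fixed kernel bandwidth (assumption)

def local_coefficients(i: int) -> np.ndarray:
    """Weighted least squares at location i with Gaussian distance decay."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    XtW = X.T * w                                   # equivalent to X^T diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)

local_slopes = np.array([local_coefficients(i)[1] for i in range(n)])
print("local slope range:", local_slopes.min().round(2), "to", local_slopes.max().round(2))
```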
53 pages, 5532 KB  
Article
Neural Network Method for Detecting Low-Intensity DDoS Attacks with Stochastic Fragmentation and Its Adaptation to Law Enforcement Activities in the Cyber Protection of Critical Infrastructure Facilities
by Serhii Vladov, Victoria Vysotska, Łukasz Ścisło, Rafał Dymczyk, Oleksandr Posashkov, Mariia Nazarkevych, Oleksandr Yunin, Liliia Bobrishova and Yevheniia Pylypenko
Computers 2026, 15(2), 84; https://doi.org/10.3390/computers15020084 (registering DOI) - 1 Feb 2026
Abstract
This article develops a method for the early detection of low-intensity DDoS attacks based on a three-factor vector metric and implements an applied hybrid neural network traffic analysis system that combines preprocessing stages, competitive pretraining (SOM), a radial basis layer, and an associative Grossberg output, followed by gradient optimisation. The initial tools used are statistical online estimates (moving or EWMA estimates), CUSUM-like statistics for identifying small stable shifts, and deterministic signature filters. An algorithm has been developed that aggregates the components of fragmentation, reception intensity, and service availability into a single index. Key features include the physically interpretable features, a hybrid neural network architecture with associative stability and low computational complexity, and built-in mechanisms for adaptive threshold calibration and online training. An experimental evaluation of the developed method using real telemetry data demonstrated high recognition performance of the proposed approach (accuracy is 0.945, AUC is 0.965, F1 is 0.945, localisation accuracy is 0.895, with an average detection latency of 55 ms), with these results outperforming the compared CNN-LSTM and Transformer solutions. The scientific contribution of this study lies in the development of a robust, computationally efficient, and application-oriented solution for detecting low-intensity attacks with the ability to integrate into edge and SOC systems. Practical recommendations for reducing false positives and further improvements through low-training methods and hardware acceleration are also proposed. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
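The online statistics named in the abstract (EWMA smoothing and CUSUM-style shift detection) and their aggregation into a single index can be illustrated in a few lines. The sketch below uses invented traffic features, an unweighted combination, and arbitrary thresholds; the article's actual three-factor metric, weights, and calibration are not reproduced here.

```python
# Sketch: EWMA + one-sided CUSUM per feature, then a simple aggregated index.
# Synthetic traffic features and arbitrary constants; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
T = 500
# Three per-window features: fragmentation rate, request intensity, availability drop.
baseline = np.array([0.05, 1.0, 0.02])
x = rng.normal(loc=baseline, scale=0.02, size=(T, 3))
x[300:] += np.array([0.03, 0.15, 0.04])            # a small, low-intensity shift at t = 300

alpha, k, h = 0.1, 0.01, 0.3                       # EWMA weight, CUSUM slack, alarm threshold
ewma = baseline.copy()
cusum = np.zeros(3)
for t in range(T):
    ewma = alpha * x[t] + (1 - alpha) * ewma       # exponentially weighted moving average
    cusum = np.maximum(0.0, cusum + (x[t] - baseline) - k)   # one-sided CUSUM of upward drift
    index = cusum.mean()                           # naive aggregation of the three components
    if index > h:
        print(f"alarm at t = {t}, aggregated index = {index:.3f}")
        break
```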
27 pages, 2971 KB  
Article
Awake Insights for Obstructive Sleep Apnea: Severity Detection Using Tracheal Breathing Sounds and Meta-Model Analysis
by Ali Mohammad Alqudah and Zahra Moussavi
Diagnostics 2026, 16(3), 448; https://doi.org/10.3390/diagnostics16030448 (registering DOI) - 1 Feb 2026
Abstract
Background/Objectives: Obstructive sleep apnea (OSA) is a prevalent, yet underdiagnosed, disorder associated with cardiovascular and cognitive risks. While overnight polysomnography (PSG) remains the diagnostic gold standard, it is resource-intensive and impractical for large-scale rapid screening. Methods: This study extends prior work on feature extraction and binary classification using tracheal breathing sounds (TBS) and anthropometric data by introducing a meta-modeling framework that utilizes machine learning (ML) and aggregates six one-vs.-one classifiers for multi-class OSA severity prediction. We employed out-of-bag (OOB) estimation and three-fold cross-validation to assess model generalization performance. To enhance reliability, the framework incorporates conformal prediction to provide calibrated confidence sets. Results: In the three-class setting (non, mild, moderate/severe), the model achieved 76.7% test accuracy, 77.7% sensitivity, and 87.1% specificity, with strong OOB performance of 91.1% accuracy, 91.6% sensitivity, and 95.3% specificity. Three-fold cross-validation confirmed stable performance across folds (mean accuracy: 77.8%; mean sensitivity: 78.6%; mean specificity: 76.4%), and conformal prediction achieved full coverage with an average set size of 2. In the four-class setting (non, mild, moderate, severe), the model achieved 76.7% test accuracy, 75% sensitivity, and 92% specificity, with OOB performance of 88.2% accuracy, 91.6% sensitivity, and 88.2% specificity. Conclusions: These findings support the potential of this non-invasive system as an efficient and rapid tool for OSA severity assessment whilst awake, offering a scalable alternative to PSG for large-scale screening and clinical triaging. Full article
(This article belongs to the Special Issue Advances in Sleep and Respiratory Medicine)
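The aggregation of six one-vs.-one classifiers into a multi-class severity prediction can be sketched with scikit-learn. The fragment below trains pairwise classifiers on a synthetic four-class dataset (four classes give exactly six pairwise problems) and combines them by majority voting; the paper's actual TBS and anthropometric features, base learners, and conformal-prediction calibration are not reproduced here.

```python
# One-vs.-one aggregation sketch: 4 severity classes -> C(4,2) = 6 pairwise classifiers,
# combined by majority voting. Synthetic features stand in for the paper's inputs.
from itertools import combinations
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pairwise = {}
for a, b in combinations(range(4), 2):             # the six one-vs.-one problems
    mask = np.isin(y_tr, [a, b])
    pairwise[(a, b)] = LogisticRegression(max_iter=1000).fit(X_tr[mask], y_tr[mask])

# Meta-step: each pairwise model casts one vote per test sample; the majority class wins.
votes = np.zeros((len(X_te), 4), dtype=int)
for clf in pairwise.values():
    for i, p in enumerate(clf.predict(X_te)):
        votes[i, p] += 1
y_pred = votes.argmax(axis=1)
print("voting accuracy:", (y_pred == y_te).mean().round(3))
```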
34 pages, 5295 KB  
Article
Adaptive Local–Global Synergistic Perception Network for Hydraulic Concrete Surface Defect Detection
by Zhangjun Peng, Li Li, Chuanhao Chang, Mingfei Wan, Guoqiang Zheng, Zhiming Yue, Shuai Zhou and Zhigui Liu
Sensors 2026, 26(3), 923; https://doi.org/10.3390/s26030923 (registering DOI) - 31 Jan 2026
Abstract
Surface defects in hydraulic concrete structures exhibit extreme topological heterogeneity and are frequently obscured by unstructured environmental noise. Conventional detection models, constrained by fixed-grid convolutions, often fail to effectively capture these irregular geometries or suppress background artifacts. To address these challenges, this study proposes the Adaptive Local–Global Synergistic Perception Network (ALGSP-Net). First, to overcome geometric constraints, the Defect-aware Receptive Field Aggregation and Adaptive Dynamic Receptive Field modules are introduced. Instead of rigid sampling, this design adaptively modulates the receptive field to align with defect morphologies, ensuring the precise encapsulation of slender cracks and interlaced spalling. Second, a dual-stream gating fusion strategy is employed to mitigate semantic ambiguity. This mechanism leverages global context to calibrate local feature responses, effectively filtering background interference while enhancing cross-scale alignment. Experimental results on the self-constructed SDD-HCS dataset demonstrate that the method achieves an average Precision of 77.46% and an mAP50 of 72.78% across six defect categories. Comparative analysis confirms that ALGSP-Net outperforms state-of-the-art benchmarks in both accuracy and robustness, providing a reliable solution for the intelligent maintenance of hydraulic infrastructure. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
14 pages, 3020 KB  
Review
Endovascular Treatment of Crural Aneurysms: Case Report and Systematic Review Regarding Indications, Stent Characteristics, and Patency
by Abhay Setia, Roberto Scaratti, Maher Fattoum, Samir Khan and Farzin Adili
J. Vasc. Dis. 2026, 5(1), 6; https://doi.org/10.3390/jvd5010006 - 30 Jan 2026
Abstract
Background: We present our experience of carrying out endovascular therapy (EVT) of a pseudo-aneurysm of the posterior tibial artery (PTA) with an associated arteriovenous fistula (AVF). We also present results of a systematic review which was carried out to cast light on endovascular treatment modalities. Methods: A 31-year-old patient with a history of war trauma presented with pain of increasing severity in the lower leg. A CT angiogram confirmed an aneurysm of the PTA with an AVF. With a bidirectional endovascular approach, the aneurysm was occluded with coils and excluded with a Viabahn endoprosthesis. Aspirin and clopidogrel were recommended postoperatively. After 18 months of follow-up, the patient was free of symptoms, with patent endoprosthesis. Multiple databases (Scopus, Pubmed, Medline, OVID) were systematically searched using MeSH terms. The studies were scrutinized, and data on demographics, procedural details, and follow-up were collected and aggregated. Results: A total of 44 studies (56 patients) were eligible and were included. Average age was 50 (15–87 years). The most common etiology was trauma (iatrogenic 29/56 (51.7%); non-iatrogenic 15/56 (26.7%)). EVT strategies included coil embolization (n = 29), stent implantation (n = 25), and a combination of both (n = 2). Median stent diameter was 3 mm (2.5–6). The follow-up period ranged from 1 week to 60 months. Aggregated reported primary patency was 18/27 (66.6%) with no documented complications—an observation that likely reflects reporting and publication bias, rather than a true absence of adverse events. Conclusions: EVT offers a feasible and safe alternative to simple ligation or occlusion of crural aneurysms, to preserve distal flow to the foot. Dedicated stents for crural arteries are not available. Studies with long-term follow-up are lacking. Full article
(This article belongs to the Special Issue Peripheral Arterial Disease (PAD) and Innovative Treatments)
18 pages, 4933 KB  
Article
6DoF Pose Estimation of Transparent Objects: Dataset and Method
by Yunhe Wang, Ting Wu and Qin Zou
Sensors 2026, 26(3), 898; https://doi.org/10.3390/s26030898 - 29 Jan 2026
Abstract
6DoF pose estimation is one of the key technologies for robotic grasping. Due to the lack of texture, most existing 6DoF pose estimation methods perform poorly on transparent objects. In this work, a hierarchical feature fusion network, HFF6DoF, is proposed for 6DoF pose estimation of transparent objects. In HFF6DoF, appearance and geometry features are extracted from RGB-D images with a dual-branch network, and are hierarchically fused for information aggregation. A decoding module is introduced for semantic segmentation and keypoint vector-field prediction. Based on the results of semantic segmentation and keypoint prediction, 6DoF poses of transparent objects are calculated by using Random Sample Consensus (RANSAC) and Least-Squares Fitting. In addition, a new transparent-object 6DoF pose estimation dataset, TDoF20, is constructed, which consists of 61,886 pairs of RGB and depth images covering 20 types of objects. The experimental results show that the proposed HFF6DoF outperforms state-of-the-art approaches on the TDoF20 dataset by a large margin, achieving an average ADD of 50.5%. Full article
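The final pose-solving step described above (RANSAC combined with least-squares fitting) can be illustrated for the simpler 3D-3D case. The sketch below estimates a rigid transform from noisy 3D keypoint correspondences with a RANSAC loop around a Kabsch least-squares fit; it is a stand-in for, not a reproduction of, the HFF6DoF pipeline, which works from predicted keypoint vector fields.

```python
# RANSAC + least-squares (Kabsch) sketch for rigid pose from 3D-3D keypoint correspondences.
# Synthetic keypoints with a few outliers; illustrates only the final solving step.
import numpy as np
from scipy.spatial.transform import Rotation

def kabsch(P, Q):
    """Least-squares rotation R and translation t such that Q is approximately (R @ P.T).T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(3)
R_true = Rotation.random(random_state=3).as_matrix()
t_true = np.array([0.1, -0.2, 0.3])
P = rng.normal(size=(30, 3))                                  # model keypoints
Q = (R_true @ P.T).T + t_true + rng.normal(scale=0.005, size=P.shape)  # observed keypoints
Q[:6] += rng.normal(scale=0.5, size=(6, 3))                   # corrupt six correspondences

best_inliers = np.zeros(len(P), dtype=bool)
for _ in range(200):                                          # RANSAC hypotheses
    idx = rng.choice(len(P), size=4, replace=False)
    R, t = kabsch(P[idx], Q[idx])
    err = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
    inliers = err < 0.05                                      # inlier threshold (assumption)
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

R_hat, t_hat = kabsch(P[best_inliers], Q[best_inliers])       # final least-squares refit
angle_err = np.degrees(np.arccos(np.clip((np.trace(R_hat @ R_true.T) - 1) / 2, -1, 1)))
print(f"rotation error: {angle_err:.3f} deg, translation error: {np.linalg.norm(t_hat - t_true):.4f}")
```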
20 pages, 5710 KB  
Article
Steel Slag-Enhanced Cement-Stabilized Recycled Aggregate Bases: Mechanical Performance and PINN-Based Sulfate Diffusion Prediction
by Guodong Zeng, Hao Li, Yuyuan Deng, Xuancang Wang, Yang Fang and Haoxiang Liu
Materials 2026, 19(3), 546; https://doi.org/10.3390/ma19030546 - 29 Jan 2026
Abstract
The application of cement-stabilized recycled aggregate (CSR) in pavement bases is constrained by the high porosity and low strength of recycled aggregate (RA), whereas sulfate transport and durability mechanisms are less reported. To address this issue, this study incorporated high-strength and potentially reactive steel slag aggregate (SSA) into CSR to develop steel slag-enhanced cement-stabilized recycled aggregate (CSRS). The mechanical performance of the mixtures was evaluated through unconfined compressive strength (UCS) and indirect tensile strength (ITS) tests, and their durability was assessed using thermal shrinkage and sulfate resistance tests. In addition, a sulfate prediction model based on a physics-informed neural network (PINN) was developed. The results showed that, compared with CSR, the 7-day and 28-day UCS of CSRS increased by 6.7% and 16.0%, respectively, and the ITS increased by 4.3% and 5.9%. Thermal shrinkage tests indicated that CSR and CSRS, incorporating RA and SSA, exhibited slightly higher thermal shrinkage strain than cement-stabilized natural aggregate (CSN). During sulfate attack, SSA significantly improved the sulfate resistance of CSR, with the sulfate resistance coefficient of CSRS increasing by 18.8% compared to CSR. Furthermore, the PINN model predicted that, in 3%, 5%, and 7% sodium sulfate solutions, the sulfate concentration at a 1 mm depth in CSRS was reduced by 35.6%, 21.8%, and 29.4%, respectively, compared to CSR, with an average relative error below 14%, confirming its reliability. Therefore, these findings demonstrate that the incorporation of SSA markedly enhances the mechanical properties and sulfate resistance of CSR, and that the PINN model provides an effective tool for accurate simulation and prediction of sulfate diffusion. Full article
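A physics-informed neural network (PINN), as mentioned in the abstract, embeds a transport equation in the loss function. The sketch below, written with PyTorch, penalises the residual of a simple 1D Fickian diffusion equation dC/dt = D d2C/dx2 as a stand-in for sulfate transport; the article's actual governing equation, boundary data, and network size are not given here, so all of those choices are assumptions.

```python
# Minimal PINN sketch for 1D diffusion dC/dt = D * d2C/dx2 (stand-in for sulfate transport).
# All constants, boundary conditions, and the network size are assumptions of this sketch.
import torch

torch.manual_seed(0)
D = 1e-3                                            # assumed diffusion coefficient

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t):
    """Residual of dC/dt - D * d2C/dx2 at collocation points, via autograd."""
    c = net(torch.cat([x, t], dim=1))
    c_x, c_t = torch.autograd.grad(c.sum(), (x, t), create_graph=True)
    c_xx = torch.autograd.grad(c_x.sum(), x, create_graph=True)[0]
    return c_t - D * c_xx

for step in range(1000):
    x = torch.rand(256, 1, requires_grad=True)       # normalised depth in [0, 1]
    t = torch.rand(256, 1, requires_grad=True)       # normalised time in [0, 1]
    surface = torch.cat([torch.zeros(64, 1), torch.rand(64, 1)], dim=1)   # x = 0, any t
    initial = torch.cat([torch.rand(64, 1), torch.zeros(64, 1)], dim=1)   # any x, t = 0
    loss = (pde_residual(x, t) ** 2).mean()                    # physics residual
    loss = loss + ((net(surface) - 1.0) ** 2).mean()           # exposed surface held at C = 1
    loss = loss + (net(initial) ** 2).mean()                   # initially sulfate-free, C = 0
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final PINN loss:", float(loss))
```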
19 pages, 473 KB  
Article
Privacy Protection Optimization Method for Cloud Platforms Based on Federated Learning and Homomorphic Encryption
by Jing Wang and Yun Wang
Sensors 2026, 26(3), 890; https://doi.org/10.3390/s26030890 - 29 Jan 2026
Abstract
With the wide application of cloud computing in multi-tenant, heterogeneous-node, and high-concurrency environments, model parameters are exchanged frequently during distributed training, which easily leads to privacy leakage, communication redundancy, and reduced aggregation efficiency. To jointly optimize privacy protection and computing performance, this study proposes the Heterogeneous Federated Homomorphic Encryption Cloud (HFHE-Cloud) model, which integrates federated learning (FL) with homomorphic encryption to build a secure and efficient collaborative learning framework for cloud platforms. Without exposing the original data, the model mitigates the performance bottleneck caused by encryption computation and communication delay through hierarchical key mapping and a dynamic scheduling mechanism for heterogeneous nodes. The experimental results show that HFHE-Cloud is significantly superior in overall performance to five baseline models: Federated Averaging (FedAvg), Federated Proximal (FedProx), Federated Personalization (FedPer), Federated Normalized Averaging (FedNova), and Homomorphically Encrypted Federated Averaging (HE-FedAvg). In terms of privacy protection, the global accuracy reaches 94.25% and the loss stays within 0.09. In terms of computing performance, encryption and decryption time is shortened by about one third, and the encryption overhead is kept at 13%. In terms of distributed training efficiency, the number of communication rounds is reduced by about one fifth, and the node participation rate remains above 90%. These results verify the model's ability to achieve high security and high scalability in multi-tenant environments. The study offers cloud service providers and enterprise data holders a deployable solution that combines strong privacy protection with efficient collaborative training on real cloud platforms. Full article
(This article belongs to the Section Sensor Networks)
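Federated Averaging (FedAvg), the first baseline named in the abstract, aggregates client models by a data-size-weighted mean of their parameters. The sketch below shows only that plain-text aggregation step with NumPy; the paper's HFHE-Cloud additions (homomorphic encryption of the updates, hierarchical key mapping, and dynamic node scheduling) are not reproduced here.

```python
# FedAvg aggregation sketch: the server averages client weights, weighted by sample counts.
# Plain-text aggregation only; HFHE-Cloud would instead operate on encrypted updates.
import numpy as np

def fedavg(client_weights: list[list[np.ndarray]], client_sizes: list[int]) -> list[np.ndarray]:
    """Return the weighted average of per-layer parameter arrays across clients."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer])
        for w, size in zip(client_weights, client_sizes):
            acc += (size / total) * w[layer]
        averaged.append(acc)
    return averaged

# Three hypothetical clients, each holding a tiny two-layer model.
rng = np.random.default_rng(4)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(3)]
sizes = [1200, 300, 500]                            # local dataset sizes (assumptions)
global_model = fedavg(clients, sizes)
print("aggregated layer shapes:", [w.shape for w in global_model])
```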
32 pages, 4294 KB  
Article
Restricted Network Reconstruction from Time Series via Dempster–Shafer Evidence Theory
by Cai Zhang, Yishu Xian, Xiao Yuan, Meizhu Li and Qi Zhang
Entropy 2026, 28(2), 148; https://doi.org/10.3390/e28020148 - 28 Jan 2026
Abstract
As a fundamental mathematical model for complex systems, complex networks describe interactions among social, infrastructural, and biological systems. However, the complete connection structure is often unobservable, making topology reconstruction from limited data—such as time series of unit states—a crucial challenge. To address network reconstruction under sparse local observations, this paper proposes a novel framework that integrates epidemic dynamics with Dempster–Shafer (DS) evidence theory. The core of our method lies in a two-level belief fusion process: (1) Intra-node fusion, which aggregates multiple independent SIR simulation results from a single seed node to generate robust local evidence represented as Basic Probability Assignments (BPAs), effectively quantifying uncertainty; (2) Inter-node fusion, which orthogonally combines BPAs from multiple seed nodes using DS theory to synthesize a globally consistent network topology. This dual-fusion design enables the framework to handle uncertainty and conflict inherent in sparse, stochastic observations. Extensive experiments demonstrate the effectiveness and robustness of the proposed approach. It achieves stable and high reconstruction accuracy on both a synthetic 16-node benchmark network and the real-world Zachary’s Karate Club network. Furthermore, the method scales successfully to four large-scale real-world networks, attaining an average accuracy of 0.85, thereby confirming its practical applicability across networks of different scales and densities. Full article
(This article belongs to the Special Issue Recent Progress in Uncertainty Measures)
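Dempster's rule of combination, which drives both fusion levels described in the abstract, can be written out for a small frame of discernment. The sketch below combines two basic probability assignments over the frame {link, no-link} for a candidate edge; the masses are invented, and the SIR-simulation step that would produce them in the paper is omitted.

```python
# Dempster-Shafer combination sketch over the frame {"link", "no-link"}.
# The two input BPAs are invented; in the paper they would come from SIR simulations.
from itertools import product

FRAME = frozenset({"link", "no-link"})

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: orthogonal sum of two basic probability assignments (BPAs)."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}   # normalise out conflict

# Evidence from two seed nodes about the same candidate edge (hypothetical masses).
m_seed1 = {frozenset({"link"}): 0.6, frozenset({"no-link"}): 0.1, FRAME: 0.3}
m_seed2 = {frozenset({"link"}): 0.5, frozenset({"no-link"}): 0.2, FRAME: 0.3}

fused = combine(m_seed1, m_seed2)
for focal, mass in fused.items():
    print(set(focal), round(mass, 3))
```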
28 pages, 1964 KB  
Article
The Carbon Cost of Intelligence: A Domain-Specific Framework for Measuring AI Energy and Emissions
by Rashanjot Kaur, Triparna Kundu, Kathleen Marshall Park and Eugene Pinsky
Energies 2026, 19(3), 642; https://doi.org/10.3390/en19030642 - 26 Jan 2026
Abstract
The accelerating energy demands from artificial intelligence (AI) deployment introduce systemic challenges for achieving carbon neutrality. Large language models (LLMs) represent a dominant driver of AI energy consumption, with inference operations constituting 80–90% of total energy usage. Current energy benchmarks report aggregate metrics without domain-level breakdowns, preventing accurate carbon footprint estimation for workload-specific operations. This study addresses this critical gap by introducing a carbon-aware framework centered on the carbon cost of intelligence (CCI), a novel metric enabling workload-specific energy and carbon calculation that balances accuracy and efficiency across heterogeneous domains. This paper presents a comprehensive cross-domain energy benchmark using the massive multitask language understanding (MMLU) dataset, measuring accuracy and energy consumption in five representative domains: clinical knowledge (medicine), professional accounting (finance), professional law (legal), college computer science (technology), and general knowledge. Empirical analysis of GPT-4 across 100 MMLU questions, 20 per domain, reveals substantive variations: legal queries consume about 4.3 times as much energy as general knowledge queries (222 J vs. 52 J per query), with the variation across domains driven largely by differences in input length. Our analysis traces the evolution from simple ratio-based approaches (weighted accuracy divided by weighted energy) to harmonic mean aggregation, showing that the harmonic mean, by preventing bias from extreme values, provides more accurate carbon usage estimates. The CCI metric, calculated as a weighted harmonic mean of accuracy-to-energy (A/E) ratios (analogous to P/E ratios in finance), enables practitioners to estimate energy and carbon emissions for specific workload mixes (e.g., 80% medicine + 15% general + 5% law). Results demonstrate that the domain workload mix significantly impacts carbon footprint: a law firm workload (60% law) consumes 96% more energy per query than a hospital workload (80% medicine), representing 49% potential savings through workload optimization. Carbon footprint analysis using US Northeast grid intensity (320 gCO2e/kWh) shows domain-specific emissions ranging from 0.0046 to 0.0197 gCO2 per query. CCI is validated through comparison with a simple weighted average, with differences of up to 12.1%, confirming that the harmonic mean provides more accurate and conservative carbon estimates essential for carbon reporting and neutrality planning. Our findings provide a novel cross-domain energy benchmark for GPT-4 and establish a practical carbon calculator framework for sustainable AI deployment aligned with carbon neutrality goals. Full article
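The abstract's aggregation argument (weighted harmonic versus simple weighted mean of accuracy-to-energy ratios) is a few lines of arithmetic. The sketch below uses per-domain energies roughly in line with the figures quoted above, but the medicine energy, the accuracies, and the workload mix are made up, so the numbers illustrate the shape of the comparison rather than reproduce the paper's CCI values.

```python
# Weighted harmonic vs. arithmetic mean of accuracy-to-energy (A/E) ratios per domain.
# Law and general energies echo the abstract; the rest of the inputs are assumptions.
import numpy as np

domains  = ["medicine", "general", "law"]
energy_j = np.array([120.0, 52.0, 222.0])           # joules per query
accuracy = np.array([0.82, 0.88, 0.74])             # hypothetical per-domain accuracy
weights  = np.array([0.80, 0.15, 0.05])             # example workload mix: 80/15/5

ae = accuracy / energy_j                             # per-domain accuracy-to-energy ratio

arithmetic = float(np.sum(weights * ae))
harmonic   = float(np.sum(weights) / np.sum(weights / ae))

print(f"weighted arithmetic mean A/E: {arithmetic:.5f} per joule")
print(f"weighted harmonic  mean A/E: {harmonic:.5f} per joule")
print(f"relative difference: {abs(arithmetic - harmonic) / harmonic:.1%}")
# Expected energy per query for this mix, usable with a grid intensity for carbon estimates.
print(f"expected energy per query: {float(np.sum(weights * energy_j)):.1f} J")
```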
29 pages, 3431 KB  
Article
Evolution Mechanism of Volume Parameters and Gradation Optimization Method for Asphalt Mixtures Based on Dual-Domain Fractal Theory
by Bangyan Hu, Zhendong Qian, Fei Zhang and Yu Zhang
Materials 2026, 19(3), 488; https://doi.org/10.3390/ma19030488 - 26 Jan 2026
Abstract
The primary objective of this study is to bridge the gap between descriptive geometry and mechanistic design by establishing a dual-domain fractal framework to analyze the internal architecture of asphalt mixtures. This research quantitatively assesses the sensitivity of volumetric indicators—namely air voids (VV), voids in mineral aggregate (VMA), and voids filled with asphalt (VFA)—by employing the coarse aggregate fractal dimension (Dc), the fine aggregate fractal dimension (Df), and the coarse-to-fine ratio (k) through Grey Relational Analysis (GRA). The findings demonstrate that whereas Df and k substantially influence macro-volumetric parameters, the mesoscopic void fractal dimension (DV) remains structurally unchanged, indicating that gradation predominantly dictates void volume rather than geometric intricacy. Sensitivity rankings create a prevailing hierarchy: Process Control (Compaction) > Skeleton Regulation (Dc) > Phase Filling (Pb) > Gradation Adjustment (k, Df). Dc is recognized as the principal regulator of VMA, while binder content (Pb) governs VFA. A “Robust Design” methodology is suggested, emphasizing Dc to stabilize the mineral framework and reduce sensitivity to construction variations. A comparative investigation reveals that the optimized gradation (OG) achieves a more stable volumetric condition and enhanced mechanical performance relative to conventional empirical gradations. Specifically, the OG group demonstrated a substantial 112% enhancement in dynamic stability (2617 times/mm compared to 1230 times/mm) and a 75% increase in average film thickness (AFT), while ensuring consistent moisture and low-temperature resistance. In conclusion, this study transforms asphalt mixture design from empirical trial-and-error to a precision-engineered methodology, providing a robust instrument for optimizing the long-term durability of pavements in extreme cold and arid environments. Full article
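Grey Relational Analysis (GRA), used above to rank the sensitivity of the volumetric indicators, compares normalised factor sequences against a reference sequence. The sketch below computes grey relational grades for a small made-up matrix of factors against one response column; the paper's actual factor values, normalisation choices, and rankings are not reproduced.

```python
# Grey Relational Analysis (GRA) sketch: relational grade of each factor sequence
# against a reference (response) sequence. Data and labels are invented for illustration.
import numpy as np

rho = 0.5                                           # distinguishing coefficient (common default)

# Rows = hypothetical mixture designs; column 0 = response (e.g. VMA), columns 1-3 = factors.
data = np.array([
    [14.2, 2.41, 2.62, 0.85],
    [13.8, 2.44, 2.58, 0.92],
    [15.1, 2.38, 2.66, 0.78],
    [14.7, 2.40, 2.64, 0.81],
    [13.5, 2.46, 2.55, 0.95],
])
reference, factors = data[:, 0], data[:, 1:]

def normalise(col):
    return (col - col.min()) / (col.max() - col.min())

ref_n = normalise(reference)
fac_n = np.apply_along_axis(normalise, 0, factors)

delta = np.abs(fac_n - ref_n[:, None])              # absolute differences per factor
d_min, d_max = delta.min(), delta.max()
coeff = (d_min + rho * d_max) / (delta + rho * d_max)   # grey relational coefficients
grades = coeff.mean(axis=0)                         # grade = mean coefficient per factor

for name, g in zip(["Dc", "Df", "k"], grades):      # hypothetical factor labels
    print(f"grey relational grade of {name} vs. the response: {g:.3f}")
```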
17 pages, 3127 KB  
Article
Performance Enhancement of Non-Intrusive Load Monitoring Based on Adaptive Multi-Scale Attention Integration Module
by Guobing Pan, Tao Tian, Haipeng Wang, Zheyu Hu and Beining Lao
Electronics 2026, 15(3), 517; https://doi.org/10.3390/electronics15030517 - 25 Jan 2026
Abstract
Non-Intrusive Load Monitoring is an effective method for disaggregating the power consumption of individual appliances from the aggregate load data of a building. The advent of smart meters, Internet of Things devices, and artificial intelligence technologies has significantly advanced the capabilities of non-intrusive load monitoring. However, challenges such as varying sampling frequencies and measurement sensitivities remain. This paper introduces an innovative model incorporating an Adaptive Multi-Scale Attention Integration Module (AMSAIM) to address these issues. The model leverages deep learning and attention mechanisms to improve the accuracy and real-time performance of non-intrusive load monitoring. Validated on the standard UK-DALE dataset, the model consistently demonstrated superior performance. In seen scenarios, our model achieved average F1-scores approximating 0.94 and notably reduced Mean Absolute Error (MAE) values. For washing machines, it achieved an F1-score of 0.99 and MAE of 41.64, outperforming the next best method’s F1-score by 1 percentage point. In challenging unseen scenarios, the model showcased strong generalization, achieving an F1-score of 0.91 for washing machines and reducing MAE to 7.66. Furthermore, an ablation study rigorously confirmed the necessity of the AMSAIM module, showing that the synergistic integration of the efficient multi-scale attention (EMA) and the selective kernel (SK) adaptive receptive field unit is crucial for enhancing model robustness and generalization. Our results highlight the model’s potential for enhancing energy efficiency and providing actionable insights for energy management across various conditions. Full article
(This article belongs to the Special Issue AI Applications for Smart Grid)