Search Results (544)

Search Parameters:
Keywords = multi-granularity

26 pages, 4548 KB  
Article
Design and Experimentation of High-Throughput Granular Fertilizer Detection and Real-Time Precision Regulation System
by Li Ding, Feiyang Wu, Yuanyuan Li, Kaixuan Wang, Yechao Yuan, Bingjie Liu and Yufei Dou
Agriculture 2026, 16(3), 290; https://doi.org/10.3390/agriculture16030290 - 23 Jan 2026
Viewed by 168
Abstract
To address the challenge of imprecise detection and control of fertilizer application rates caused by high granular flow during fertilization operations, a parallel diversion detection method with real-time application rate regulation is proposed. The mechanism of uniform distribution of discrete particles formed by high-throughput aggregated granular fertilizer was elucidated. Key components including the uniform fertilizer tube, sensor detection structure, six-channel diversion cone disc, and fertilizer convergence tube underwent parametric design, culminating in the innovative development of a six-channel parallel diversion detection device. A multi-channel parallel signal detection method was studied, and a synchronous multi-channel signal acquisition system was designed. Through calibration tests, relationship models were established between the measured flow rate of granular fertilizer and voltage, as well as between the actual flow rate and the rotational speed of the fertilizer discharge shaft. A fuzzy PID control model was constructed in MATLAB 2023/Simulink. Using overshoot, response time, and stability as evaluation metrics, the control performance of traditional PID and fuzzy PID was compared and analyzed. To validate the control system’s precision, device performance tests were conducted. Results demonstrated that fuzzy PID control reduced the time required to reach steady state by 66.87% compared to traditional PID, while overshoot decreased from 7.38 g·s⁻¹ to 1.49 g·s⁻¹. Divergence uniformity tests revealed that at particle generation rates of 10, 20, 30, and 40 g·s⁻¹, the coefficient of variation for channel divergence consistency gradually increased with rising tilt angles. During field operations at 0–5.0° tilt, the coefficient of variation for channel divergence consistency remained below 7.72%.
Bench tests revealed that the fuzzy PID control system achieved an average accuracy improvement of 3.64% compared to traditional PID control, with a maximum response time of 0.9 s. Field trials demonstrated detection accuracy no less than 92.64% at normal field operation speeds of 3.0–6.0 km·h⁻¹. This system enables real-time, precise detection of fertilizer application rates and closed-loop regulation. Full article
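A minimal sketch of the closed-loop regulation idea, assuming a hypothetical first-order flow plant with invented gain, time constant, setpoint, and PID settings (this illustrates plain discrete PID control, not the authors' fuzzy PID design):

```python
# Sketch: discrete PID loop on a hypothetical first-order flow plant.
# Plant gain, time constant, setpoint, and PID gains are all invented.
def simulate_pid(kp, ki, kd, setpoint=30.0, dt=0.01, steps=2000,
                 gain=1.0, tau=0.5):
    y, integ, prev_err = 0.0, 0.0, setpoint
    trace = []
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                       # integral of error
        deriv = (err - prev_err) / dt           # derivative of error
        u = kp * err + ki * integ + kd * deriv  # control effort
        prev_err = err
        y += dt * (gain * u - y) / tau          # first-order plant update
        trace.append(y)
    return trace

trace = simulate_pid(kp=2.0, ki=5.0, kd=0.05)
overshoot = max(trace) - 30.0
```

The integral term drives the steady-state error to zero; a fuzzy PID, as in the paper, would additionally reschedule kp, ki, and kd online from the error and its rate of change.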

25 pages, 1109 KB  
Article
A Scenario-Robust Intuitionistic Fuzzy AHP–TOPSIS Model for Sustainable Healthcare Waste Treatment Selection: Evidence from Türkiye
by Pınar Özkurt
Sustainability 2026, 18(3), 1167; https://doi.org/10.3390/su18031167 - 23 Jan 2026
Viewed by 93
Abstract
Selecting a sustainable healthcare waste treatment method is a complex multi-criteria problem influenced by environmental, economic, social and technological factors. This study addresses key gaps in the literature by proposing an intuitionistic fuzzy AHP–TOPSIS framework that explicitly models cognitive uncertainty and expert hesitation, while demonstrating its application through a real-world case study in Adana, Türkiye. In contrast to prior studies utilizing fewer criteria, our framework evaluates four treatment alternatives—incineration, steam sterilization, microwave, and landfill—across 17 comprehensive criteria that directly integrate circular economy principles such as resource recovery and energy efficiency. The results indicate that steam sterilization is the most sustainable option, demonstrating superior performance across environmental, economic, social, and technological dimensions. A 15-scenario sensitivity analysis ensures ranking resilience across varying decision contexts. Furthermore, a systematic comparative analysis highlights the methodological advantages of the proposed framework in terms of analytical granularity and robustness compared to existing models. The study also offers step-by-step operational guidance, creating a transparent and policy-responsive decision-support tool for healthcare waste management authorities to advance sustainable practices. Full article
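The TOPSIS core of such a framework can be sketched in crisp form; the decision matrix, weights, and benefit/cost flags below are invented, and the paper's intuitionistic fuzzy extension would replace the crisp ratings with membership/non-membership pairs and derive the weights via AHP:

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True when larger is better.
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # Closeness coefficient: distance to the anti-ideal over total distance.
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# Invented ratings: 4 treatment alternatives scored on 3 criteria
# (criterion 2 is a cost criterion, the other two are benefit criteria).
scores = topsis(matrix=[[7, 3, 8], [9, 2, 6], [6, 4, 9], [5, 5, 5]],
                weights=[0.5, 0.2, 0.3],
                benefit=[True, False, True])
```

The alternative with the highest closeness coefficient is ranked best.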
26 pages, 11141 KB  
Article
MISA-Net: Multi-Scale Interaction and Supervised Attention Network for Remote-Sensing Image Change Detection
by Haoyu Yin, Junzhe Wang, Shengyan Liu, Yuqi Wang, Yi Liu, Tengyue Guo and Min Xia
Remote Sens. 2026, 18(2), 376; https://doi.org/10.3390/rs18020376 - 22 Jan 2026
Viewed by 33
Abstract
Change detection in remote sensing imagery plays a vital role in land use analysis, disaster assessment, and ecological monitoring. However, existing remote sensing change detection methods often lack a structured and tightly coupled interaction paradigm to jointly reconcile multi-scale representation, bi-temporal discrimination, and fine-grained boundary modeling under practical computational constraints. To address this fundamental challenge, we propose a Multi-scale Interaction and Supervised Attention Network (MISANet). To improve the model’s ability to perceive changes at multiple scales, we design a Progressive Multi-Scale Feature Fusion Module (PMFFM), which employs a progressive fusion strategy to effectively integrate multi-granular cross-scale features. To enhance the interaction between bi-temporal features, we introduce a Difference-guided Gated Attention Interaction (DGAI) module. This component leverages difference information between the two time phases and employs a gating mechanism to retain fine-grained details, thereby improving semantic consistency. Furthermore, to guide the model’s focus on change regions, we design a Supervised Attention Decoder Module (SADM). This module utilizes a channel–spatial joint attention mechanism to reweight the feature maps. In addition, a deep supervision strategy is incorporated to direct the model’s attention toward both fine-grained texture differences and high-level semantic changes during training. Experiments conducted on the LEVIR-CD, SYSU-CD, and GZ-CD datasets demonstrate the effectiveness of our method, achieving F1-scores of 91.19%, 82.25%, and 88.35%, respectively. Compared with the state-of-the-art BASNet model, MISANet achieves performance gains of 0.50% F1 and 0.85% IoU on LEVIR-CD, 2.13% F1 and 3.02% IoU on SYSU-CD, and 1.28% F1 and 2.03% IoU on GZ-CD. The proposed method demonstrates strong generalization capabilities and is applicable to various complex change detection scenarios. Full article
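The F1 and IoU figures quoted above both derive from per-pixel confusion counts and are linked by a fixed identity, F1 = 2·IoU/(1 + IoU); a small sketch with invented counts:

```python
def f1_iou(tp, fp, fn):
    # Per-pixel change-detection scores from confusion counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)   # true negatives enter neither score
    return f1, iou

f1, iou = f1_iou(tp=900, fp=80, fn=100)   # invented counts
```

Because of the identity, F1 and IoU always rank models the same way; reporting both, as the abstract does, mainly aids comparison with prior work.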

11 pages, 3164 KB  
Article
Influence of MgO Binder Regulation on the Interfacial Structure of Lithium Thermal Batteries
by Zhi-Yang Fan, Xiao-Min Wang, Wei-Yi Zhang, Li-Ke Cheng, Wen-Xiu Gao and Cheng-Yong Shu
C 2026, 12(1), 10; https://doi.org/10.3390/c12010010 - 22 Jan 2026
Viewed by 62
Abstract
Lithium thermal batteries are primary reserve batteries utilizing solid molten salt electrolytes. They are regarded as ideal power sources for high-reliability applications due to their high power density, rapid activation, long shelf life, wide operating temperature range, and excellent environmental adaptability. However, existing electrode systems are limited by insufficient conductivity and the use of high-impedance MgO binders. This results in sluggish electrode reaction kinetics and incomplete material conversion during high-temperature discharge, causing actual discharge capacities to fall far below theoretical values. To address this, FeS2-CoS2 multi-component composite cathode materials were synthesized via a high-temperature solid-phase method. Furthermore, two distinct MgO binders were systematically investigated: flake-like MgO (MgO-F) with a sheet-stacking structure and spherical MgO (MgO-S) with a low-tortuosity granular structure. Results indicate that while MgO-F offers superior electrolyte retention via physical confinement, its high tortuosity limits ionic conduction. In contrast, MgO-S facilitates the construction of a wettability-enhanced continuous ionic network, which effectively reduces interfacial impedance and enhances system conductivity. This regulation promoted Li+ migration and accelerated interfacial reaction kinetics. This study provides a feasible pathway for improving the electrochemical performance of lithium thermal batteries through morphology-oriented MgO binder regulation. Full article
(This article belongs to the Section Carbon Materials and Carbon Allotropes)

21 pages, 3313 KB  
Article
MGF-DTA: A Multi-Granularity Fusion Model for Drug–Target Binding Affinity Prediction
by Zheng Ni, Bo Wei and Yuni Zeng
Int. J. Mol. Sci. 2026, 27(2), 947; https://doi.org/10.3390/ijms27020947 - 18 Jan 2026
Viewed by 93
Abstract
Drug–target affinity (DTA) prediction is one of the core components of drug discovery. Despite considerable advances in previous research, DTA tasks still face several limitations: insufficient multi-modal information of drugs, the inherent sequence length limitation of protein language models, and single attention mechanisms that fail to capture critical multi-scale features. To alleviate the above limitations, we developed a multi-granularity fusion model for drug–target binding affinity prediction, termed MGF-DTA. This model is composed of three fusion modules, specifically as follows. First, the model extracts deep semantic features of SMILES strings through ChemBERTa-2 and integrates them with molecular fingerprints by using gated fusion to enhance the multi-modal information of drugs. In addition, it employs a residual fusion mechanism to integrate the global embeddings from ESM-2 with the local features obtained by the k-mer and principal component analysis (PCA) method. Finally, a hierarchical attention mechanism is employed to extract multi-granularity features from both drug SMILES strings and protein sequences. Comparative analysis with other mainstream methods on the Davis, KIBA, and BindingDB datasets reveals that the MGF-DTA model exhibits outstanding performance advantages. Further, ablation studies confirm the effectiveness of the model components and a case study illustrates its robust generalization capability. Full article
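A minimal sketch of the k-mer step used for local protein features; the PCA reduction and ESM-2 embeddings are omitted, and the sequence below is invented:

```python
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard amino acids

def kmer_features(seq, k=2):
    # Normalized k-mer frequency vector (20**k dimensions). In an
    # MGF-DTA-style pipeline such vectors would then be compressed with PCA.
    vocab = {"".join(p): i for i, p in enumerate(product(AMINO, repeat=k))}
    vec = [0.0] * len(vocab)
    windows = max(len(seq) - k + 1, 1)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in vocab:            # skip non-standard residues
            vec[vocab[kmer]] += 1.0 / windows
    return vec

v = kmer_features("MKTAYIAKQR", k=2)   # invented toy sequence
```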

25 pages, 3269 KB  
Article
Dynamic Carbon-Aware Scheduling for Electric Vehicle Fleets Using VMD-BSLO-CTL Forecasting and Multi-Objective MPC
by Hongyu Wang, Zhiyu Zhao, Kai Cui, Zixuan Meng, Bin Li, Wei Zhang and Wenwen Li
Energies 2026, 19(2), 456; https://doi.org/10.3390/en19020456 - 16 Jan 2026
Viewed by 135
Abstract
Accurate perception of dynamic carbon intensity is a prerequisite for low-carbon demand-side response. However, traditional grid-average carbon factors lack the spatio-temporal granularity required for real-time regulation. To address this, this paper proposes a “Prediction-Optimization” closed-loop framework for electric vehicle (EV) fleets. First, a hybrid forecasting model (VMD-BSLO-CTL) is constructed. By integrating Variational Mode Decomposition (VMD) with a CNN-Transformer-LSTM network optimized by the Blood-Sucking Leech Optimizer (BSLO), the model effectively captures multi-scale features. Validation on the UK National Grid dataset demonstrates its superior robustness against prediction horizon extension compared to state-of-the-art baselines. Second, a multi-objective Model Predictive Control (MPC) strategy is developed to guide EV charging. Applied to a real-world station-level scenario, the strategy navigates the trade-offs between user economy and grid stability. Simulation results show that the proposed framework simultaneously reduces economic costs by 4.17% and carbon emissions by 8.82%, while lowering the peak-valley difference by 6.46% and load variance by 11.34%. Finally, a cloud-edge collaborative deployment scheme indicates the engineering potential of the proposed approach for next-generation low-carbon energy management. Full article
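As a toy illustration of carbon-aware charging, energy can be packed into the lowest-carbon-intensity slots; this greedy rule is a hypothetical stand-in for the paper's VMD-BSLO-CTL forecasting and multi-objective MPC, and all numbers are invented:

```python
def schedule_charging(energy_kwh, carbon, cap_kwh):
    # Greedy stand-in for carbon-aware dispatch: fill the lowest-carbon
    # slots first, respecting a per-slot charging cap.
    plan = [0.0] * len(carbon)
    remaining = energy_kwh
    for i in sorted(range(len(carbon)), key=lambda j: carbon[j]):
        take = min(cap_kwh, remaining)
        plan[i] = take
        remaining -= take
        if remaining <= 0:
            break
    return plan

carbon = [320, 180, 150, 260, 400, 210]        # invented gCO2/kWh forecast
plan = schedule_charging(30.0, carbon, cap_kwh=10.0)
smart = sum(p * c for p, c in zip(plan, carbon))
uniform = sum(5.0 * c for c in carbon)         # same 30 kWh spread evenly
```

A real MPC would additionally trade off cost, peak-valley difference, and load variance over a receding horizon, as the abstract describes.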

27 pages, 3663 KB  
Article
Investigating Sustainable Development Trajectories in China (2006–2021): A Coupling Coordination Analysis of the Social, Economic, and Ecological Nexus
by Sirui Wang, Shisong Cao, Mingyi Du, Yue Liu and Yuxin Qian
Sustainability 2026, 18(2), 899; https://doi.org/10.3390/su18020899 - 15 Jan 2026
Viewed by 155
Abstract
The successful attainment of the Sustainable Development Goals (SDGs) necessitates robust monitoring frameworks capable of tracking progress toward tangible outcomes while capturing dynamic sustainability trajectories. However, existing SDG evaluation methods suffer from three critical limitations: (1) misalignment between global targets and national priorities, which undermines contextual relevance; (2) fragmented assessments that neglect holistic integration of social, economic, and ecological dimensions, thereby obscuring systemic interdependencies; and (3) insufficient longitudinal analysis, which restricts insights into temporal patterns of sustainable development and hinders adaptive policymaking. To address these gaps, we employed China’s 31 provinces as a case study and constructed an SDG indicator framework comprising 178 metrics—harmonizing global SDG benchmarks with China’s national development priorities. Using official statistics and open-source data spanning 2006–2021, we evaluate longitudinal SDG scores for all 17 goals (SDGs 1–17). Additionally, we developed a composite SDG index that considers the coupling coordination degree of the social–economic–ecological system and evaluated the index value under different economic region settings. Finally, we developed a two-threshold model to analyze the dynamic evolution of SDG conditions, incorporating temporal sustainability (long-term development resilience) and action urgency (short-term policy intervention needs) as dual evaluation dimensions. This model was applied to conduct a longitudinal analysis (2006–2021) across all 31 Chinese provinces, enabling a granular assessment of regional SDG trajectories while capturing both systemic trends and acute challenges over time. 
The results indicate that China’s social SDG performance improved substantially over the 2006–2021 period, achieving a cumulative increase of 126.53%, whereas progress in ecological SDGs was comparatively modest, with a cumulative growth of only 23.93%. Over the same period, the average composite SDG score across China’s 31 provinces increased markedly from 0.502 to 0.714, reflecting a strengthened systemic alignment between regional development trajectories and national sustainability objectives. Further analysis shows that all provinces attained a status of “temporal sustainability with low action urgency” throughout the study period, highlighting China’s overall progress in sustainable development. Nevertheless, pronounced regional disparities persist: eastern provinces developed earlier and have consistently maintained leading positions; central and northeastern regions exhibit broadly comparable development levels; and western regions, despite severe early-stage lagging, have demonstrated accelerated growth in later years. Our study holds substantial significance by integrating multi-dimensional indicators—spanning ecological, economic, and social dimensions—to deliver a holistic, longitudinal perspective on sustainable development. Full article
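A composite index of this kind typically builds on the coupling coordination degree; a sketch under the standard three-subsystem formulation, assuming equal subsystem weights (the paper's exact indicator construction and weighting are not reproduced here):

```python
def coupling_coordination(u, weights=None):
    # u: subsystem development indices in [0, 1], e.g. (social, economic,
    # ecological). Standard CCD formulation; weights are illustrative.
    n = len(u)
    weights = weights or [1.0 / n] * n
    mean = sum(u) / n
    prod = 1.0
    for x in u:
        prod *= x
    c = (prod / mean ** n) ** (1.0 / n)           # coupling degree C
    t = sum(w * x for w, x in zip(weights, u))    # comprehensive index T
    d = (c * t) ** 0.5                            # coordination degree D
    return c, t, d

c_eq, t_eq, d_eq = coupling_coordination([0.7, 0.7, 0.7])  # balanced case
c_un, t_un, d_un = coupling_coordination([0.9, 0.5, 0.7])  # unbalanced case
```

Perfectly balanced subsystems give C = 1, so D reduces to the square root of the comprehensive index; imbalance pushes C, and hence D, below that ceiling.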

31 pages, 2412 KB  
Article
Privacy-Preserving User Profiling Using MLP-Based Data Generalization
by Dardan Maraj, Renato Šoić, Antonia Žaja and Marin Vuković
Appl. Sci. 2026, 16(2), 848; https://doi.org/10.3390/app16020848 - 14 Jan 2026
Viewed by 123
Abstract
The rapid growth in Internet-based services has increased the demand for user data to enable personalized and adaptive digital experiences. These services typically require users to disclose various types of personal information, which are organized into user profiles and used to tailor content, recommendations, and accessibility settings. However, achieving an effective balance between personalization accuracy and user data protection remains a persistent and complex challenge. Excessive data disclosure raises the risk of re-identification and privacy breaches, while excessive anonymization can significantly diminish personalization and overall service quality. In this paper, we address this trade-off by proposing a context-aware learning-based data generalization framework that preserves user privacy while maintaining the functional usefulness of personal data. We first conduct a systematic classification of user data commonly collected into five main categories: demographic, location, accessibility, preference, and behavior data. To generalize these data categories dynamically and adaptively, we use a Multi-Layer Perceptron (MLP) model that learns patterns across heterogeneous data types. Unlike traditional rule-based generalization techniques, the MLP-based approach captures nonlinear relationships, adapts to heterogeneous data distributions, and scales efficiently with large datasets. The proposed MLP-based generalization method reduces the granularity of personal data, preserving privacy without significantly compromising information usefulness. Experimental results show that the proposed method reduces the risk of re-identification to approximately 35%, compared to non-anonymized data, where the re-identification risk is about 80–90%. These findings highlight the potential of learning-based data generalization as a strategy for privacy-preserving personalization in modern Internet services. 
They also show how the proposed generalization method can be applied in practice to transform user data while maintaining both utility and confidentiality. Full article
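The privacy effect of generalization can be illustrated with a uniqueness-based risk proxy; the rule-based generalizer below is a hypothetical stand-in for the paper's learned MLP, and the records are invented:

```python
from collections import Counter

def reident_risk(records, keys):
    # Fraction of records unique on the quasi-identifier tuple:
    # a crude proxy for re-identification risk.
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    return sum(combos[tuple(r[k] for k in keys)] == 1
               for r in records) / len(records)

def generalize(r):
    # Hypothetical rule-based stand-in for the learned MLP generalizer:
    # coarsen age to a decade band and city to a country code.
    region = {"Zagreb": "HR", "Split": "HR", "Graz": "AT", "Vienna": "AT"}
    return {"age": r["age"] // 10 * 10, "loc": region[r["loc"]]}

raw = [{"age": 34, "loc": "Zagreb"}, {"age": 37, "loc": "Split"},
       {"age": 34, "loc": "Graz"}, {"age": 52, "loc": "Vienna"}]
risk_raw = reident_risk(raw, ("age", "loc"))
risk_gen = reident_risk([generalize(r) for r in raw], ("age", "loc"))
```

Coarsening the quasi-identifiers merges records into larger equivalence classes, lowering the uniqueness-based risk while keeping the data usable for personalization.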
(This article belongs to the Special Issue Advances in Technologies for Data Privacy and Security)

13 pages, 1361 KB  
Article
Mitigating Write Amplification via Stream-Aware Block-Level Buffering in Multi-Stream SSDs
by Hyeonseob Kim and Taeseok Kim
Appl. Sci. 2026, 16(2), 838; https://doi.org/10.3390/app16020838 - 14 Jan 2026
Viewed by 151
Abstract
Write amplification factor (WAF) is a critical performance and endurance bottleneck in flash-based solid-state drives (SSDs). Multi-streamed SSDs mitigate WAF by enabling logical data streams to be written separately, thereby improving the efficiency of garbage collection. However, despite the architectural potential of multi-streaming, prior research has largely overlooked the design of write buffer management schemes tailored to this model. In this paper, we propose a stream-aware block-level write buffer management technique that leverages both spatial and temporal locality to further reduce WAF. Although the write buffer operates at the granularity of pages, eviction is performed at the block level, where each block is composed exclusively of pages from the same stream. All pages and blocks are tracked using least recently used (LRU) lists at both global and per-stream levels. To avoid mixing data with disparate hotness and update frequencies, pages from the same stream are dynamically grouped into logical blocks based on their recency order. When space is exhausted, eviction is triggered by selecting a full block of pages from the cold region of the global LRU list. This strategy prevents premature eviction of hot pages and aligns physical block composition with logical stream boundaries. The proposed approach enhances WAF and garbage collection efficiency without requiring hardware modification or device-specific extensions. Experimental results confirm that our design delivers consistent performance and endurance improvements across diverse multi-streamed I/O workloads. Full article
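A simplified sketch of stream-aware block-level eviction, assuming a toy page buffer; the capacity, block size, and policy details are illustrative, not the paper's exact design:

```python
from collections import OrderedDict, defaultdict

class StreamAwareBuffer:
    # Toy sketch: pages are buffered with their stream id; eviction flushes
    # one full block of same-stream pages found from the cold (oldest) end
    # of a global LRU list.
    def __init__(self, capacity, pages_per_block):
        self.cap, self.ppb = capacity, pages_per_block
        self.lru = OrderedDict()   # (stream, page) -> None, oldest first
        self.flushed = []          # blocks written out to flash

    def write(self, stream, page):
        key = (stream, page)
        if key in self.lru:
            self.lru.move_to_end(key)   # a rewrite refreshes recency
        else:
            self.lru[key] = None
        if len(self.lru) > self.cap:
            self._evict()

    def _evict(self):
        seen = defaultdict(list)
        victim = None
        for stream, page in self.lru:     # walk from the cold end
            seen[stream].append(page)
            if len(seen[stream]) == self.ppb:
                victim = (stream, tuple(seen[stream]))
                break
        if victim is None:                # no stream fills a block
            self.lru.popitem(last=False)  # drop the single oldest page
            return
        stream, pages = victim
        for p in pages:
            del self.lru[(stream, p)]
        self.flushed.append(victim)

buf = StreamAwareBuffer(capacity=4, pages_per_block=2)
for s, p in [(0, "a"), (1, "x"), (0, "b"), (1, "y"), (0, "c")]:
    buf.write(s, p)
```

On the fifth write, stream 0 is the first to accumulate a full cold block, so pages "a" and "b" are flushed together while the hotter pages stay buffered.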
(This article belongs to the Section Computing and Artificial Intelligence)

41 pages, 80556 KB  
Article
Why ROC-AUC Is Misleading for Highly Imbalanced Data: In-Depth Evaluation of MCC, F2-Score, H-Measure, and AUC-Based Metrics Across Diverse Classifiers
by Mehdi Imani, Majid Joudaki, Ayoub Bagheri and Hamid R. Arabnia
Technologies 2026, 14(1), 54; https://doi.org/10.3390/technologies14010054 - 10 Jan 2026
Viewed by 467
Abstract
This study re-evaluates ROC-AUC for binary classification under severe class imbalance (<3% positives). Despite its widespread use, ROC-AUC can mask operationally salient differences among classifiers when the costs of false positives and false negatives are asymmetric. Using three benchmarks, credit-card fraud detection (0.17%), yeast protein localization (1.35%), and ozone level detection (2.9%), we compare ROC-AUC with Matthews Correlation Coefficient, F2-score, H-measure, and PR-AUC. Our empirical analyses span 20 classifier–sampler configurations per dataset, combining four classifiers (Logistic Regression, Random Forest, XGBoost, and CatBoost) with four oversampling methods (SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN) plus a no-resampling baseline. ROC-AUC exhibits pronounced ceiling effects, yielding high scores even for underperforming models. In contrast, MCC and F2 align more closely with deployment-relevant costs and achieve the highest Kendall’s τ rank concordance across datasets; PR-AUC provides threshold-independent ranking, and H-measure integrates cost sensitivity. We quantify uncertainty and differences using stratified bootstrap confidence intervals, DeLong’s test for ROC-AUC, and Friedman–Nemenyi critical-difference diagrams, which collectively underscore the limited discriminative value of ROC-AUC in rare-event settings. The findings recommend a shift to a multi-metric evaluation framework: ROC-AUC should not be used as the primary metric in ultra-imbalanced settings; instead, MCC and F2 are recommended as primary indicators, supplemented by PR-AUC and H-measure where ranking granularity and principled cost integration are required. This evidence encourages researchers and practitioners to move beyond sole reliance on ROC-AUC when evaluating classifiers in highly imbalanced data. Full article
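The masking effect is easy to reproduce with invented confusion counts: at 0.2% positive prevalence a weak detector can post 99% accuracy (standing in for the ceiling effect here) while MCC and F2 stay low. A sketch of both metrics computed from raw counts:

```python
import math

def mcc_f2(tp, fp, fn, tn):
    # Matthews Correlation Coefficient and F2-score from a confusion matrix.
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = num / den if den else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    b2 = 4.0   # beta^2 = 4: F2 weights recall twice as much as precision
    f2 = ((1 + b2) * precision * recall / (b2 * precision + recall)
          if precision + recall else 0.0)
    return mcc, f2

# Invented counts at 0.2% prevalence: 20 positives among 10,000 samples.
mcc, f2 = mcc_f2(tp=10, fp=90, fn=10, tn=9890)
accuracy = (10 + 9890) / 10000   # 0.99 despite catching half the positives
```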

41 pages, 701 KB  
Review
New Trends in the Use of Artificial Intelligence and Natural Language Processing for Occupational Risks Prevention
by Natalia Orviz-Martínez, Efrén Pérez-Santín and José Ignacio López-Sánchez
Safety 2026, 12(1), 7; https://doi.org/10.3390/safety12010007 - 8 Jan 2026
Viewed by 266
Abstract
In an increasingly technologized and automated world, workplace safety and health remain a major global challenge. After decades of regulatory frameworks and substantial technical and organizational advances, the expanding interaction between humans and machines and the growing complexity of work systems are gaining importance. In parallel, the digitalization of Industry 4.0/5.0 is generating unprecedented volumes of safety-relevant data and new opportunities to move from reactive analysis to proactive, data-driven prevention. This review maps how artificial intelligence (AI), with a specific focus on natural language processing (NLP) and large language models (LLMs), is being applied to occupational risk prevention across sectors. A structured search of the Web of Science Core Collection (2013–October 2025) combined OSH-related terms with AI, NLP and LLM terms. After screening and full-text assessment, 123 studies were discussed. Early work relied on text mining and traditional machine learning to classify accident types and causes, extract risk factors and support incident analysis from free-text narratives. More recent contributions use deep learning to predict injury severity, potential serious injuries and fatalities (PSIF) and field risk control program (FRCP) levels and to fuse textual data with process, environmental and sensor information in multi-source risk models. The latest wave of studies deploys LLMs, retrieval-augmented generation and vision–language architectures to generate task-specific safety guidance, support accident investigation, map occupations and job tasks and monitor personal protective equipment (PPE) compliance. Together, these developments show that AI-, NLP- and LLM-based systems can exploit unstructured OSH information to provide more granular, timely and predictive safety insights.
However, the field is still constrained by data quality and bias, limited external validation, opacity, hallucinations and emerging regulatory and ethical requirements. In conclusion, this review positions AI and LLMs as tools to support human decision-making in OSH and outlines a research agenda centered on high-quality datasets and rigorous evaluation of fairness, robustness, explainability and governance. Full article
(This article belongs to the Special Issue Advances in Ergonomics and Safety)

35 pages, 1656 KB  
Review
Microgrid Optimization with Metaheuristic Algorithms—A Review of Technologies and Trends for Sustainable Energy Systems
by Ghassan Zubi and Sofoklis Makridis
Sustainability 2026, 18(2), 647; https://doi.org/10.3390/su18020647 - 8 Jan 2026
Viewed by 312
Abstract
Microgrids are evolving from simple hybrid systems into complex, multi-energy platforms with high-dimensional optimization challenges due to technological diversification, sector coupling, and increased data granularity. This review systematically examines the intersection of microgrid optimization and metaheuristic algorithms, focusing on the period from 2015 to 2025. We first trace the technological evolution of microgrids and identify the drivers of increased optimization complexity. We then provide a structured overview of metaheuristic algorithms—including evolutionary, swarm intelligence, physics-based, and human-inspired approaches—and discuss their suitability for high-dimensional search spaces. Through a comparative analysis of case studies, we demonstrate that metaheuristics such as genetic algorithms, particle swarm optimization, and the gray wolf optimizer can reduce the computation time to under 10% of that required by an exhaustive search while effectively handling multimodal, constrained objectives. The review further highlights the growing role of hybrid algorithms and the need to incorporate uncertainty into optimization models. We conclude that future microgrid design will increasingly rely on adaptive and hybrid metaheuristics, supported by standardized benchmark problems, to navigate the growing dimensionality and ensure resilient, cost-effective, and sustainable systems. This work provides a roadmap for researchers and practitioners in selecting and developing optimization frameworks for the next generation of microgrids. Full article
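A minimal genetic-algorithm sketch on an invented two-variable sizing problem (toy costs and constraints, not a real microgrid model), showing the tournament-selection, crossover, and mutation loop such optimizers share:

```python
import random

def cost(pv, batt):
    # Invented sizing objective: capital cost plus a heavy penalty when
    # generation plus storage falls short of a 40-unit demand.
    shortfall = max(0, 40 - (pv + batt))
    return 3.0 * pv + 2.0 * batt + 50.0 * shortfall

def genetic_search(pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.randint(0, 31), rng.randint(0, 31)) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=lambda s: cost(*s))  # tournament
            p2 = min(rng.sample(pop, 2), key=lambda s: cost(*s))
            child = (p1[0], p2[1])                # crossover: mix parent genes
            if rng.random() < 0.2:                # small random mutation
                child = (min(31, max(0, child[0] + rng.randint(-2, 2))),
                         min(31, max(0, child[1] + rng.randint(-2, 2))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda s: cost(*s))

best = genetic_search()
```

Real microgrid objectives add many more decision variables, nonlinear dispatch models, and uncertainty, which is where the hybrid and adaptive metaheuristics surveyed in the review come in.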

25 pages, 1075 KB  
Article
Prompt-Based Few-Shot Text Classification with Multi-Granularity Label Augmentation and Adaptive Verbalizer
by Deling Huang, Zanxiong Li, Jian Yu and Yulong Zhou
Information 2026, 17(1), 58; https://doi.org/10.3390/info17010058 - 8 Jan 2026
Abstract
Few-Shot Text Classification (FSTC) aims to classify text accurately into predefined categories using minimal training samples. Recently, prompt-tuning-based methods have achieved promising results by constructing verbalizers that map input data to the label space, thereby maximizing the utilization of pre-trained model features. However, existing verbalizer construction methods often rely on external knowledge bases, which require complex noise filtering and manual refinement, making the process time-consuming and labor-intensive, while approaches based on pre-trained language models (PLMs) frequently overlook inherent prediction biases. Furthermore, conventional data augmentation methods focus on modifying input instances while overlooking the integral role of label semantics in prompt tuning. This disconnection often leads to a trade-off where increased sample diversity comes at the cost of semantic consistency, resulting in marginal improvements. To address these limitations, this paper first proposes a novel Bayesian Mutual Information-based method that optimizes label mapping to retain general PLM features while reducing reliance on irrelevant or unfair attributes to mitigate latent biases. Based on this method, we propose two synergistic generators that synthesize semantically consistent samples by integrating label word information from the verbalizer to effectively enrich data distribution and alleviate sparsity. To guarantee the reliability of the augmented set, we propose a Low-Entropy Selector that serves as a semantic filter, retaining only high-confidence samples to safeguard the model against ambiguous supervision signals. Furthermore, we propose a Difficulty-Aware Adversarial Training framework that fosters generalized feature learning, enabling the model to withstand subtle input perturbations. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on most few-shot and full-data splits, with F1 score improvements of up to +2.8% on the standard AG’s News benchmark and +1.0% on the challenging DBPedia benchmark.
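To make the verbalizer idea concrete, here is a minimal sketch of how a verbalizer turns masked-LM predictions into class scores. The label words and the toy log-probability table are invented for illustration; the paper's Bayesian Mutual Information optimization of this mapping is not reproduced here.

```python
import math

# A verbalizer maps each class label to a set of label words whose
# [MASK]-position probabilities under a PLM stand in for the class.
verbalizer = {
    "World":  ["world", "international"],
    "Sports": ["sports", "game"],
}

# Pretend [MASK] log-probabilities from a PLM for one input text
# (hypothetical numbers, chosen only to make the example concrete).
mask_logprobs = {
    "world": -2.1, "international": -3.0,
    "sports": -0.4, "game": -1.2,
}

def class_scores(verbalizer, logprobs):
    """Average each class's label-word probabilities in probability space."""
    scores = {}
    for label, words in verbalizer.items():
        probs = [math.exp(logprobs[w]) for w in words]
        scores[label] = sum(probs) / len(probs)
    return scores

scores = class_scores(verbalizer, mask_logprobs)
prediction = max(scores, key=scores.get)  # -> "Sports" for this toy table
```

Averaging in probability space rather than log space is one of several common aggregation choices; which label words to include, and how to debias their prior frequencies, is exactly the verbalizer-construction problem the abstract describes.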

16 pages, 1970 KB  
Article
LSON-IP: Lightweight Sparse Occupancy Network for Instance Perception
by Xinwang Zheng, Yuhang Cai, Lu Yang, Chengyu Lu and Guangsong Yang
World Electr. Veh. J. 2026, 17(1), 31; https://doi.org/10.3390/wevj17010031 - 7 Jan 2026
Abstract
The high computational demand of dense voxel representations severely limits current vision-centric 3D semantic occupancy prediction methods, despite their capacity for granular scene understanding. This challenge is particularly acute in safety-critical applications like autonomous driving, where accurately perceiving dynamic instances often takes precedence over capturing the static background. This paper challenges the paradigm of dense prediction for such instance-focused tasks. We introduce the LSON-IP, a framework that strategically avoids the computational expense of dense 3D grids. LSON-IP operates on a sparse set of 3D instance queries, which are initialized directly from multi-view 2D images. These queries are then refined by our novel Sparse Instance Aggregator (SIA), an attention-based module. The SIA incorporates rich multi-view features while simultaneously modeling inter-query relationships to construct coherent object representations. Furthermore, to obviate the need for costly 3D annotations, we pioneer a Differentiable Sparse Rendering (DSR) technique. DSR innovatively defines a continuous field from the sparse voxel output, establishing a differentiable bridge between our sparse 3D representation and 2D supervision signals through volume rendering. Extensive experiments on major autonomous driving benchmarks, including SemanticKITTI and nuScenes, validate our approach. LSON-IP achieves strong performance on key dynamic instance categories and competitive overall semantic completion, all while reducing computational overhead by over 60% compared to dense baselines. Our work thus paves the way for efficient, high-fidelity instance-aware 3D perception.
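The efficiency argument rests on a small set of queries attending to a large bag of features instead of predicting every voxel. The sketch below shows that generic cross-attention pattern in NumPy; it is not the paper's actual SIA module, and all shapes and weights are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_query_attention(queries, feats, Wq, Wk, Wv):
    """One cross-attention step: a small set of instance queries pools
    information from a much larger bag of multi-view image features."""
    Q = queries @ Wq                                  # (n_queries, d)
    K = feats @ Wk                                    # (n_feats, d)
    V = feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))    # (n_queries, n_feats)
    return queries + attn @ V                         # residual query update

rng = np.random.default_rng(0)
d = 16
queries = rng.normal(size=(8, d))        # 8 sparse instance queries
feats = rng.normal(size=(1024, d))       # flattened multi-view features
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
refined = sparse_query_attention(queries, feats, Wq, Wk, Wv)
```

The cost here scales with n_queries × n_feats rather than with the number of voxels in a dense 3D grid, which is the source of the savings sparse query-based methods claim.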

19 pages, 950 KB  
Article
Edge Microservice Deployment and Management Using SDN-Enabled Whitebox Switches
by Mohamad Rahhal, Lluis Gifre, Pablo Armingol Robles, Javier Mateos Najari, Aitor Zabala, Manuel Angel Jimenez, Rafael Leira Osuna, Raul Muñoz, Oscar González de Dios and Ricard Vilalta
Electronics 2026, 15(1), 246; https://doi.org/10.3390/electronics15010246 - 5 Jan 2026
Abstract
This work advances a 6G-ready, micro-granular SDN fabric that unifies high-performance edge data planes with intent-driven, multi-domain orchestration and cloud offloading. First, edge and cell-site whiteboxes are upgraded with Smart Network Interface Cards and embedded AI accelerators, enabling line-rate processing of data flows and on-box learning/inference directly in the data plane. This pushes functions such as traffic classification, telemetry, and anomaly mitigation to the point of ingress, reducing latency and backhaul load. Second, an SDN controller, i.e., ETSI TeraFlowSDN, is extended to deliver multi-domain SDN orchestration with native lifecycle management (LCM) for whitebox Network Operating Systems—covering onboarding, configuration-drift control, rolling upgrades/rollbacks, and policy-guarded compliance—so operators can reliably manage heterogeneous edge fleets at scale. Third, the SDN controller incorporates a new NFV-O client that seamlessly offloads network services—such as ML pipelines or NOS components—to telco clouds via an NFV orchestrator (e.g., ETSI Open Source MANO), enabling elastic placement and scale-out across the edge–cloud continuum. Together, these contributions deliver an open, programmable platform that couples in-situ acceleration with closed-loop, intent-based orchestration and elastic cloud resources, targeting demonstrable gains in end-to-end latency, throughput, operational agility, and energy efficiency for emerging 6G services.
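The rolling-upgrade-with-rollback behavior mentioned under lifecycle management follows a common pattern: upgrade devices one at a time, health-check each, and restore previous state on failure. The sketch below is a generic illustration of that pattern only; the device names, health check, and upgrade step are hypothetical, and this is not the TeraFlowSDN API.

```python
def rolling_upgrade(devices, upgrade, healthy, max_failures=0):
    """Upgrade devices one at a time; roll back the failed device, and if
    failures exceed `max_failures`, roll back earlier devices and stop."""
    upgraded, failures = [], 0
    for dev in devices:
        rollback = upgrade(dev)          # returns a rollback handle
        if healthy(dev):
            upgraded.append((dev, rollback))
        else:
            failures += 1
            rollback()                   # restore this device
            if failures > max_failures:
                for _, rb in reversed(upgraded):
                    rb()                 # restore earlier devices too
                return False
    return True

# Toy fleet: NOS versions tracked in a dict; "sw2" refuses the upgrade.
versions = {"sw1": "1.0", "sw2": "1.0", "sw3": "1.0"}

def upgrade(dev):
    old = versions[dev]
    versions[dev] = "2.0" if dev != "sw2" else "broken"
    def rollback():
        versions[dev] = old
    return rollback

ok = rolling_upgrade(versions, upgrade, healthy=lambda d: versions[d] == "2.0")
```

With a zero failure budget, the failed upgrade of "sw2" aborts the rollout and the fleet is restored to its pre-upgrade state, which is the invariant policy-guarded LCM aims to preserve.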
(This article belongs to the Special Issue Optical Networking and Computing)
