
Search Results (1,257)

Search Parameters:
Keywords = novel environment task

32 pages, 8696 KB  
Article
Phase-Aware Hierarchical Reinforcement Learning with Dynamic Human–AI Authority Allocation for Mountain Search and Rescue
by Chenzhe Zhong, Bo Liu, Wei Zhu, Dongxu Dai and Yu Jiang
Drones 2026, 10(4), 229; https://doi.org/10.3390/drones10040229 - 24 Mar 2026
Abstract
Search and rescue (SAR) operations in mountainous terrain present significant challenges due to complex environments, time-critical decisions, and the need for effective human–AI collaboration. Existing approaches typically employ either fully autonomous systems that lack adaptability to varying task requirements, or fixed human–AI authority allocations that fail to leverage the distinct strengths of humans and AI across different mission phases. This paper proposes Phase-Aware Hierarchical Reinforcement Learning (PAHRL), a novel framework that dynamically allocates decision-making authority between human operators and AI agents based on identified task phases. First, we formulate the mountain SAR problem as a three-phase task structure: Wide Search (WS), Target Confirmation (TC), and Rescue Coordination (RC), and examine the consistency of this decomposition through unsupervised clustering analysis, supported by bootstrap stability (ARI = 0.983 ± 0.083) and multiple clustering metrics. Second, we design an adaptive authority mechanism with four levels (L1: Human-Led to L4: Full-Auto) that automatically adjusts human involvement based on current phase characteristics and environmental uncertainty estimates. Third, we introduce a priority-based task execution module that ensures efficient resource allocation across multiple rescue objectives while respecting authority constraints. Extensive experiments demonstrate that PAHRL outperforms baseline methods, achieving a 20.9% higher success rate compared to standard PPO (59.0% vs. 48.8%) and 66.7% improvement over heuristic approaches. PAHRL maintains 96.9% precision even under 60% noise conditions with only 0.09 false rescues per episode. Ablation studies further reveal that phase awareness serves as a critical robustness mechanism; removing phase detection causes complete mission failure under noisy conditions. 
These results indicate that phase-aware dynamic authority allocation significantly enhances both efficiency and robustness in human–AI collaborative SAR missions. While demonstrated in a proof-of-concept simulation with computational human models, validation with real operators and in more complex environments remains essential before operational deployment. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
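The abstract above reports the stability of its three-phase clustering via a bootstrap adjusted Rand index (ARI = 0.983 ± 0.083). A minimal sketch of that kind of bootstrap stability check, assuming scikit-learn's KMeans and `adjusted_rand_score`; the toy data and resampling scheme are illustrative assumptions, not the authors' protocol:

```python
# Bootstrap clustering-stability estimate with the adjusted Rand index (ARI),
# in the spirit of the PAHRL phase-consistency check. Data, cluster count,
# and resample scheme are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Toy "task phase" features: three well-separated blobs.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

base = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

aris = []
for _ in range(20):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    boot = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[idx])
    # Compare resampled-cluster labels to the base labels on the same points;
    # ARI is invariant to label permutation, so relabeling is unnecessary.
    aris.append(adjusted_rand_score(base[idx], boot))

print(f"ARI = {np.mean(aris):.3f} ± {np.std(aris):.3f}")
```

A stable decomposition yields a mean ARI near 1 with a small spread, which is how a number like 0.983 ± 0.083 would be read.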

23 pages, 1038 KB  
Article
The Age of Generative AI Model for Fresh Industrial AIGC Services: A Hybrid-Action Multi-Agent DRL Approach
by Wenjing Li, Ni Tian and Long Zhang
Future Internet 2026, 18(3), 172; https://doi.org/10.3390/fi18030172 - 23 Mar 2026
Abstract
To meet the growing demand for autonomous decision-making and real-time optimization in industrial manufacturing, integrating Artificial Intelligence-Generated Content (AIGC) services with Industry 5.0 can enable real-time industrial intelligence. The effectiveness of a generative model is closely related to the current state of the production environment. However, existing studies often ignore the dynamic temporal relationship between generative models and production environments, especially in industrial scenarios with large model transmission delays and random AIGC task arrivals. Therefore, we define a novel metric, namely the Age of Model (AoM), to measure the freshness of generative models with respect to current industrial tasks. We then formulate an average-AoM-minimization problem that jointly considers LoRA-based fine-tuning, wireless transmission and resource allocation. To solve this problem, we propose a Hybrid-Action Multi-Agent Proximal Policy Optimization (HA-MAPPO) algorithm. The proposed algorithm follows the centralized training and decentralized execution (CTDE) paradigm and introduces a Main-Agent Priority State Strategy to support coordinated training and independent execution. In addition, a multi-head output structure is designed to handle the hybrid-action space, which includes discrete fine-tuning association decisions and continuous transmission resource allocation. Simulation results show that the proposed scheme outperforms all benchmark methods. Specifically, the cumulative rewards are improved by approximately 11.13%, 20.32%, 36.61%, and 38.78% compared with the four benchmark algorithms, respectively. These results demonstrate that the proposed scheme can significantly reduce the average AoM while providing high-quality and timely industrial AIGC services. Full article
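The Age of Model (AoM) metric described above follows the age-of-information pattern: the deployed model's age grows with time and resets when a freshly fine-tuned model is deployed. A minimal sketch under that reading; the deployment timestamps and sampling step are invented, and the paper's exact AoM definition may differ in detail:

```python
# Age-of-Model style freshness metric: age grows linearly and resets to zero
# whenever a fine-tuned model finishes deployment. Times are illustrative.
def average_aom(deploy_times, horizon, dt=1.0):
    """Mean model age over [0, horizon), sampled every dt time units."""
    age, t, total, n = 0.0, 0.0, 0.0, 0
    deploys = sorted(deploy_times)
    while t < horizon:
        while deploys and deploys[0] <= t:
            deploys.pop(0)
            age = 0.0                     # fresh model takes over
        total += age
        n += 1
        t += dt
        age += dt
    return total / n

# Two model deployments at t=5 and t=10 over a 20-unit horizon.
print(average_aom([5.0, 10.0], horizon=20.0))  # -> 3.25
```

Minimizing this average, jointly with fine-tuning and transmission decisions, is the optimization target the HA-MAPPO agents are trained on.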

19 pages, 992 KB  
Article
Hybrid Music Similarity with Hypergraph and Siamese Network
by Sera Kim, Youngjun Kim, Jaewon Lee and Dalwon Jang
Big Data Cogn. Comput. 2026, 10(3), 96; https://doi.org/10.3390/bdcc10030096 - 21 Mar 2026
Abstract
This paper proposes a novel method for measuring music similarity. Existing music similarity measurements have often been used for music appreciation, but this paper proposes a method for measuring the similarity between music samples which are used for music production. Conventional music recommendation approaches often rely on either metadata-based similarity or audio-based feature similarity in isolation, which limits their effectiveness in sample-based recommendation scenarios where both compositional context and acoustic characteristics are important. To address this limitation, the proposed framework combines a hypergraph-based information similarity module with a feature-based similarity module learned using Siamese networks and triplet loss. In the information-based module, metadata attributes such as beats per minute (BPM), genre, chord, key, and instrument are modeled as vertices in a hypergraph, and Random Walk–Word2Vec embeddings are learned to capture structural relationships between music samples and their attributes. In parallel, the feature-based module employs vertex-specific Siamese networks trained on instrument and key classification tasks to learn perceptual similarity directly from audio signals. The two modules are trained independently and jointly utilized at the recommendation stage to provide attribute-specific similarity results for a given query sample. Results show that the proposed system achieves high Precision@k across multiple attributes and forms stable similarity structures in the embedding space, even without relying on user interaction data. These results reflect embedding consistency evaluated over the entire dataset where training and retrieval are performed on the same sample pool, rather than generalization to unseen samples. 
These results demonstrate that the proposed hybrid framework effectively captures both structural and perceptual similarity among music samples and is well suited for sample-based music recommendation in music production environments. Full article
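The feature-based module above is trained with a triplet loss: pull an anchor embedding toward a positive (same attribute) sample and push it from a negative, up to a margin. A minimal NumPy sketch of that loss; the embeddings are random stand-ins, not learned Siamese-network outputs, and the margin value is an assumption:

```python
# Triplet loss on embedding vectors: encourages d(anchor, positive) to be at
# least `margin` smaller than d(anchor, negative). Vectors are stand-ins.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(1)
a = rng.normal(size=8)
p = a + 0.01 * rng.normal(size=8)   # near-duplicate sample (easy positive)
n = rng.normal(size=8)              # unrelated sample (easy negative)
print(triplet_loss(a, p, n))
```

When the negative is already farther than the positive by more than the margin, the loss is zero and that triplet stops contributing gradients.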

46 pages, 33541 KB  
Article
AIFloodSense: A Global Aerial Imagery Dataset for Semantic Segmentation and Understanding of Flooded Environments
by Georgios Simantiris, Konstantinos Bacharidis, Apostolos Papanikolaou, Petros Giannakakis and Costas Panagiotakis
Remote Sens. 2026, 18(6), 938; https://doi.org/10.3390/rs18060938 - 19 Mar 2026
Abstract
Accurate flood detection is critical for disaster response, yet the scarcity of diverse annotated datasets hinders robust model development. Existing resources typically suffer from limited geographic scope and insufficient annotation granularity, restricting the generalization capabilities of computer vision methods. To bridge this gap, we introduce AIFloodSense, a comprehensive evaluation benchmark designed to advance domain-generalized Artificial Intelligence for climate resilience. The dataset comprises 470 high-resolution aerial images capturing 230 distinct flood events across 64 countries and six continents. Unlike prior benchmarks, AIFloodSense ensures exceptional global diversity and temporal relevance (2022–2024), supporting three complementary tasks: (i) Image Classification, featuring novel sub-tasks for environment type, camera angle, and continent recognition; (ii) Semantic Segmentation, providing precise pixel-level masks for flood, sky, buildings, and background; and (iii) Visual Question Answering (VQA), enabling natural language reasoning for disaster assessment. We provide baseline benchmarks for all tasks using state-of-the-art architectures, demonstrating the dataset’s complexity and its utility in fostering robust AI tools for environmental monitoring. Crucially, we show that despite its compact size, AIFloodSense enables better generalization on external test sets than much larger alternatives, validating the premise that rigorous diversity is more effective than scale for training robust flood detection models, and is made publicly available to accelerate further research in the field. Full article

36 pages, 12321 KB  
Article
A Multi-Scale Spatio-Temporal Graph Neural Network for Meteorology-Driven Dissolved Oxygen Prediction in Taihu Lake
by Yiming Xia, Qiqi Li, Songhan Sun, Chen Ding, Yichen Zha, Jiquan Yang and Jianping Shi
Water 2026, 18(6), 716; https://doi.org/10.3390/w18060716 - 18 Mar 2026
Abstract
Dissolved oxygen (DO) is a crucial indicator for characterizing water quality and ecosystem status in freshwater lakes, and its concentration is closely correlated with the surrounding aquatic environment, particularly meteorological conditions. However, traditional DO prediction methods struggle to effectively capture the intricate coupling relationships between multi-station meteorological factors and DO concentration time series, limiting the prediction accuracy. This study proposes a multi-scale spatio-temporal graph neural network with integrated multi-meteorological factors. Taking Taihu Lake and its surrounding cities as the study area, a meteorological graph is constructed based on the geographic proximity between meteorological stations, and a dual-stage “local–global” modeling strategy is adopted to capture the spatio-temporal dependencies of DO concentration under meteorological forcing. Using R2, RMSE, MAE and MAPE as evaluation metrics, we conducted single-step and multi-step DO prediction experiments on the 2023–2024 Taihu Tuoshan water quality dataset and compared the proposed model with commonly used prediction models. In the single-step prediction task, the proposed model improved R2 by 2.12–20.84% and reduced RMSE, MAE, and MAPE by 3.05–40.80%, 14.97–53.26%, and 6.91–55.62%, respectively. In the 6-step-ahead and 12-step-ahead prediction tasks, RMSE and MAE were reduced by 3.79–15.75% and 6.68–23.09%, and by 5.03–10.39% and 7.13–16.46%, respectively. The experimental results provide quantitative evidence for the superiority of the proposed model in single-step and multi-step DO prediction. This study offers a novel data-driven tool for lake water quality early warning and drinking water safety, and the proposed framework can serve as a reference for water quality prediction studies driven by multi-source environmental factors. Full article
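The study above evaluates predictions with R2, RMSE, MAE, and MAPE. For reference, the four metrics can be sketched in a few lines of NumPy; the dissolved-oxygen values below are illustrative, not from the Taihu dataset:

```python
# The four regression metrics reported in the study, computed on toy values.
import numpy as np

def metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": float(1.0 - ss_res / ss_tot),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100.0),  # percent
    }

print(metrics([8.0, 9.0, 10.0, 11.0], [8.2, 8.9, 10.1, 10.8]))
```

Higher R2 and lower RMSE/MAE/MAPE are better, which is the direction of the percentage improvements quoted in the abstract.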

23 pages, 8149 KB  
Article
UGV Swarm Multi-View Fusion Under Occlusion: A Graph-Based Calibration-Free Framework
by Jiaqi Jing, Weilong Song, Hangcheng Zhang, Yong Liu, Fuyong Feng, Dezhi Zheng and Shangchun Fan
Drones 2026, 10(3), 214; https://doi.org/10.3390/drones10030214 - 18 Mar 2026
Abstract
In unmanned ground vehicle (UGV) swarm systems, comprehensive environmental awareness is critical for coordinated operations. Yet they are frequently deployed in occlusion-rich, constrained environments where multi-agent visual fusion is essential. However, existing methods are critically limited by offline-calibrated extrinsic parameters, hindering flexible deployment, and by a strong co-visibility assumption, which fails under severe occlusion. To overcome these constraints, we introduce an end-to-end, calibration-free framework for the joint registration of cameras and subjects. Our approach begins with a single-view module that estimates subjects’ poses and appearance features. Subsequently, a novel graph-based pose propagation module (GPPM) treats UGVs’ cameras as nodes in a graph, connecting them with edges when they share co-visible subjects identified via appearance matching. Breadth-first search (BFS) then finds the shortest registration path from any camera to a designated root camera, enabling pose propagation via local co-visibility links and global alignment of all subjects into a unified bird’s-eye-view (BEV) space. This strategy relaxes the stringent requirement of full co-visibility with the root node. A multi-task loss function is proposed to jointly optimize pose estimation and feature matching. Trained and evaluated on a synthetic dataset with occlusions (CSRD-O) collected by a UGV swarm system, our framework achieves mean camera pose errors of 1.57 m/8.70° and mean subject pose errors of 1.40 m/9.14°. Furthermore, we demonstrate a scene monitoring task using a UGV swarm system. Experiments show that the proposed method generates robust BEV estimates even under severe occlusion and low inter-view overlap. This work presents a purely visual, self-calibrating multi-view fusion perception scheme, demonstrating its potential to support cooperative perception, task-oriented monitoring, and collective situational awareness in UGV swarm systems. 
Full article
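The BFS step in the graph-based pose propagation above can be sketched directly: cameras are graph nodes, an edge exists when two cameras share a co-visible subject, and breadth-first search finds the shortest registration path from each camera back to the root. The co-visibility sets below are invented for illustration:

```python
# BFS over a camera co-visibility graph, as in the GPPM registration step:
# shortest chains of pairwise co-visibility from the root to every camera.
from collections import deque

def registration_paths(covisible, root):
    """covisible: {camera: set of co-visible cameras}. Returns the shortest
    root -> camera path for every reachable camera."""
    paths = {root: [root]}
    queue = deque([root])
    while queue:
        cam = queue.popleft()
        for nbr in covisible[cam]:
            if nbr not in paths:
                paths[nbr] = paths[cam] + [nbr]
                queue.append(nbr)
    return paths

graph = {
    "cam0": {"cam1"},                 # root shares subjects only with cam1
    "cam1": {"cam0", "cam2"},
    "cam2": {"cam1", "cam3"},
    "cam3": {"cam2"},                 # never directly co-visible with root
}
print(registration_paths(graph, "cam0"))
```

Note that `cam3` is registered through the chain cam0 → cam1 → cam2 → cam3, which is exactly how the framework relaxes the requirement of full co-visibility with the root camera.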

17 pages, 2130 KB  
Article
FogGate-YOLO: Traffic Object Detection in Foggy Environments Using Channel Selection Mechanisms
by Yuhe Yang, Suilian You, Jinpeng Yu and Bo Lu
Sensors 2026, 26(6), 1811; https://doi.org/10.3390/s26061811 - 13 Mar 2026
Abstract
To address the challenges posed by foggy conditions in object detection tasks, we propose FogGate-YOLO, an enhanced YOLOv8 framework designed for robust and efficient detection in foggy environments. Unlike traditional methods that rely on image dehazing or preprocessing enhancements, our approach directly strengthens the model’s feature representation by introducing two novel modules: GroupGatedConv and C2fGated. These modules collaboratively mitigate fog-induced degradation, improving feature extraction and enhancing performance without additional inference overhead. The GroupGatedConv module focuses on coarse-grained channel selection in the early to mid-stages of the backbone, suppressing noise while preserving essential structural features. The C2fGated module refines the aggregated features in both the backbone and neck after multi-branch fusion, enhancing fine-grained feature recalibration. Together, these two modules provide a hierarchical coarse to fine channel selection strategy that significantly improves the model’s discriminative power in foggy conditions. Full article
(This article belongs to the Topic Advances in Autonomous Vehicles, Automation, and Robotics)
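The general mechanism behind the gated modules above, per-channel sigmoid gating that suppresses noisy channels while preserving informative ones, can be sketched as follows; the gate weights are fixed stand-ins for learned parameters, and the real GroupGatedConv/C2fGated modules are more elaborate:

```python
# Sigmoid-gated channel selection: a learned per-channel score rescales the
# feature map so fog-degraded channels are suppressed. Weights are stand-ins.
import numpy as np

def gated_channels(features, gate_logits):
    """features: (C, H, W) feature map; gate_logits: (C,) learned scores."""
    gates = 1.0 / (1.0 + np.exp(-np.asarray(gate_logits)))   # sigmoid
    return features * gates[:, None, None]

x = np.ones((3, 2, 2))
out = gated_channels(x, [10.0, 0.0, -10.0])  # keep, halve, suppress
print(out[:, 0, 0])
```

Because gating is a cheap elementwise rescaling fused into the forward pass, it adds no inference overhead in the sense the abstract claims, unlike a separate dehazing preprocessing stage.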

22 pages, 1506 KB  
Article
Task Offloading Based on Virtual Network Embedding in Software-Defined Edge Networks: A Deep Reinforcement Learning Approach
by Lixin Ma, Peiying Zhang and Ning Chen
Information 2026, 17(3), 278; https://doi.org/10.3390/info17030278 - 10 Mar 2026
Abstract
The advent of 5G/6G technologies and the pervasive deployment of IoT devices are driving the emergence of demanding applications that necessitate ultra-low latency, high bandwidth, and significant computational power. Traditional cloud computing models fall short in meeting these stringent requirements. To address this, Software-Defined Edge Networks (SDENs) have emerged as a promising architecture, yet efficiently managing their heterogeneous and geographically distributed resources poses substantial challenges for optimal application provisioning. In response, this paper proposes a novel framework for intelligent task offloading, which reframes the intricate multi-component application task offloading problem as a Virtual Network Embedding (VNE) challenge within a SDEN environment. We introduce a comprehensive model where complex applications are represented as Virtual Network Requests (VNRs). In this model, each VNR consists of virtual nodes that demand specific computing and storage resources, as well as virtual links that demand specific bandwidth and must adhere to maximum tolerable delay constraints. To dynamically solve this NP-hard VNE problem in the face of stochastic VNR arrivals and dynamic network conditions, we leverage Deep Reinforcement Learning (DRL). Specifically, a Soft Actor-Critic (SAC) agent is employed at the SDN controller. This agent learns a sequential decision-making policy for mapping virtual nodes to physical edge servers and virtual links to network paths. To guide the agent towards efficient resource utilization, we define the reward for each successful embedding as the long-term revenue-to-cost ratio. By learning to maximize this reward, the agent is naturally driven to find economically viable allocation strategies. Comprehensive simulation experiments demonstrate that our SAC-based VNE approach significantly outperforms other baselines across key metrics, affirming its efficacy in dynamic SDEN environments. Full article
(This article belongs to the Section Information and Communications Technology)
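The revenue-to-cost reward described above is a standard VNE quantity: revenue counts the virtual resources a VNR requests, while cost counts the physical resources consumed, with bandwidth paid once per substrate hop of each mapped path. A minimal sketch under that common convention; the resource numbers are invented, and the paper's exact accounting may differ:

```python
# Revenue-to-cost ratio for one embedded VNR, the per-embedding reward signal
# that steers the SAC agent toward economical mappings. Numbers illustrative.
def revenue_to_cost(vnr_nodes, vnr_links, link_hops):
    """vnr_nodes: [(cpu, storage), ...]; vnr_links: [bandwidth, ...];
    link_hops: substrate path length chosen for each virtual link."""
    revenue = sum(c + s for c, s in vnr_nodes) + sum(vnr_links)
    cost = sum(c + s for c, s in vnr_nodes) + sum(
        bw * hops for bw, hops in zip(vnr_links, link_hops))
    return revenue / cost

# Two virtual nodes and one virtual link mapped onto a 2-hop substrate path.
print(revenue_to_cost([(4, 2), (3, 1)], [5], [2]))  # -> 0.75
```

The ratio is maximized (approaching 1) when every virtual link lands on a one-hop substrate path, which is why maximizing this reward drives the agent toward short, resource-frugal embeddings.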

22 pages, 5676 KB  
Article
Complete Coverage Random Path Planning Based on a Novel Fractal-Fractional-Order Multi-Scroll Chaotic System
by Xiaoran Lin, Mengxuan Dong, Xueya Xue, Xiaojuan Li and Yachao Wang
Mathematics 2026, 14(5), 926; https://doi.org/10.3390/math14050926 - 9 Mar 2026
Abstract
With the increasing demands for autonomy and coverage efficiency in tasks such as security patrol and post-disaster exploration using mobile robots, achieving random, efficient, and complete coverage path planning has become a critical challenge. Traditional chaotic path planning methods, while capable of generating unpredictable trajectories, still have limitations in terms of randomness strength, traversal uniformity, and convergence coverage. To address this, this study proposes a complete-coverage random path planning method based on a novel four-dimensional fractal-fractional multi-scroll chaotic system. The main contributions of this research are as follows: First, by introducing additional state variables and fractal-fractional operators into the classical Chen system, a fractal-fractional chaotic system with a multi-scroll attractor structure is constructed. The output of this system is then mapped into robot angular velocity commands to achieve area coverage in unknown environments. Key findings include: the novel chaotic system possesses two positive Lyapunov exponents; Spectral Entropy (SE) and Complexity (CO) analyses indicate that when parameter B is fixed and the fractional order α increases, the dynamic complexity of the system significantly rises; in a 50 × 50 grid environment, the robot driven by this system achieved a coverage rate of 98.88% within 10,000 iterations, outperforming methods based on Lorenz, Chua systems, and random walks; ablation experiments further demonstrate that the combined effects of the fractal order β, fractional order α, and multi-scroll nonlinear terms are key to enhancing system complexity and coverage performance. The significance of this study lies in that it not only provides new ideas for constructing complex chaotic systems but also offers a reliable theoretical foundation and practical solution for mobile robots to perform efficient, random, and high-coverage autonomous inspection tasks in unknown regions. 
Full article
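The core idea above, letting a chaotic sequence drive the robot's heading so its trajectory is unpredictable yet eventually covers the area, can be sketched with a much simpler chaos source. The logistic map below stands in for the paper's four-dimensional fractal-fractional multi-scroll system, and the grid size, step count, and turn-command mapping are all illustrative:

```python
# Chaos-driven coverage: a chaotic scalar sequence steers the heading of a
# unit-speed walker on an n x n grid; coverage is the fraction of cells
# visited. The logistic map is a stand-in for the paper's chaotic system.
import math

def chaotic_coverage(n=50, steps=10000):
    x, heading = 0.123, 0.0
    px, py = n / 2.0, n / 2.0
    visited = set()
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)                  # logistic map, chaotic regime
        heading += (x - 0.5) * math.pi           # chaotic turn command
        px = min(n - 1e-9, max(0.0, px + math.cos(heading)))
        py = min(n - 1e-9, max(0.0, py + math.sin(heading)))
        visited.add((int(px), int(py)))          # mark the current grid cell
    return len(visited) / (n * n)

print(f"coverage = {chaotic_coverage():.2%}")
```

The paper's contribution is precisely that a richer chaotic source (two positive Lyapunov exponents, multi-scroll attractor) traverses the grid far more uniformly than simple maps or random walks, reaching 98.88% coverage in the same 10,000-step budget.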

26 pages, 4140 KB  
Article
A Resource-Efficient Approach to Fine-Tuning a BERT-Base Model for Sentiment Analysis
by Abdullah M. Basahel, Shreyanth H. Giriyappa, Furqan Alam, Tahani Saleh Mohammed Alnazzawi, Saqib Qamar and Adnan Ahmed Abi Sen
Computers 2026, 15(3), 159; https://doi.org/10.3390/computers15030159 - 3 Mar 2026
Abstract
Fine-tuning a BERT-Base model for specific tasks, such as sentiment analysis, has become resource-intensive and often requires high computational power and memory. This paper introduces SCALE, a novel resource-efficient fine-tuning method that targets the most critical transformer layers, which reduces computational costs without sacrificing performance. By dynamically profiling transformer layers via activation magnitudes and attention entropy, SCALE selects and adapts only the most influential layers with lightweight adapter modules. The proposed method outperforms traditional fine-tuning techniques, achieving a 2.3% improvement in accuracy on the IMDB dataset and reducing training time by 56.3% compared to full-model fine-tuning. Experiments across various sentiment analysis benchmarks demonstrate SCALE’s effectiveness in optimizing fine-tuning for the BERT-base model in resource-constrained environments, achieving up to 99% of the performance of full-model fine-tuning while using only 40% of the parameters. The empirical validation in this study is restricted to binary and multi-class sentiment classification. The evaluation specifically reflects effectiveness in sentiment analysis text classification tasks. Full article
(This article belongs to the Section AI-Driven Innovations)
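The layer-profiling idea behind SCALE, scoring transformer layers by attention entropy (alongside activation magnitude) and adapting only the top scorers, can be sketched as below. The attention tensors are random stand-ins for real BERT activations, and the selection criterion is a simplified assumption about how the profiling works:

```python
# Score each layer by mean attention-row entropy and keep the k highest
# scorers for adapter fine-tuning. Attention matrices are synthetic.
import numpy as np

def attention_entropy(attn):
    """attn: (heads, query, key) with rows summing to 1; mean row entropy."""
    return float(np.mean(-np.sum(attn * np.log(attn + 1e-12), axis=-1)))

def select_layers(attn_per_layer, k=2):
    scores = [attention_entropy(a) for a in attn_per_layer]
    return sorted(np.argsort(scores)[-k:].tolist())  # k most diffuse layers

rng = np.random.default_rng(0)
layers = []
for sharpness in (0.1, 5.0, 0.2, 8.0):   # small value -> near-flat attention
    logits = rng.normal(size=(2, 4, 4)) * sharpness
    a = np.exp(logits)
    layers.append(a / a.sum(axis=-1, keepdims=True))   # softmax rows
print(select_layers(layers, k=2))
```

Whether high- or low-entropy layers are "most influential" is a design decision; the sketch only illustrates the profile-then-select pattern that lets SCALE train a small fraction of the parameters.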

26 pages, 3000 KB  
Article
Material Classification from Non-Line-of-Sight Acoustic Echoes Using Wavelet-Acoustic Hybrid Feature Fusion
by Dilan Onat Alakuş and İbrahim Türkoğlu
Sensors 2026, 26(5), 1577; https://doi.org/10.3390/s26051577 - 3 Mar 2026
Abstract
Acoustic material classification under non-line-of-sight (NLOS) conditions—where direct sound paths are obstructed—is a challenging task due to echo attenuation, complex reflections, and noise effects. This study aims to improve NLOS material recognition by introducing a novel wavelet–acoustic hybrid feature fusion method integrated with deep recurrent neural network architectures. Echo signals from nine different materials were collected using the newly developed ANLOS-R (Acoustic Non-Line-of-Sight Recognition) dataset, which was specifically designed to simulate realistic NLOS propagation environments. From these recordings, time-domain acoustic features and multi-scale wavelet-based energy and entropy statistics were extracted using ten wavelet families. The resulting 70-dimensional hybrid feature set was used to train several deep learning architectures, including Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), Gated Recurrent Unit (GRU), and Convolutional Neural Network–LSTM (CNN–LSTM). Among these, the CNN–LSTM achieved the highest balanced accuracy and macro-F1 score of 0.99, showing strong generalization and convergence performance. SHapley Additive exPlanations (SHAP) analysis indicated that Mel-Frequency Cepstral Coefficients (MFCCs) and wavelet entropy–energy features play complementary roles in material discrimination. The proposed approach provides a robust and interpretable framework for real-time NLOS acoustic sensing, bridging data-driven deep learning with the physical understanding of acoustic material behavior. Full article
(This article belongs to the Section Sensor Materials)
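The wavelet energy and entropy statistics above can be illustrated with a single Haar decomposition level in plain NumPy; the paper uses ten wavelet families and multiple scales, so this is only the shape of the feature computation, with a synthetic "echo" signal:

```python
# One Haar wavelet level splits the signal into approximation and detail
# bands; relative band energies and their entropy form the features.
import numpy as np

def haar_level(signal):
    s = np.asarray(signal, float)                    # length must be even
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)      # low-pass band
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)      # high-pass band
    return approx, detail

def band_features(signal):
    approx, detail = haar_level(signal)
    energies = np.array([np.sum(approx ** 2), np.sum(detail ** 2)])
    p = energies / energies.sum()                    # relative band energy
    entropy = float(-np.sum(p * np.log(p + 1e-12)))  # energy entropy
    return p, entropy

sig = np.sin(np.linspace(0, 4 * np.pi, 64))          # smooth synthetic echo
p, h = band_features(sig)
print(p, h)
```

A smooth echo concentrates energy in the approximation band (low entropy), while a rough, scattering-heavy echo spreads energy across bands, which is the kind of contrast that separates material classes.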

21 pages, 3342 KB  
Article
Automated Segmentation of Psoriasis in Uncontrolled Environments Using a Three-Class Ensemble Architecture
by Robertas Audinys, Vaiva Paskeviciute, Vidas Raudonis, Linas Eidimtas, Dominyka Stragyte and Skaidra Valiukeviciene
Appl. Sci. 2026, 16(5), 2422; https://doi.org/10.3390/app16052422 - 2 Mar 2026
Abstract
Psoriasis is a heterogeneous inflammatory skin disease requiring continuous monitoring to assess treatment efficacy. Automated lesion segmentation remains a significant computer vision challenge due to irregular plaque boundaries, variable skin tones, and uncontrolled lighting conditions in clinical photography. This study proposes a robust hybrid deep learning framework for the automated segmentation of psoriatic lesions in unconstrained environments. We constructed a unique dataset utilizing a hierarchical three-class labeling scheme (psoriatic plaque, healthy skin, and background) to mitigate the class imbalance and background noise often found in binary segmentation tasks. Following a systematic hyperparameter optimization using the Optuna framework, three distinct architectures—DeepLabV3+, UperNet, and SegFormer—were identified as optimal. A novel ensemble architecture was then developed to integrate the high sensitivity of DeepLabV3+, the precision of UperNet, and the contextual balance of SegFormer via a conflict-resolution voting algorithm. Experimental results demonstrate that the proposed hybrid model outperforms individual state-of-the-art architectures, achieving a Dice coefficient of 89.3% for lesion segmentation and an F1 score of 90.7% across skin classes. These findings confirm the system’s adaptability to real-world imaging conditions, validating its potential as an objective decision-support tool for dermatological practice. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
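The three-model, three-class voting idea above can be sketched per pixel: take the majority label, and break three-way ties with a fixed priority. The sensitivity-first priority rule below is an illustrative stand-in for the paper's conflict-resolution algorithm, whose actual tie-breaking logic is not specified in the abstract:

```python
# Per-pixel ensemble vote over three segmenters and three classes, with a
# fixed priority for full disagreement. Label maps are tiny toy examples.
import numpy as np

PLAQUE, SKIN, BACKGROUND = 0, 1, 2

def ensemble_vote(preds, priority=(PLAQUE, SKIN, BACKGROUND)):
    """preds: (3, H, W) integer label maps from the three segmenters."""
    preds = np.asarray(preds)
    out = np.empty(preds.shape[1:], dtype=int)
    for idx in np.ndindex(*out.shape):
        votes = np.bincount(preds[(slice(None),) + idx], minlength=3)
        if votes.max() >= 2:
            out[idx] = int(votes.argmax())        # majority agreement
        else:
            # Three-way disagreement: favor the most safety-critical class.
            out[idx] = next(c for c in priority if votes[c] == 1)
    return out

a = np.array([[0, 1], [2, 0]])
b = np.array([[0, 1], [1, 1]])
c = np.array([[0, 2], [0, 2]])
print(ensemble_vote([a, b, c]))
```

Erring toward the plaque class on conflicts trades some precision for sensitivity, matching the stated roles of the three backbones (DeepLabV3+ for sensitivity, UperNet for precision, SegFormer for balance).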

27 pages, 46741 KB  
Article
An Adaptive Enhanced Meta-Transformer for Few-Shot Fault Diagnosis of Unmanned Underwater Vehicle Actuators Under Noisy Conditions
by Yazhou Wang, Jie Liu, Yimin Chen, Rui Tang and Huiyu Wu
Actuators 2026, 15(3), 139; https://doi.org/10.3390/act15030139 - 2 Mar 2026
Abstract
Fault diagnosis is crucial for ensuring the navigational safety and reliability of unmanned underwater vehicles (UUVs). However, in complex marine environments, UUV fault samples are typically scarce and often contaminated by severe hydraulic noise, which significantly restricts the performance of existing diagnostic methods. To address these challenges, this paper proposes a few-shot fault diagnosis method based on an Adaptive Enhanced Meta-Transformer. First, operational vibration data from UUV actuators are acquired and preprocessed. Second, a feature enhancement module is constructed using an improved Transformer architecture that incorporates a novel Adaptive Head-Weighted Multi-Head Self-Attention mechanism. This mechanism enables the model to precisely localize key fault segments and enhance directional features, even under noisy backgrounds, effectively mitigating attention dispersion. Subsequently, a meta-optimization strategy is employed to iteratively update model parameters, enabling the model to rapidly adapt to new tasks with limited data. Finally, extensive experiments using real-world operational data from UUV actuators demonstrate that the proposed method outperforms state-of-the-art baselines in terms of accuracy and robustness, particularly in cross-component and noisy scenarios. Full article
(This article belongs to the Special Issue Cutting-Edge Advancements in Robotics and Control Systems)
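The abstract above names an Adaptive Head-Weighted Multi-Head Self-Attention mechanism but does not give its equations. Below is a minimal NumPy sketch of one plausible reading, in which each head's output is mixed by adaptive softmax weights rather than plain concatenation; the function name, the `head_logits` parameter, and the weighted-sum combination are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def head_weighted_self_attention(x, Wq, Wk, Wv, head_logits):
    """Self-attention whose per-head outputs are combined by adaptive
    weights (softmax over head_logits) instead of concatenation.
    x: (seq_len, d_model); Wq/Wk/Wv: (n_heads, d_model, d_head)."""
    n_heads, _, d_head = Wq.shape
    w = softmax(head_logits)                 # adaptive per-head weights
    out = np.zeros((x.shape[0], d_head))
    for h in range(n_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(d_head), axis=-1)
        out += w[h] * (attn @ v)             # weight this head's contribution
    return out
```

In the paper's setting the head weights would be learned jointly with the rest of the network, letting the model down-weight heads whose attention is dispersed by hydraulic noise.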

25 pages, 10445 KB  
Article
Temporal Trend and Fluctuation Learning via Enhanced Attention Mamba for Carbon Price Interval Forecasting
by Lijun Duan, Jin Chen, Qiankun Zuo, Yanfei Zhu, Yi Di and Ruiheng Li
Entropy 2026, 28(3), 270; https://doi.org/10.3390/e28030270 - 28 Feb 2026
Viewed by 239
Abstract
Accurate carbon price forecasting is essential for transforming complex carbon trading markets into efficiently managed and stably operating systems. Existing long-term time series forecasting methods struggle to capture the nonlinear and non-stationary characteristics inherent in carbon prices. To address this limitation, we propose the Temporal Trend and Fluctuation Learning (TTFL) model for interval-valued carbon price forecasting. The model first uses wavelet decomposition to separate the forecasting task into two branches: Price Trend Learning (PTL) and Price Fluctuation Learning (PFL). The PTL branch adopts a forward–backward enhanced Mamba architecture to extract low-frequency, long-term trend features. This design facilitates price interactions across time steps. The enhanced Mamba module leverages a state space model (SSM) to preserve historical information selectively and employs a forgetting gate to recover missing information. As a result, the model captures complementary dependencies across different price points, improving prediction reliability. The PFL branch integrates an attention mechanism with the standard Mamba architecture to model high-frequency temporal dynamics. It provides fine-grained short-term volatility information essential for market participants. We also introduce an interval-valued recovery loss function. This loss quantifies the overlap between predicted and actual interval prices, emphasizes trend learning, and stabilizes model training. We evaluate the TTFL model on three real-world carbon trading markets. Comparative experiments demonstrate that TTFL achieves superior prediction accuracy and robustness relative to baseline methods. Through collaborative learning and selective state space modeling, our approach not only outperforms traditional forecasting models but also offers stakeholders a practical tool for navigating complex carbon market environments. This work contributes a novel forecasting paradigm that integrates multivariate collaborative learning with selective state space modeling. It provides actionable insights for policymaking, investment strategy development, and risk management in the energy and environmental sectors. Full article
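TTFL's first step, splitting the series into a low-frequency trend branch and a high-frequency fluctuation branch via wavelet decomposition, can be illustrated with a one-level Haar split. The wavelet family and decomposition level are not stated in the abstract, so Haar with averaging normalization is an assumption here:

```python
import numpy as np

def haar_split(x):
    """One-level Haar wavelet split of an even-length price series.
    Returns (trend, fluctuation), both upsampled back to the original
    length so that trend + fluctuation reconstructs x exactly."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / 2.0            # low-frequency trend part
    d = (x[0::2] - x[1::2]) / 2.0            # high-frequency fluctuation
    trend = np.repeat(a, 2)                  # piecewise-constant trend
    fluct = np.repeat(d, 2) * np.tile([1.0, -1.0], len(d))
    return trend, fluct
```

The two components reconstruct the input exactly, so forecasts from the trend and fluctuation branches can in principle be recombined additively.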

28 pages, 2533 KB  
Article
Intermittent Active Inference
by Markus Klar, Sebastian Stein, Fraser Paterson, John H. Williamson, Henrik Gollee and Roderick Murray-Smith
Entropy 2026, 28(3), 269; https://doi.org/10.3390/e28030269 - 28 Feb 2026
Viewed by 448
Abstract
Active inference provides a unified framework for perception and action as processes of minimizing prediction error given a generative model of the environment. Whilst standard formulations assume continuous inference and control, empirical evidence indicates that humans update their control strategies intermittently, which reduces computational demands and mitigates propagation of correlated noise in closed feedback loops. To address this, we introduce Intermittent Active Inference (IAIF), a novel variant in which sensing, inference, planning, or acting can occur intermittently. This paper investigates intermittent planning, where IAIF agents follow their current plan and only re-plan when the prediction error exceeds a predefined threshold or the Expected Free Energy associated with the current plan surpasses prior estimates. We evaluate intermittent planning in a mouse pointing task, comparing against continuous planning while examining the impact of different threshold parameters on performance and efficiency. The findings indicate that IAIF reduces computation time whilst maintaining task performance, particularly when the number of plans sampled during planning is increased. For the proposed trigger based on Expected Free Energy, no additional calibration is required. The straightforward integration of IAIF makes it valuable in practical modelling workflows. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)
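The re-planning rule described above (follow the current plan, re-plan only when prediction error exceeds a threshold) can be sketched in a toy 1-D tracking loop. Everything here is illustrative: the dynamics, the constant-velocity plan, and the noise seed are assumptions, and the Expected-Free-Energy trigger is replaced by the simpler error threshold:

```python
import numpy as np

def run_intermittent(target, horizon, threshold, steps=50):
    """Toy 1-D agent: it plans a constant-velocity move toward `target`,
    predicts its own trajectory under that plan, and re-plans only when
    the prediction error exceeds `threshold`. Returns (final_x, replans)."""
    x, plan, predicted, replans = 0.0, 0.0, 0.0, 0
    rng = np.random.default_rng(1)
    for _ in range(steps):
        x += plan + rng.normal(scale=0.01)   # act under small process noise
        predicted += plan                    # what the current plan expects
        if abs(x - predicted) > threshold or plan == 0.0:
            plan = (target - x) / horizon    # re-plan toward the target
            predicted = x                    # reset the internal prediction
            replans += 1                     # count (costly) planning calls
    return x, replans
```

Raising the threshold trades tracking accuracy for fewer planning calls, mirroring the computation-time savings the abstract reports for intermittent planning.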
