Search Results (1,918)

Search Parameters:
Keywords = feature maximization

22 pages, 2454 KB  
Article
Less Is More: Data-Driven Day-Ahead Electricity Price Forecasting with Short Training Windows
by Vasilis Michalakopoulos, Christoforos Menos-Aikateriniadis, Elissaios Sarmas, Antonis Zakynthinos, Pavlos S. Georgilakis and Dimitris Askounis
Energies 2026, 19(2), 376; https://doi.org/10.3390/en19020376 (registering DOI) - 13 Jan 2026
Abstract
Volatility in modern electricity Day-Ahead Markets (DAMs) usually makes long-term historical data irrelevant or even detrimental for accurate forecasting. This study directly addresses this challenge by proposing a novel forecasting paradigm centered on extremely short training windows, ranging from 7 to 90 days, to maximize responsiveness to recent market dynamics. This volatility-driven approach intentionally creates a data-scarce environment where the suitability of deep learning models is limited. Building on the hypothesis that shallow machine learning models, and more specifically boosting trees, are better adapted to this reality, we evaluate four models, namely LSTM with feed-forward error correction, XGBoost, LightGBM, and CatBoost, across three European energy markets (Greece, Belgium, Ireland) using feature sets derived from ENTSO-E forecast data. Results consistently demonstrate that LightGBM provides superior forecasting accuracy and robustness, particularly when trained on 45–60 day windows, which strike an optimal balance between temporal relevance and learning depth. Furthermore, LightGBM exhibits a stronger capability in detecting seasonal effects and peak price events. These findings validate that a short-window training strategy, combined with computationally efficient shallow models, is a highly effective and practical approach for navigating the volatility and data constraints of modern DAM forecasting. Full article
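The short-training-window strategy can be illustrated with a minimal sketch: before each forecast, a model is refit on only the most recent W days of prices. The linear trend model, the 45-day window, and the synthetic price series below are illustrative stand-ins; the paper's actual models are an LSTM and gradient-boosted trees.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (pure stdlib)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def rolling_window_forecast(prices, window_days=45):
    """Forecast the next day using only the most recent `window_days`
    observations, mimicking the short-training-window strategy."""
    if len(prices) < window_days:
        raise ValueError("not enough history for the chosen window")
    recent = prices[-window_days:]
    xs = list(range(window_days))
    a, b = fit_linear(xs, recent)
    return a * window_days + b  # extrapolate one step ahead

# Synthetic, gently rising price series (EUR/MWh)
history = [50.0 + 0.5 * d for d in range(120)]
forecast = rolling_window_forecast(history, window_days=45)
```

Older observations never enter the fit, so a regime change more than 45 days back cannot distort the forecast.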
38 pages, 13037 KB  
Article
Coconut Shell-Derived Activated Carbons: Preparation, Physicochemical Properties, and Dye Removal from Water
by Vanda María Cachola Maldito Lowden, María Francisca Alexandre-Franco, Juan Manuel Garrido-Zoido, Eduardo Manuel Cuerda-Correa and Vicente Gómez-Serrano
Molecules 2026, 31(2), 263; https://doi.org/10.3390/molecules31020263 (registering DOI) - 12 Jan 2026
Abstract
Valorizing coconut shell waste as a renewable lignocellulosic precursor offers a sustainable route to produce high-performance activated carbons for wastewater treatment. In this study, coconut shells were transformed into activated carbons through physical activation (air, CO₂, steam) and chemical activation (H₃PO₄, ZnCl₂, KOH), allowing direct comparison of how each method influences porosity and surface chemistry. Among the physically activated samples, steam activation produced the best material, A-ST, with S_BET = 738 m² g⁻¹, V_mi = 0.38 cm³ g⁻¹ and V_me = 0.07 cm³ g⁻¹. KOH activation yielded the top-performing carbon, A-KOH, achieving S_BET = 1600 m² g⁻¹, V_mi = 0.74 cm³ g⁻¹, and V_me = 0.22 cm³ g⁻¹. Adsorption tests with methylene blue, methyl orange, and orange G showed a clear link between physicochemical features and dye uptake. A-ST and A-KOH exhibited the highest capacities due to their wide micro–mesoporosity and favorable surface charge at the adsorption pH. In both cases, methylene blue was most strongly retained, confirming that large aromatic cations benefit from π–π interactions with graphene-like layers and easy micropore access. Overall, the results demonstrate that coconut-shell valorization is maximized when activation enhances both porosity and surface chemistry, enabling the production of tailored sorbents for the efficient removal of organic contaminants. Full article
(This article belongs to the Special Issue Carbon-Based Materials for Sustainable Chemistry: 3rd Edition)
23 pages, 6249 KB  
Article
Refining Open-Source Asset Management Tools: AI-Driven Innovations for Enhanced Reliability and Resilience of Power Systems
by Gopal Lal Rajora, Miguel A. Sanz-Bobi, Lina Bertling Tjernberg and Pablo Calvo-Bascones
Technologies 2026, 14(1), 57; https://doi.org/10.3390/technologies14010057 - 11 Jan 2026
Abstract
Traditional methods of asset management in electric power systems rely upon fixed schedules and reactive measurements, leading to challenges in the transparent prioritization of maintenance under evolving operating conditions and incomplete data. In this paper, we introduce a new, fully integrated artificial intelligence (AI)-driven approach for enhancing the resilience and reliability of open-source asset management tools to support improved performance and decisions in electric power system operations. This methodology addresses and overcomes several significant challenges, including data heterogeneity, algorithmic limitations, and inflexible decision-making, through a three-module workflow. The data fidelity module provides a domain-aware pipeline for distinguishing structural (missing) values from explicit missingness using sophisticated imputation methods, including Multiple Imputation by Chained Equations (MICE) and Generative Adversarial Network (GAN)-based hybrids. The characterization module employs seven complementary weighting strategies, including PCA, Autoencoder, GA-based optimization, SHAP, Decision-Tree Importance, and Entropy Weighting, to achieve objective feature weight assignment, thereby eliminating the need for subjective manual rules. The optimization module enhances the action space through multi-objective optimization, balancing reliability maximization and cost minimization. A synthetic dataset of 100 power transformers was used to validate that MICE achieved better imputation than the other methods. The optimized weighting framework successfully categorizes Health Index values into five condition levels, while the multi-objective maintenance policy optimization generates decisions that align with real-world asset management practices. The proposed framework provides Transmission and Distribution System Operators (TSOs/DSOs) with an adaptable, industry-oriented decision-support workflow for enhancing reliability, optimizing maintenance expenses, and improving asset management policies for critical power infrastructure. Full article
(This article belongs to the Special Issue AI for Smart Engineering Systems)
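The chained-imputation idea behind MICE can be sketched in a few lines of pure Python: initialize missing cells with column means, then repeatedly regress each column on the other and refill its missing entries. The two-column setup, the synthetic transformer-age/health values, and the fixed iteration count below are illustrative assumptions, not the paper's implementation.

```python
def mean(v):
    return sum(v) / len(v)

def ols(xs, ys):
    """Least-squares fit y = a*x + b."""
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def chained_impute(x, y, n_iter=10):
    """Tiny MICE-style loop for two correlated columns.
    `None` marks a missing entry; each column is repeatedly
    regressed on the other and its missing cells refilled."""
    mis_x = [i for i, v in enumerate(x) if v is None]
    mis_y = [i for i, v in enumerate(y) if v is None]
    fx = mean([v for v in x if v is not None])
    fy = mean([v for v in y if v is not None])
    x = [fx if v is None else v for v in x]  # mean-initialize
    y = [fy if v is None else v for v in y]
    for _ in range(n_iter):
        a, b = ols(y, x)          # regress x on y
        for i in mis_x:
            x[i] = a * y[i] + b
        a, b = ols(x, y)          # regress y on x
        for i in mis_y:
            y[i] = a * x[i] + b
    return x, y

# Hypothetical data: transformer age vs. a health score that decays with age
age = [1.0, 2.0, 3.0, 4.0, None]
health = [9.0, 8.0, 7.0, None, 5.0]
age_f, health_f = chained_impute(age, health)
```

Because the observed pairs lie on the line health = 10 − age, the imputed cells drift from the column means toward that relationship over the iterations.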
25 pages, 1075 KB  
Article
Prompt-Based Few-Shot Text Classification with Multi-Granularity Label Augmentation and Adaptive Verbalizer
by Deling Huang, Zanxiong Li, Jian Yu and Yulong Zhou
Information 2026, 17(1), 58; https://doi.org/10.3390/info17010058 - 8 Jan 2026
Abstract
Few-Shot Text Classification (FSTC) aims to classify text accurately into predefined categories using minimal training samples. Recently, prompt-tuning-based methods have achieved promising results by constructing verbalizers that map input data to the label space, thereby maximizing the utilization of pre-trained model features. However, existing verbalizer construction methods often rely on external knowledge bases, which require complex noise filtering and manual refinement, making the process time-consuming and labor-intensive, while approaches based on pre-trained language models (PLMs) frequently overlook inherent prediction biases. Furthermore, conventional data augmentation methods focus on modifying input instances while overlooking the integral role of label semantics in prompt tuning. This disconnection often leads to a trade-off where increased sample diversity comes at the cost of semantic consistency, resulting in marginal improvements. To address these limitations, this paper first proposes a novel Bayesian Mutual Information-based method that optimizes label mapping to retain general PLM features while reducing reliance on irrelevant or unfair attributes to mitigate latent biases. Based on this method, we propose two synergistic generators that synthesize semantically consistent samples by integrating label word information from the verbalizer to effectively enrich data distribution and alleviate sparsity. To guarantee the reliability of the augmented set, we propose a Low-Entropy Selector that serves as a semantic filter, retaining only high-confidence samples to safeguard the model against ambiguous supervision signals. Furthermore, we propose a Difficulty-Aware Adversarial Training framework that fosters generalized feature learning, enabling the model to withstand subtle input perturbations. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on most few-shot and full-data splits, with F1 score improvements of up to +2.8% on the standard AG’s News benchmark and +1.0% on the challenging DBPedia benchmark. Full article
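A Low-Entropy Selector of the kind described above can be sketched as a simple entropy filter over predicted class distributions; the threshold value and sample format below are hypothetical, not the paper's settings.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def low_entropy_select(samples, threshold=0.5):
    """Keep only augmented samples whose predicted class distribution
    is confident, i.e. has entropy below `threshold`."""
    return [text for text, probs in samples if entropy(probs) < threshold]

augmented = [
    ("confident sample", [0.95, 0.03, 0.02]),  # low entropy -> kept
    ("ambiguous sample", [0.40, 0.35, 0.25]),  # high entropy -> dropped
]
kept = low_entropy_select(augmented)
```

Filtering on entropy rather than on the top-1 probability alone also penalizes distributions whose mass is spread over several runner-up classes.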
16 pages, 2242 KB  
Article
Hydraulic Design Optimization of a Multi-Stage Overtopping Wave Energy Converter Using WCSPH Methodology Under Site-Specific Wave Conditions
by Sung-Hwan An and Jong-Hyun Lee
J. Mar. Sci. Eng. 2026, 14(2), 127; https://doi.org/10.3390/jmse14020127 - 7 Jan 2026
Abstract
In multi-level overtopping wave energy converters (OWEC), the inlet slot governs overtopping losses and the distribution of inflow among reservoirs, making it a critical design feature for maximizing hydraulic efficiency. This study defines the relative slot width as λ (=w/Lslop) and investigates its influence on the performance of an SSG-based multi-level OWEC using DualSPHysics, an open-source weakly compressible smoothed particle hydrodynamics (WCSPH) solver, in a two-dimensional recirculating numerical wave tank under regular-wave conditions. Hydraulic efficiency is evaluated as the ratio of the overtopping-stored potential-energy flux to the incident wave energy flux per unit width. The results show a nonlinear dependence of reservoir-level contributions on λ, and an intermediate λ provides a balanced contribution across upper, middle, and lower reservoirs, yielding the maximum overall efficiency. To extend the analysis beyond a single design wave, a global-state performance map in the period–height space is constructed and combined with the target-sea spectral characteristics, indicating that the optimal geometry maintains relatively robust efficiency in the dominant spectral band while revealing efficiency limitations associated with insufficient overtopping at small waves and saturation at large waves. The proposed approach provides quantitative guidance for slot design and site-relevant performance screening of multi-level OWEC. Full article
(This article belongs to the Special Issue Challenges of Marine Energy Development and Facilities Engineering)
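The efficiency metric used above, the ratio of overtopping-stored potential-energy flux to incident wave-energy flux per unit width, can be sketched under standard deep-water regular-wave assumptions. The formulas below (energy flux P = ρgH²/8 · c_g with group velocity c_g = gT/4π) are textbook relations, and the example reservoir numbers are invented, not the paper's setup.

```python
import math

RHO, G = 1025.0, 9.81  # sea-water density (kg/m^3), gravity (m/s^2)

def incident_wave_power(H, T):
    """Deep-water energy flux per unit width for a regular wave of
    height H (m) and period T (s): P = (rho*g*H^2/8) * c_g."""
    cg = G * T / (4 * math.pi)  # deep-water group velocity
    return RHO * G * H ** 2 / 8 * cg

def hydraulic_efficiency(overtopping, wave_power):
    """overtopping: list of (q_i, R_i) pairs, the overtopping rate per
    unit width (m^2/s) and crest level (m) of each reservoir."""
    stored_flux = sum(RHO * G * q * R for q, R in overtopping)
    return stored_flux / wave_power

# Hypothetical three-reservoir example: H = 1 m, T = 5 s
reservoirs = [(0.05, 1.0), (0.03, 1.5), (0.01, 2.0)]
eff = hydraulic_efficiency(reservoirs, incident_wave_power(1.0, 5.0))
```

The nonlinear dependence on the slot parameter λ enters through the q_i values, which shift between reservoirs as the slot geometry changes.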
24 pages, 3401 KB  
Article
Ground to Altitude: Weakly-Supervised Cross-Platform Domain Generalization for LiDAR Semantic Segmentation
by Jingyi Wang, Xiaojia Xiang, Jun Lai, Yu Liu, Qi Li and Chen Chen
Remote Sens. 2026, 18(2), 192; https://doi.org/10.3390/rs18020192 - 6 Jan 2026
Abstract
Collaborative sensing between low-altitude remote sensing and ground-based mobile mapping lays the theoretical foundation for multi-platform 3D data fusion. However, point clouds collected from Airborne Laser Scanners (ALSs) remain scarce due to high acquisition and annotation costs. In contrast, while autonomous driving datasets are more accessible, dense annotation remains a significant bottleneck. To address this, we propose Ground to Altitude (GTA), a weakly supervised domain generalization (DG) framework. GTA leverages sparse autonomous driving data to learn robust representations, enabling reliable segmentation on airborne point clouds under zero-label conditions. Specifically, we tackle cross-platform discrepancies through progressive domain-aware augmentation (PDA) and cross-scale semantic alignment (CSA). For PDA, we design a distance-guided dynamic upsampling strategy to approximate airborne point density and a cross-view augmentation scheme to model viewpoint variations. For CSA, we impose cross-domain feature consistency and contrastive regularization to enhance robustness against perturbations. A progressive training pipeline is further employed to maximize the utility of limited annotations and abundant unlabeled data. Our study reveals the limitations of existing DG methods in cross-platform scenarios. Extensive experiments demonstrate that GTA achieves state-of-the-art (SOTA) performance. Notably, under the challenging 0.1% supervision setting, our method achieves a 6.36% improvement in mIoU over the baseline on the SemanticKITTI → DALES benchmark, demonstrating significant gains across diverse categories beyond just structural objects. Full article
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Fourth Edition))
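The distance-guided dynamic upsampling idea can be caricatured in a few lines: points far from the ground sensor are jitter-duplicated so the cloud's density profile better resembles an airborne scan. The distance thresholds, jitter scale, and copy counts below are invented for illustration and are not the paper's strategy.

```python
import random

def distance_guided_upsample(points, near=10.0, far=50.0, seed=0):
    """Duplicate (with small Gaussian jitter) points beyond `near`,
    adding more copies the farther they are, as a crude stand-in for
    distance-guided dynamic upsampling. `points` is a list of
    (x, y, z) tuples in the sensor frame."""
    rng = random.Random(seed)
    out = list(points)
    for x, y, z in points:
        r = (x * x + y * y + z * z) ** 0.5
        if r < near:
            copies = 0
        else:
            copies = min(3, int((r - near) / (far - near) * 3) + 1)
        for _ in range(copies):
            out.append((x + rng.gauss(0, 0.05),
                        y + rng.gauss(0, 0.05),
                        z + rng.gauss(0, 0.05)))
    return out

pts = [(1.0, 0.0, 0.0), (60.0, 0.0, 0.0)]
dense = distance_guided_upsample(pts)  # far point gains 3 jittered copies
```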
28 pages, 2832 KB  
Article
Unsupervised Neural Beamforming for Uplink MU-SIMO in 3GPP-Compliant Wireless Channels
by Cemil Vahapoglu, Timothy J. O’Shea, Wan Liu, Tamoghna Roy and Sennur Ulukus
Sensors 2026, 26(2), 366; https://doi.org/10.3390/s26020366 - 6 Jan 2026
Abstract
Beamforming is highly significant for the physical layer of wireless communication systems, particularly for multi-antenna systems such as multiple input multiple output (MIMO) and massive MIMO, since it improves spectral efficiency and reduces interference. Traditional linear beamforming methods such as zero-forcing beamforming (ZFBF) and minimum mean square error (MMSE) beamforming provide closed-form solutions. Yet, their performance drops when they face non-ideal conditions such as imperfect channel state information (CSI), dynamic propagation environment, or high-dimensional system configurations, primarily due to static assumptions and computational limitations. These limitations have led to the rise of deep learning-based beamforming, where data-driven models derive beamforming solutions directly from CSI. By leveraging the representational capabilities of cutting-edge deep learning architectures, along with the increasing availability of data and computational resources, deep learning presents an adaptive and potentially scalable alternative to traditional methodologies. In this work, we unify and systematically compare our two unsupervised learning architectures for uplink receive beamforming: a simple neural network beamforming (NNBF) model, composed of convolutional and fully connected layers, and a transformer-based NNBF model that integrates grouped convolutions for feature extraction and transformer blocks to capture long-range channel dependencies. They are evaluated in a common multi-user single input multiple output (MU-SIMO) system model to maximize sum-rate across single-antenna user equipments (UEs) under 3GPP-compliant channel models, namely TDL-A and UMa. Furthermore, we present a FLOPs-based asymptotic computational complexity analysis for the NNBF architectures alongside baseline methods, namely ZFBF and MMSE beamforming, explicitly characterizing inference-time scaling behavior. Experiments for the simple NNBF are performed under simplified assumptions such as stationary UEs and perfect CSI across varying antenna configurations in the TDL-A channel. On the other hand, the transformer-based NNBF is evaluated in more realistic conditions, including urban macro environments with imperfect CSI, diverse UE mobilities, coding rates, and modulation schemes. Results show that the transformer-based NNBF achieves superior performance under realistic conditions at the cost of increased computational complexity, while the simple NNBF presents comparable or better performance than baseline methods with significantly lower complexity under simplified assumptions. Full article
(This article belongs to the Special Issue Sensor Networks and Communication with AI)
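The sum-rate objective that such unsupervised beamformers maximize follows directly from the standard per-user SINR expression, SINR_k = |w_kᴴh_k|² / (Σ_{j≠k} |w_kᴴh_j|² + σ²‖w_k‖²). The toy two-user, two-antenna channel below with matched-filter combining is an illustrative assumption, not the paper's evaluation setup.

```python
import math

def sum_rate(H, W, noise=1.0):
    """Uplink sum-rate (bits/s/Hz) for single-antenna UEs.
    H[k]: complex channel vector of user k (one entry per rx antenna).
    W[k]: receive combining vector applied to user k."""
    def inner(w, h):
        # Hermitian inner product w^H h
        return sum(wc.conjugate() * hc for wc, hc in zip(w, h))
    rate = 0.0
    for k in range(len(H)):
        signal = abs(inner(W[k], H[k])) ** 2
        interference = sum(abs(inner(W[k], H[j])) ** 2
                           for j in range(len(H)) if j != k)
        w_power = sum(abs(c) ** 2 for c in W[k])
        rate += math.log2(1 + signal / (interference + noise * w_power))
    return rate

# Two users on orthogonal channels; matched-filter combining (W = H)
H = [[1 + 0j, 0j], [0j, 1 + 0j]]
rate = sum_rate(H, H)  # each user sees SINR = 1, so 1 bit/s/Hz apiece
```

A learned beamformer would output W directly and be trained with the negative of this quantity as its loss.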
30 pages, 16273 KB  
Article
PMG-SAM: Boosting Auto-Segmentation of SAM with Pre-Mask Guidance
by Jixue Gao, Xiaoyan Jiang, Anjie Wang, Yongbin Gao, Zhijun Fang and Michael S. Lew
Sensors 2026, 26(2), 365; https://doi.org/10.3390/s26020365 - 6 Jan 2026
Abstract
The Segment Anything Model (SAM), a foundational vision model, struggles with fully automatic segmentation of specific objects. Its “segment everything” mode, reliant on a grid-based prompt strategy, suffers from localization blindness and computational redundancy, leading to poor performance on tasks like Dichotomous Image Segmentation (DIS). To address this, we propose PMG-SAM, a framework that introduces a Pre-Mask Guided paradigm for automatic targeted segmentation. Our method employs a dual-branch encoder to generate a coarse global Pre-Mask, which then acts as a dense internal prompt to guide the segmentation decoder. A key component, our proposed Dense Residual Fusion Module (DRFM), iteratively co-refines multi-scale features to significantly enhance the Pre-Mask’s quality. Extensive experiments on challenging DIS and Camouflaged Object Segmentation (COS) tasks validate our approach. On the DIS-TE2 benchmark, PMG-SAM boosts the maximal F-measure from SAM’s 0.283 to 0.815. Notably, our fully automatic model’s performance surpasses even the ground-truth bounding box prompted modes of SAM and SAM2, while using only 22.9 M trainable parameters (58.8% of SAM2-Tiny). PMG-SAM thus presents an efficient and accurate paradigm for resolving the localization bottleneck of large vision models in prompt-free scenarios. Full article
(This article belongs to the Section Intelligent Sensors)
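The maximal F-measure reported on DIS benchmarks is conventionally obtained by sweeping binarization thresholds over the soft prediction and keeping the best Fβ score. A small sketch, using β² = 0.3 (the usual saliency-detection setting) and a toy five-pixel mask:

```python
def max_f_measure(pred, gt, beta2=0.3, steps=255):
    """Sweep binarization thresholds over a soft mask and return the
    maximal F-beta score. pred: predicted probabilities in [0, 1];
    gt: binary ground-truth labels (same length)."""
    best = 0.0
    for s in range(1, steps + 1):
        t = s / (steps + 1)
        tp = sum(1 for p, g in zip(pred, gt) if p >= t and g == 1)
        fp = sum(1 for p, g in zip(pred, gt) if p >= t and g == 0)
        fn = sum(1 for p, g in zip(pred, gt) if p < t and g == 1)
        if tp == 0:
            continue  # undefined precision/recall at this threshold
        prec = tp / (tp + fp)
        rec = tp / (tp + fn)
        f = (1 + beta2) * prec * rec / (beta2 * prec + rec)
        best = max(best, f)
    return best

pred = [0.9, 0.8, 0.4, 0.2, 0.1]  # toy soft mask
gt = [1, 1, 1, 0, 0]
score = max_f_measure(pred, gt)  # perfect separation at t around 0.3
```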
23 pages, 1543 KB  
Article
Jailbreaking MLLMs via Attention Redirection and Entropy Regularization
by Jiayu Du, Fangxu Dong and Fan Zhang
Electronics 2026, 15(1), 237; https://doi.org/10.3390/electronics15010237 - 5 Jan 2026
Abstract
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across vision–language tasks, yet their safety alignment remains vulnerable to adversarial manipulation. Existing jailbreak attacks typically optimize adversarial perturbations using negative log-likelihood loss alone, which often leads to overfitting on target affirmative tokens and fails to elicit substantive harmful content. We propose Attention-Enhancement and Targeted Entropy Regularization for Adversarial Optimization (AERO), a novel jailbreak framework addressing these limitations through two complementary mechanisms. First, an attention enhancement loss strategically redirects cross-modal attention toward perturbed visual tokens, distracting safety-aligned features from scrutinizing malicious queries. Second, a targeted entropy regularization scheme maximizes output diversity over non-refusal tokens during initial generation, creating a permissive context that improves cross-query generalization and enables responses that genuinely address malicious requests. Extensive experiments on multiple state-of-the-art MLLMs demonstrate that AERO significantly outperforms existing methods, achieving Attack Success Rates (ASRs) of 65.8–70.7% on MM-SafetyBench and 71.0–84.5% on HarmBench. Our approach surpasses the strongest baselines by margins of up to 16.2% in success rate while consistently generating higher-quality harmful content. Full article
(This article belongs to the Special Issue Artificial Intelligence Safety and Security)
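A targeted entropy regularizer of the kind described can be sketched as a term subtracted from the usual negative log-likelihood: maximizing the entropy of the output distribution restricted to a chosen (non-refusal) token subset. The token indices, λ value, and exact loss form below are illustrative, not AERO's actual objective.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_regularized_loss(logits, target_idx, non_refusal_idx, lam=0.1):
    """NLL on the target token minus lambda times the entropy of the
    distribution renormalized over the non-refusal token subset.
    Minimizing this favors the target while keeping the non-refusal
    part of the vocabulary diverse."""
    probs = softmax(logits)
    nll = -math.log(probs[target_idx])
    sub = [probs[i] for i in non_refusal_idx]
    z = sum(sub)
    sub = [p / z for p in sub]  # renormalize over the subset
    ent = -sum(p * math.log(p) for p in sub if p > 0)
    return nll - lam * ent

loss = entropy_regularized_loss([2.0, 1.0, 0.5, 0.1], 0, [0, 1, 2])
```

Because the entropy term is nonnegative, the regularized loss is never larger than the plain NLL, and its gradient resists collapsing all mass onto the single target token.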
16 pages, 1546 KB  
Article
A Deep Reinforcement Learning-Based Approach for Bandwidth-Aware Service Function Chaining
by Yan-Jing Wu, Shi-Hao Hwang, Wen-Shyang Hwang and Ming-Hua Cheng
Electronics 2026, 15(1), 227; https://doi.org/10.3390/electronics15010227 - 4 Jan 2026
Abstract
Network function virtualization (NFV) is an emerging technology that is gaining popularity for network function migration. NFV converts a network function from a dedicated hardware device into a virtual network function (VNF), thereby improving the agility of network services and reducing management costs. A complex network service can be expressed as a service function chain (SFC) request, which consists of an ordered sequence of VNFs. Given the inherent heterogeneity and dynamic nature of network services, effective SFC deployment encounters significant unpredictable challenges. Machine learning-based methods offer the flexibility to predict and select the optimal next action based on existing data models. In this paper, we propose a deep reinforcement learning-based approach for bandwidth-aware service function chaining (DRL-BSFC). Aiming to simultaneously improve the acceptance ratio of SFC requests and maximize the total revenue for Internet service providers, DRL-BSFC integrates a graph convolutional network (GCN) for feature extraction of the underlying physical network, a sequence-to-sequence (Seq2Seq) model for capturing the order information of an SFC request, and a modified A3C (Asynchronous Advantage Actor–Critic) algorithm of deep reinforcement learning. To ensure efficient resource utilization and a higher acceptance ratio of SFC requests, the bandwidth cost for deploying an SFC is explicitly incorporated into the A3C’s reward function. The effectiveness and superiority of DRL-BSFC compared to the existing DRL-SFCP scheme are demonstrated via simulations. The performance measures include the acceptance ratio of SFC requests, the average bandwidth cost, the average remaining link bandwidth, and the average revenue-to-cost ratio under different SFC request arrival rates. Full article
(This article belongs to the Special Issue New Trends in Machine Learning, System and Digital Twins)
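A reward that incorporates bandwidth cost, as the abstract describes for the A3C agent, might take the following shape; the β weight, the numbers, and the exact functional form are hypothetical stand-ins, not the paper's reward design.

```python
def sfc_reward(accepted, revenue, bandwidth_cost, beta=0.5):
    """Bandwidth-aware episode reward: an accepted SFC pays out its
    revenue discounted by a beta-weighted bandwidth cost; a rejected
    request earns nothing."""
    return revenue - beta * bandwidth_cost if accepted else 0.0

# Accepting a request worth 10 units over a path costing 4 bandwidth units
r = sfc_reward(True, 10.0, 4.0)
```

Penalizing bandwidth in the reward steers the agent toward shorter embedding paths, which is what raises the acceptance ratio under load.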
20 pages, 3948 KB  
Article
Integrated DEM–Experimental Framework for Multi-Objective Optimization of a Low-Disturbance Liquid Manure Injector Shank
by Zhiwei Zeng, Adewale Sedara and Matthew Digman
AgriEngineering 2026, 8(1), 10; https://doi.org/10.3390/agriengineering8010010 - 1 Jan 2026
Abstract
Low-disturbance liquid manure injection is increasingly important for sustainable soil management because it reduces residue burial, minimizes surface disruption, and lowers energy demand during application. However, the performance of low-disturbance shanks has not been systematically optimized, and their interaction with soil remains poorly quantified. This study developed an integrated discrete element method (DEM)–experimental framework to evaluate and optimize the performance of a purpose-built injector shank featuring a 45° rake angle, 25 mm thickness, and 110 mm width. The framework aimed to identify operating conditions that balance soil disturbance and energy efficiency. A DEM soil model was constructed using mechanical properties obtained from laboratory characterization tests and validated against soil bin experiments measuring draft force and soil rupture area across five working depths (100–250 mm) and three travel speeds (350–450 mm/s). The calibrated model showed strong agreement with experimental observations, yielding mean absolute relative errors of 1.7% for draft force and 6.2% for rupture area. Following validation, a multi-objective optimization was performed to minimize draft force while maximizing soil rupture, two key indicators of energy demand and injection effectiveness. Optimization results identified the most favorable operating parameters at a forward speed of 450 mm/s and an injection depth of 150 mm, achieving a desirability score of 0.884. The integrated DEM–experimental framework demonstrated reliable predictive capability and enables virtual testing of soil–tool interactions prior to field implementation. This study provides a scientifically grounded approach for improving injector shank operation and supports sustainable manure management by identifying settings that achieve adequate soil disruption while reducing energy consumption. Full article
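The desirability score reported above (0.884) is characteristic of the standard Derringer–Suich approach: each response is mapped to a [0, 1] desirability (smaller-is-better for draft force, larger-is-better for rupture area) and the overall score is their geometric mean. The response ranges and values in this sketch are invented, not the study's data.

```python
def desirability_minimize(y, low, high):
    """Derringer d for a smaller-is-better response, clipped to [0, 1]."""
    return min(1.0, max(0.0, (high - y) / (high - low)))

def desirability_maximize(y, low, high):
    """Derringer d for a larger-is-better response, clipped to [0, 1]."""
    return min(1.0, max(0.0, (y - low) / (high - low)))

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical ranges: draft force 1-5 kN (minimize),
# soil rupture area 0.02-0.10 m^2 (maximize)
d_draft = desirability_minimize(2.0, 1.0, 5.0)
d_rupture = desirability_maximize(0.08, 0.02, 0.10)
D = overall_desirability([d_draft, d_rupture])
```

The geometric mean makes the score zero whenever any single response is fully undesirable, which is why balanced operating points win the optimization.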
19 pages, 17228 KB  
Article
The Influence of Leading Edge Tubercle on the Transient Pressure Fluctuations of a Hubless Propeller
by Max Hieke, Matthias Witte and Frank-Hendrik Wurm
Int. J. Turbomach. Propuls. Power 2026, 11(1), 4; https://doi.org/10.3390/ijtpp11010004 - 31 Dec 2025
Abstract
In recent years, the design priorities of modern marine propellers have shifted from maximizing efficiency to minimizing vibration-induced noise emissions and improving structural durability. However, an optimized design does not necessarily ensure optimal performance across the full operational range of a vessel. Due to operational constraints such as reduced docking times and regional speed regulations, propellers frequently operate off-design. This deviation from the design point leads to periodic turbulent boundary layer separation on the propeller blades, resulting in increased unsteady pressure fluctuations and, consequently, elevated hydroacoustic noise emissions. To mitigate these effects, bio-inspired modifications have been investigated as a means of improving flow characteristics and reducing pressure fluctuations. Tubercles, characteristic protrusions along the leading edge of humpback whale fins, have been shown to enhance lift characteristics beyond the stall angle by modifying the flow separation pattern. However, their influence on transient pressure fluctuations and the associated hydroacoustic behavior of marine propellers remains insufficiently explored. In this study, we apply the concept of tubercles to the blades of a hubless propeller, also referred to as a rim-drive propeller. We analyze the pressure fluctuations on the blades and in the wake by comparing conventional propeller blades with those featuring tubercles. The flow fields of both reference and tubercle-modified blades were simulated using the Stress Blended Eddy Simulation (SBES) turbulence model to highlight differences in the flow field. In both configurations, multiple helix-shaped vortex systems form in the propeller wake, but their decay characteristics vary, with the vortex structures collapsing at different distances from the propeller center. Additionally, Proper Orthogonal Decomposition (POD) analysis was employed to isolate and analyze the periodic, coherent flow structures in each case. Previous studies on the flow field of hubless propellers have demonstrated a direct correlation between transient pressure fluctuations in the flow field and the resulting noise emissions. It was demonstrated that the tubercle modification significantly reduces pressure fluctuations both on the propeller blades and in the wake flow. In the analyzed case, a reduction in pressure fluctuations by a factor of three to ten for the different BPF orders was observed within the wake flow. Full article
36 pages, 630 KB  
Article
Semantic Communication Unlearning: A Variational Information Bottleneck Approach for Backdoor Defense in Wireless Systems
by Sümeye Nur Karahan, Merve Güllü, Mustafa Serdar Osmanca and Necaattin Barışçı
Future Internet 2026, 18(1), 17; https://doi.org/10.3390/fi18010017 - 28 Dec 2025
Abstract
Semantic communication systems leverage deep neural networks to extract and transmit essential information, achieving superior performance in bandwidth-constrained wireless environments. However, their vulnerability to backdoor attacks poses critical security threats, where adversaries can inject malicious triggers during training to manipulate system behavior. This paper introduces Selective Communication Unlearning (SCU), a novel defense mechanism based on Variational Information Bottleneck (VIB) principles. SCU employs a two-stage approach: (1) joint unlearning to remove backdoor knowledge from both encoder and decoder while preserving legitimate data representations, and (2) contrastive compensation to maximize feature separation between poisoned and clean samples. Extensive experiments on the RML2016.10a wireless signal dataset demonstrate that SCU achieves 629.5 ± 191.2% backdoor mitigation (5-seed average; 95% CI: [364.1%, 895.0%]), with peak performance of 1486% under optimal conditions, while maintaining only 11.5% clean performance degradation. This represents an order-of-magnitude improvement over detection-based defenses and fundamentally outperforms existing unlearning approaches that achieve near-zero or negative mitigation. We validate SCU across seven signal processing domains, four adaptive backdoor types, and varying SNR conditions, demonstrating unprecedented robustness and generalizability. The framework achieves a 243 s unlearning time, making it practical for resource-constrained edge deployments in 6G networks.
(This article belongs to the Special Issue Future Industrial Networks: Technologies, Algorithms, and Protocols)
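The contrastive compensation stage described in the abstract, which maximizes feature separation between poisoned and clean samples, could in principle be realized with a simple margin-based objective. The loss form, names, and data below are hypothetical illustrations, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical sketch of a contrastive separation objective: push features of
# (suspected) poisoned samples away from clean-sample features by a margin.
def contrastive_separation_loss(clean_feats, poisoned_feats, margin=1.0):
    # Pairwise Euclidean distances between clean and poisoned feature vectors.
    diffs = clean_feats[:, None, :] - poisoned_feats[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Hinge penalty: pairs closer than the margin are penalized; once every
    # pair is separated by at least the margin, the loss is zero.
    return float(np.mean(np.maximum(0.0, margin - dists)))

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.1, size=(8, 16))
poisoned = rng.normal(3.0, 0.1, size=(8, 16))  # already far from clean
print(contrastive_separation_loss(clean, poisoned))  # ~0 when well separated
```

Minimizing such a loss during unlearning would drive the encoder to map trigger-bearing inputs into a region of feature space distinct from legitimate representations.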

34 pages, 20157 KB  
Article
Dual-Level Attention Relearning for Cross-Modality Rotated Object Detection in UAV RGB–Thermal Imagery
by Zhuqiang Li, Zhijun Zhen, Shengbo Chen, Liqiang Zhang and Lisai Cao
Remote Sens. 2026, 18(1), 107; https://doi.org/10.3390/rs18010107 - 28 Dec 2025
Abstract
Effectively leveraging multi-source unmanned aerial vehicle (UAV) observations for reliable object recognition is often compromised by environmental extremes (e.g., occlusion and low illumination) and the inherent physical discrepancies between modalities. To overcome these limitations, we propose DLANet, a lightweight, rotation-aware multimodal object detection framework that introduces a dual-level attention relearning strategy to maximize complementary information from visible (RGB) and thermal infrared (TIR) imagery. DLANet integrates two novel components: the Implicit Fine-Grained Fusion Module (IF2M), which facilitates deep cross-modal interaction by jointly modeling channel and spatial dependencies at intermediate stages, and the Adaptive Branch Feature Weighting (ABFW) module, which dynamically recalibrates modality contributions at higher levels to suppress noise and pseudo-targets. This synergistic approach allows the network to relearn feature importance based on real-time scene conditions. To support industrial applications, we construct the OilLeak dataset, a dedicated benchmark for onshore oil-spill detection. The experimental results demonstrate that DLANet achieves state-of-the-art performance, recording an mAP@0.5 of 0.858 on the public DroneVehicle dataset while maintaining high efficiency, with 39.04 M parameters and 72.69 GFLOPs, making it suitable for real-time edge deployment.
(This article belongs to the Special Issue Advances in SAR, Optical, Hyperspectral and Infrared Remote Sensing)
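The channel- and spatial-attention weighting that the abstract attributes to cross-modal fusion can be illustrated schematically. The fixed sigmoid weighting below is a hypothetical stand-in for the learned module, with invented shapes and data; it is not the actual IF2M.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sketch of channel- then spatial-attention fusion for two
# modality branches (RGB and TIR); the real module is learned end to end.
def fuse(rgb, tir):
    # rgb, tir: (C, H, W) feature maps from each modality branch.
    stacked = rgb + tir
    # Channel attention: weight each channel by its global average response.
    channel_w = sigmoid(stacked.mean(axis=(1, 2)))[:, None, None]
    # Spatial attention: weight each location by its mean activation over channels.
    spatial_w = sigmoid(stacked.mean(axis=0))[None, :, :]
    return stacked * channel_w * spatial_w

rgb = np.ones((4, 5, 5))
tir = np.ones((4, 5, 5))
fused = fuse(rgb, tir)
print(fused.shape)  # (4, 5, 5)
```

The idea is that channels or locations dominated by noise in one modality receive low weights, so the stronger modality dominates the fused representation.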

62 pages, 797 KB  
Article
The Relation of Slavic Verb Prefixes to Perfective Aspect
by Hana Filip
Languages 2026, 11(1), 5; https://doi.org/10.3390/languages11010005 - 26 Dec 2025
Abstract
This paper advances two main theses: The first overarching thesis is that the Slavic perfective/imperfective distinction is predominantly of a lexical-derivational nature. Among the categories of the tense–modality–aspect (TMA) system, Slavic aspect systems represent marginal categories rather than core ones, i.e., those realized by means of inflectional morphology. The second, and related, thesis concerns the status of Slavic verb prefixes in Slavic aspect systems, given that prefixed verbs constitute the bulk of their perfective verbs. I will provide some arguments, also defended elsewhere, that Slavic verb prefixes are not perfective markers, e.g., they do not spell out a functional head/feature in the dedicated aspect structure, as is often assumed in syntactic theories of aspect, and neither do they carry a uniform semantic function for the interpretation of perfective aspect. Instead, Slavic verb prefixes are best treated as separate from perfectivity, on both formal and semantic grounds. This separation, however, does not mean that the two are unrelated. Here, the semantics of perfectivity is represented by means of the maximalization operator (maxe). The most fundamental requirement for its application, and for any maximalization operator for that matter, is that it respect some ordering criterion. It is the role of Slavic verb prefixes to contribute to its specification. They do so by virtue of having common uses/meanings that can be analyzed as extensive or intensive measure functions or vague quantifiers over arguments of the verbs to which they are attached. Such meanings are reducible to a uniform scalar-based representation, from which the requisite ordering criterion can be extracted.
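The maximalization operator mentioned in the abstract is often rendered schematically as selecting the maximal events satisfying a predicate relative to an ordering criterion. The formula below is a hedged reconstruction of that general idea, not the paper's exact definition.

```latex
% Schematic: MAX_E maps a set of events P to its maximal elements with
% respect to an ordering criterion \leq_C (e.g., a contextually supplied
% measure scale); the notation here is illustrative.
\[
  \mathrm{MAX}_E(P) \;=\; \{\, e \in P \mid \neg\exists e' \in P \,[\, e <_C e' \,]\,\}
\]
```

On this sketch, the prefixes do not encode perfectivity itself; they help fix the ordering criterion $<_C$ against which maximal events are identified.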