Search Results (2,755)

Search Parameters:
Keywords = hybrid-attention

25 pages, 27206 KiB  
Article
KCUNET: Multi-Focus Image Fusion via the Parallel Integration of KAN and Convolutional Layers
by Jing Fang, Ruxian Wang, Xinglin Ning, Ruiqing Wang, Shuyun Teng, Xuran Liu, Zhipeng Zhang, Wenfeng Lu, Shaohai Hu and Jingjing Wang
Entropy 2025, 27(8), 785; https://doi.org/10.3390/e27080785 - 24 Jul 2025
Abstract
Multi-focus image fusion (MFIF) is an image-processing method that aims to generate fully focused images by integrating source images from different focal planes. However, the defocus spread effect (DSE) often leads to blurred or jagged focus/defocus boundaries in fused images, which degrades image quality. To address this issue, this paper proposes a novel model that embeds the Kolmogorov–Arnold network with convolutional layers in parallel within the U-Net architecture (KCUNet). This model keeps the spatial dimensions of the feature map constant to maintain high-resolution details while progressively increasing the number of channels to capture multi-level features at the encoding stage. In addition, KCUNet incorporates a content-guided attention mechanism to enhance edge information processing, which is crucial for DSE reduction and edge preservation. The model’s performance is optimized through a hybrid loss function that evaluates several aspects, including edge alignment, mask prediction, and image quality. Finally, comparative evaluations against 15 state-of-the-art methods demonstrate KCUNet’s superior performance in both qualitative and quantitative analyses. Full article
(This article belongs to the Section Signal and Data Analysis)
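A minimal PyTorch sketch of the parallel two-branch encoder pattern the abstract describes is given below: a convolutional branch and a point-wise branch run side by side, their outputs are fused, and spatial resolution is preserved while the channel count grows. The point-wise MLP branch is only a stand-in for the actual KAN layer (which uses learnable spline activations), and all module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ParallelKANConvBlock(nn.Module):
    """Illustrative encoder block: a convolutional branch and a per-pixel
    MLP branch (stand-in for a KAN layer) run in parallel, their outputs are
    fused, and channels grow while spatial resolution is preserved."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Convolutional branch: local spatial context, padding keeps H x W fixed.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.GELU(),
        )
        # Point-wise branch applied independently at every spatial location
        # (a plain MLP here; a real KAN layer would use learnable splines).
        self.pointwise_branch = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
        )
        # 1x1 fusion of the concatenated branch outputs.
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.conv_branch(x), self.pointwise_branch(x)], dim=1)
        return self.fuse(fused)

if __name__ == "__main__":
    block = ParallelKANConvBlock(in_ch=32, out_ch=64)
    feats = torch.randn(1, 32, 128, 128)      # B x C x H x W
    print(block(feats).shape)                 # torch.Size([1, 64, 128, 128])
```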
19 pages, 8743 KiB  
Article
Role of Feature Diversity in the Performance of Hybrid Models—An Investigation of Brain Tumor Classification from Brain MRI Scans
by Subhash Chand Gupta, Shripal Vijayvargiya and Vandana Bhattacharjee
Diagnostics 2025, 15(15), 1863; https://doi.org/10.3390/diagnostics15151863 - 24 Jul 2025
Abstract
Introduction: Brain tumors, marked by abnormal and rapid cell growth, pose severe health risks and require accurate diagnosis for effective treatment. Classifying brain tumors using deep learning techniques applied to Magnetic Resonance Imaging (MRI) images has attracted the attention of many researchers, and reducing model bias while enhancing robustness remains a very pertinent, active topic. Methods: To capture diverse information from different feature sets, we propose a Features Concatenation-based Brain Tumor Classification (FCBTC) Framework using hybrid deep learning models. We chose three pretrained models (ResNet50, VGG16, and DenseNet121) as the baseline models; our proposed hybrid models are built by fusing their feature vectors. Results: The testing phase results show that, for the FCBTC Model-3, values for Precision, Recall, F1-score, and Accuracy are 98.33%, 98.26%, 98.27%, and 98.40%, respectively. This reinforces our idea that feature diversity does improve the classifier’s performance. Conclusions: Comparative performance evaluation shows that the proposed hybrid FCBTC models perform better than the baseline models. Full article
(This article belongs to the Special Issue Machine Learning in Precise and Personalized Diagnosis)
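As a rough illustration of the feature-concatenation idea, the sketch below pools features from three torchvision backbones (ResNet50, VGG16, DenseNet121) and concatenates them before a small classification head. The head size, dropout, and four-class output are assumptions, not the paper's exact FCBTC configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class FeatureConcatClassifier(nn.Module):
    """Concatenates pooled features from three CNN backbones and feeds them
    to a small classification head."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # weights=None keeps the sketch offline; load pretrained weights in practice.
        resnet = models.resnet50(weights=None)
        vgg = models.vgg16(weights=None)
        densenet = models.densenet121(weights=None)

        self.resnet_body = nn.Sequential(*list(resnet.children())[:-1])                  # -> (B, 2048, 1, 1)
        self.vgg_body = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))             # -> (B, 512, 1, 1)
        self.densenet_body = nn.Sequential(densenet.features, nn.AdaptiveAvgPool2d(1))   # -> (B, 1024, 1, 1)

        self.head = nn.Sequential(
            nn.Linear(2048 + 512 + 1024, 512),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = torch.cat([
            torch.flatten(self.resnet_body(x), 1),
            torch.flatten(self.vgg_body(x), 1),
            torch.flatten(self.densenet_body(x), 1),
        ], dim=1)
        return self.head(f)

if __name__ == "__main__":
    model = FeatureConcatClassifier(num_classes=4)
    mri_batch = torch.randn(2, 3, 224, 224)
    print(model(mri_batch).shape)   # torch.Size([2, 4])
```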

23 pages, 13580 KiB  
Article
Enabling Smart Grid Resilience with Deep Learning-Based Battery Health Prediction in EV Fleets
by Muhammed Cavus and Margaret Bell
Batteries 2025, 11(8), 283; https://doi.org/10.3390/batteries11080283 - 24 Jul 2025
Abstract
The widespread integration of electric vehicles (EVs) into smart grid infrastructures necessitates intelligent and robust battery health diagnostics to ensure system resilience and performance longevity. While numerous studies have addressed the estimation of State of Health (SOH) and the prediction of remaining useful life (RUL) using machine and deep learning, most existing models fail to capture both short-term degradation trends and long-range contextual dependencies jointly. In this study, we introduce V2G-HealthNet, a novel hybrid deep learning framework that uniquely combines Long Short-Term Memory (LSTM) networks with Transformer-based attention mechanisms to model battery degradation under dynamic vehicle-to-grid (V2G) scenarios. Unlike prior approaches that treat SOH estimation in isolation, our method directly links health prediction to operational decisions by enabling SOH-informed adaptive load scheduling and predictive maintenance across EV fleets. Trained on over 3400 proxy charge-discharge cycles derived from 1 million telemetry samples, V2G-HealthNet achieved state-of-the-art performance (SOH RMSE: 0.015, MAE: 0.012, R2: 0.97), outperforming leading baselines including XGBoost and Random Forest. For RUL prediction, the model maintained an MAE of 0.42 cycles over a five-cycle horizon. Importantly, deployment simulations revealed that V2G-HealthNet triggered maintenance alerts at least three cycles ahead of critical degradation thresholds and redistributed high-load tasks away from ageing batteries—capabilities not demonstrated in previous works. These findings establish V2G-HealthNet as a deployable, health-aware control layer for smart city electrification strategies. Full article
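The LSTM-plus-Transformer-attention combination can be wired up roughly as follows; this is a generic sketch, not the published V2G-HealthNet, and the feature count, window length, and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class LSTMTransformerSOH(nn.Module):
    """Illustrative LSTM + Transformer-attention regressor for State of Health:
    the LSTM captures short-term degradation trends, and the Transformer encoder
    attends over the whole cycle window for long-range context."""
    def __init__(self, n_features: int = 8, hidden: int = 64, n_heads: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, dim_feedforward=4 * hidden, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(hidden, 1)   # scalar SOH estimate per window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)                # (B, T, hidden)
        h = self.encoder(h)                # self-attention over the T cycle steps
        return self.head(h[:, -1])         # predict SOH from the last time step

if __name__ == "__main__":
    window = torch.randn(4, 50, 8)         # 4 batteries, 50 time steps, 8 telemetry features
    print(LSTMTransformerSOH()(window).shape)   # torch.Size([4, 1])
```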

25 pages, 6911 KiB  
Article
Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network
by Li Zhao, Tongyang Zhu, Chuang Wang, Feng Tian and Hongge Yao
Mathematics 2025, 13(15), 2370; https://doi.org/10.3390/math13152370 - 24 Jul 2025
Abstract
To address the challenges of image inpainting in scenarios with extensive or irregular missing regions—particularly detail oversmoothing, structural ambiguity, and textural incoherence—this paper proposes an Image Structure-Guided (ISG) framework that hierarchically integrates structural priors with semantic-aware texture synthesis. The proposed methodology advances a two-stage restoration paradigm: (1) Structural Prior Extraction, where adaptive edge detection algorithms identify residual contours in corrupted regions, and a transformer-enhanced network reconstructs globally consistent structural maps through contextual feature propagation; (2) Structure-Constrained Texture Synthesis, wherein a multi-scale generator with hybrid dilated convolutions and channel attention mechanisms iteratively refines high-fidelity textures under explicit structural guidance. The framework introduces three innovations: (1) a hierarchical feature fusion architecture that synergizes multi-scale receptive fields with spatial-channel attention to preserve long-range dependencies and local details simultaneously; (2) spectral-normalized Markovian discriminator with gradient-penalty regularization, enabling adversarial training stability while enforcing patch-level structural consistency; and (3) dual-branch loss formulation combining perceptual similarity metrics with edge-aware constraints to align synthesized content with both semantic coherence and geometric fidelity. Our experiments on the two benchmark datasets (Places2 and CelebA) have demonstrated that our framework achieves more unified textures and structures, bringing the restored images closer to their original semantic content. Full article
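Of the listed components, the spectral-normalized Markovian (patch-level) discriminator is the most self-contained; a hedged sketch is shown below. Layer widths and depths are illustrative, and the gradient-penalty term mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(in_ch: int, out_ch: int, stride: int) -> nn.Sequential:
    """Spectral-normalized strided convolution used by the patch discriminator."""
    return nn.Sequential(
        spectral_norm(nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1)),
        nn.LeakyReLU(0.2, inplace=True),
    )

class PatchDiscriminator(nn.Module):
    """Markovian (patch-level) discriminator: the output is a grid of realism
    scores, so adversarial feedback is applied per local patch rather than per image."""
    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            sn_conv(in_ch, base, 2),
            sn_conv(base, base * 2, 2),
            sn_conv(base * 2, base * 4, 2),
            sn_conv(base * 4, base * 8, 1),
            spectral_norm(nn.Conv2d(base * 8, 1, kernel_size=4, padding=1)),  # patch scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    fake = torch.randn(1, 3, 256, 256)
    print(PatchDiscriminator()(fake).shape)   # grid of patch scores: torch.Size([1, 1, 30, 30])
```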

25 pages, 2129 KiB  
Article
Zero-Shot 3D Reconstruction of Industrial Assets: A Completion-to-Reconstruction Framework Trained on Synthetic Data
by Yongjie Xu, Haihua Zhu and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(15), 2949; https://doi.org/10.3390/electronics14152949 - 24 Jul 2025
Abstract
Creating high-fidelity digital twins (DTs) for Industry 4.0 applications is fundamentally reliant on the accurate 3D modeling of physical assets, a task complicated by the inherent imperfections of real-world point cloud data. This paper addresses the challenge of reconstructing accurate, watertight, and topologically sound 3D meshes from sparse, noisy, and incomplete point clouds acquired in complex industrial environments. We introduce a robust two-stage completion-to-reconstruction framework, C2R3D-Net, that systematically tackles this problem. The methodology first employs a pretrained, self-supervised point cloud completion network to infer a dense and structurally coherent geometric representation from degraded inputs. Subsequently, a novel adaptive surface reconstruction network generates the final high-fidelity mesh. This network features a hybrid encoder (FKAConv-LSA-DC), which integrates fixed-kernel and deformable convolutions with local self-attention to robustly capture both coarse geometry and fine details, and a boundary-aware multi-head interpolation decoder, which explicitly models sharp edges and thin structures to preserve geometric fidelity. Comprehensive experiments on the large-scale synthetic ShapeNet benchmark demonstrate state-of-the-art performance across all standard metrics. Crucially, we validate the framework’s strong zero-shot generalization capability by deploying the model—trained exclusively on synthetic data—to reconstruct complex assets from a custom-collected industrial dataset without any additional fine-tuning. The results confirm the method’s suitability as a robust and scalable approach for 3D asset modeling, a critical enabling step for creating high-fidelity DTs in demanding, unseen industrial settings. Full article
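The hybrid FKAConv-LSA-DC encoder is not reproduced here; the sketch below only illustrates the local self-attention (LSA) ingredient, letting each point attend to its k nearest neighbors. Dimensions and the neighborhood size are assumptions.

```python
import torch
import torch.nn as nn

class LocalSelfAttention(nn.Module):
    """Illustrative local self-attention for point clouds: each point attends
    only to its k nearest neighbors, mixing fine local geometry into its feature."""
    def __init__(self, dim: int = 64, k: int = 16):
        super().__init__()
        self.k = k
        self.to_qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates; feats: (B, N, C) point features
        dists = torch.cdist(xyz, xyz)                          # (B, N, N)
        knn_idx = dists.topk(self.k, largest=False).indices    # (B, N, k)

        q, k, v = self.to_qkv(feats).chunk(3, dim=-1)          # each (B, N, C)
        # Gather neighbor keys/values for every query point.
        batch_idx = torch.arange(xyz.shape[0]).view(-1, 1, 1)
        k_nbr = k[batch_idx, knn_idx]                          # (B, N, k, C)
        v_nbr = v[batch_idx, knn_idx]                          # (B, N, k, C)

        attn = (q.unsqueeze(2) * k_nbr).sum(-1) * self.scale   # (B, N, k)
        attn = attn.softmax(dim=-1)
        out = (attn.unsqueeze(-1) * v_nbr).sum(dim=2)          # (B, N, C)
        return self.proj(out) + feats                          # residual connection

if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)
    f = torch.randn(2, 1024, 64)
    print(LocalSelfAttention()(pts, f).shape)   # torch.Size([2, 1024, 64])
```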

19 pages, 1711 KiB  
Article
TSDCA-BA: An Ultra-Lightweight Speech Enhancement Model for Real-Time Hearing Aids with Multi-Scale STFT Fusion
by Zujie Fan, Zikun Guo, Yanxing Lai and Jaesoo Kim
Appl. Sci. 2025, 15(15), 8183; https://doi.org/10.3390/app15158183 - 23 Jul 2025
Abstract
Lightweight speech denoising models have made remarkable progress in improving both speech quality and computational efficiency. However, most models rely on long temporal windows as input, limiting their applicability in low-latency, real-time scenarios on edge devices. To address this challenge, we propose a lightweight hybrid module comprising Temporal Statistics Enhancement (TSE), Squeeze-and-Excitation-based Dual Convolutional Attention (SDCA), and Band-wise Attention (BA). The TSE module enhances single-frame spectral features by concatenating statistical descriptors—mean, standard deviation, maximum, and minimum—thereby capturing richer local information without relying on temporal context. The SDCA and BA modules integrate a simplified residual structure and channel attention, while the BA component further strengthens the representation of critical frequency bands through band-wise partitioning and differentiated weighting. The proposed model requires only 0.22 million multiply–accumulate operations (MMACs) and contains a total of 112.3 K parameters, making it well suited for low-latency, real-time speech enhancement applications. Experimental results demonstrate that among lightweight models with fewer than 200 K parameters, the proposed approach outperforms most existing methods in both denoising performance and computational efficiency, significantly reducing processing overhead. Furthermore, real-device deployment on an improved hearing aid confirms an inference latency as low as 2 milliseconds, validating its practical potential for real-time edge applications. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
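The TSE idea, augmenting a single STFT frame with its own summary statistics, is simple enough to sketch directly; the frame size below is an arbitrary example and the module name is illustrative.

```python
import torch
import torch.nn as nn

class TemporalStatisticsEnhancement(nn.Module):
    """Illustrative TSE-style module: augment each single-frame spectral vector
    with its mean, standard deviation, maximum, and minimum so the network sees
    richer local statistics without needing temporal context."""
    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (B, F) magnitude spectrum of one STFT frame
        stats = torch.stack([
            frame.mean(dim=-1),
            frame.std(dim=-1),
            frame.amax(dim=-1),
            frame.amin(dim=-1),
        ], dim=-1)                                   # (B, 4)
        return torch.cat([frame, stats], dim=-1)     # (B, F + 4)

if __name__ == "__main__":
    spec_frame = torch.randn(8, 257).abs()           # e.g. 257 frequency bins (512-point STFT)
    print(TemporalStatisticsEnhancement()(spec_frame).shape)   # torch.Size([8, 261])
```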

20 pages, 695 KiB  
Article
Deep Hybrid Model for Fault Diagnosis of Ship’s Main Engine
by Se-Ha Kim, Tae-Gyeong Kim, Junseok Lee, Hyoung-Kyu Song, Hyeonjoon Moon and Chang-Jae Chun
J. Mar. Sci. Eng. 2025, 13(8), 1398; https://doi.org/10.3390/jmse13081398 - 23 Jul 2025
Abstract
Ships play a crucial role in modern society, serving purposes such as marine transportation, tourism, and exploration. Malfunctions or defects in the main engine, which is a core component of ship operations, can disrupt normal functionality and result in substantial financial losses. Consequently, early fault diagnosis of abnormal engine conditions is critical for effective maintenance. In this paper, we propose a deep hybrid model for fault diagnosis of ship main engines, utilizing exhaust gas temperature data. The proposed model uses both time-domain features (TDFs) and time-series raw data. In order to effectively extract features from each type of data, two distinct feature extraction networks and an attention module-based classifier are designed. The model performance is evaluated using real-world cylinder exhaust gas temperature data collected from a large ship’s low-speed two-stroke main engine. The experimental results demonstrate that the proposed method outperforms conventional methods, improving fault diagnosis accuracy by 6.146% compared to the best conventional method. Furthermore, the proposed method maintains superior performance even in noisy environments under realistic industrial conditions. This study demonstrates the potential of using exhaust gas temperature from a single sensor signal for data-driven fault detection and provides a scalable foundation for future multi-sensor diagnostic systems. Full article
(This article belongs to the Section Ocean Engineering)
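A hedged sketch of the dual-input idea follows: one branch encodes the raw exhaust-gas-temperature window, the other encodes hand-crafted time-domain features, and the two embeddings are fused for classification. The specific TDFs, layer sizes, and the plain concatenation head (standing in for the paper's attention-module-based classifier) are assumptions.

```python
import torch
import torch.nn as nn

def time_domain_features(x: torch.Tensor) -> torch.Tensor:
    """Simple time-domain features (TDFs) of a temperature window:
    mean, standard deviation, RMS, and peak-to-peak range."""
    return torch.stack([
        x.mean(dim=-1),
        x.std(dim=-1),
        x.pow(2).mean(dim=-1).sqrt(),
        x.amax(dim=-1) - x.amin(dim=-1),
    ], dim=-1)                                        # (B, 4)

class DualBranchFaultClassifier(nn.Module):
    """Illustrative hybrid diagnoser: a 1D CNN encodes the raw temperature
    sequence, an MLP encodes the TDFs, and the two embeddings are fused."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.raw_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),    # -> (B, 32)
        )
        self.tdf_branch = nn.Sequential(nn.Linear(4, 16), nn.ReLU())
        self.classifier = nn.Linear(32 + 16, n_classes)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        raw = self.raw_branch(window.unsqueeze(1))    # (B, 32)
        tdf = self.tdf_branch(time_domain_features(window))
        return self.classifier(torch.cat([raw, tdf], dim=-1))

if __name__ == "__main__":
    temps = torch.randn(4, 256)                       # 4 windows of 256 temperature samples
    print(DualBranchFaultClassifier()(temps).shape)   # torch.Size([4, 3])
```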

22 pages, 2420 KiB  
Article
BiEHFFNet: A Water Body Detection Network for SAR Images Based on Bi-Encoder and Hybrid Feature Fusion
by Bin Han, Xin Huang and Feng Xue
Mathematics 2025, 13(15), 2347; https://doi.org/10.3390/math13152347 - 23 Jul 2025
Abstract
Water body detection in synthetic aperture radar (SAR) imagery plays a critical role in applications such as disaster response, water resource management, and environmental monitoring. However, it remains challenging due to complex background interference in SAR images. To address this issue, a bi-encoder and hybrid feature fusion network (BiEHFFNet) is proposed for achieving accurate water body detection. First, a bi-encoder structure based on ResNet and Swin Transformer is used to jointly extract local spatial details and global contextual information, enhancing feature representation in complex scenarios. Additionally, the convolutional block attention module (CBAM) is employed to suppress irrelevant information in the output features of each ResNet stage. Second, a cross-attention-based hybrid feature fusion (CABHFF) module is designed to interactively integrate local and global features through cross-attention, followed by channel attention to achieve effective hybrid feature fusion, thus improving the model’s ability to capture water structures. Third, a multi-scale content-aware upsampling (MSCAU) module is designed by integrating atrous spatial pyramid pooling (ASPP) with the Content-Aware ReAssembly of FEatures (CARAFE), aiming to enhance multi-scale contextual learning while alleviating feature distortion caused by upsampling. Finally, a composite loss function combining Dice loss and Active Contour loss is used to provide stronger boundary supervision. Experiments conducted on the ALOS PALSAR dataset demonstrate that the proposed BiEHFFNet outperforms existing methods across multiple evaluation metrics, achieving more accurate water body detection. Full article
(This article belongs to the Special Issue Advanced Mathematical Methods in Remote Sensing)
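The cross-attention-based fusion step can be sketched with PyTorch's built-in multi-head attention, as below; this is a generic CABHFF-style illustration, with channel counts and the final merge layer chosen arbitrarily.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion: tokens from a CNN feature map query
    a Transformer feature map (and vice versa), then the two streams are merged."""
    def __init__(self, dim: int = 256, n_heads: int = 8):
        super().__init__()
        self.local_to_global = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, local_map: torch.Tensor, global_map: torch.Tensor) -> torch.Tensor:
        # Both inputs: (B, C, H, W) with matching shapes.
        b, c, h, w = local_map.shape
        loc = local_map.flatten(2).transpose(1, 2)     # (B, HW, C)
        glo = global_map.flatten(2).transpose(1, 2)

        loc_enh, _ = self.local_to_global(loc, glo, glo)   # local queries global context
        glo_enh, _ = self.global_to_local(glo, loc, loc)   # global queries local detail

        fused = self.merge(torch.cat([loc_enh, glo_enh], dim=-1))
        return fused.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    cnn_feat = torch.randn(1, 256, 32, 32)     # e.g. a ResNet stage output
    swin_feat = torch.randn(1, 256, 32, 32)    # e.g. a Swin Transformer stage output
    print(CrossAttentionFusion()(cnn_feat, swin_feat).shape)   # torch.Size([1, 256, 32, 32])
```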

12 pages, 11599 KiB  
Article
Dual pH- and Temperature-Responsive Fluorescent Hybrid Materials Based on Carbon Dot-Grafted Triamino-Tetraphenylethylene/N-Isopropylacrylamide Copolymers
by Huan Liu, Yuxin Ding, Longping Zhou, Shirui Xu and Bo Liao
C 2025, 11(3), 53; https://doi.org/10.3390/c11030053 - 22 Jul 2025
Abstract
Carbon dots (CDs), a class of carbon-based fluorescent nanomaterials, have garnered significant attention due to their tunable optical properties and functional versatility. In this study, we developed a hybrid material by grafting pH- and temperature-responsive copolymers onto CDs via reversible addition-fragmentation chain-transfer (RAFT) polymerization. Triamino-tetraphenylethylene (ATPE) and N-isopropylacrylamide (NIPAM) were copolymerized at varying ratios and covalently linked to CDs, forming a dual-responsive system. Structural characterization using FTIR, 1H NMR, and TEM confirmed the successful grafting of the copolymers onto CDs. The hybrid material exhibited pH-dependent fluorescence changes in acidic aqueous solutions, with emission shifting from 450 nm (attributed to CDs) to 500 nm (aggregation-induced emission, AIE, from ATPE) above a critical pH threshold. Solid films of the hybrid material demonstrated reversible fluorescence quenching under HCl vapor and recovery/enhancement under NH3 vapor, showing excellent fatigue resistance over multiple cycles. Temperature responsiveness was attributed to the thermosensitive poly(NIPAM) segments, with fluorescence intensity increasing above 35 °C due to polymer chain collapse and ATPE aggregation. This work provides a strategy for designing multifunctional hybrid materials with potential applications in recyclable optical pH/temperature sensors. Full article

24 pages, 5200 KiB  
Article
DRFAN: A Lightweight Hybrid Attention Network for High-Fidelity Image Super-Resolution in Visual Inspection Applications
by Ze-Long Li, Bai Jiang, Liang Xu, Zhe Lu, Zi-Teng Wang, Bin Liu, Si-Ye Jia, Hong-Dan Liu and Bing Li
Algorithms 2025, 18(8), 454; https://doi.org/10.3390/a18080454 - 22 Jul 2025
Abstract
Single-image super-resolution (SISR) plays a critical role in enhancing visual quality for real-world applications, including industrial inspection and embedded vision systems. While deep learning-based approaches have made significant progress in SR, existing lightweight SR models often fail to accurately reconstruct high-frequency textures, especially under complex degradation scenarios, resulting in blurry edges and structural artifacts. To address this challenge, we propose a Dense Residual Fused Attention Network (DRFAN), a novel lightweight hybrid architecture designed to enhance high-frequency texture recovery in challenging degradation conditions. Moreover, by coupling convolutional layers and attention mechanisms through gated interaction modules, the DRFAN enhances local details and global dependencies with linear computational complexity, enabling the efficient utilization of multi-level spatial information while effectively alleviating the loss of high-frequency texture details. To evaluate its effectiveness, we conducted ×4 super-resolution experiments on five public benchmarks. The DRFAN achieves the best performance among all compared lightweight models. Visual comparisons show that the DRFAN restores more accurate geometric structures, with up to +1.2 dB/+0.0281 SSIM gain over SwinIR-S on Urban100 samples. Additionally, on a domain-specific rice grain dataset, the DRFAN outperforms SwinIR-S by +0.19 dB in PSNR and +0.0015 in SSIM, restoring clearer textures and grain boundaries essential for industrial quality inspection. The proposed method provides a compelling balance between model complexity and image reconstruction fidelity, making it well-suited for deployment in resource-constrained visual systems and industrial applications. Full article
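A rough sketch of coupling a convolutional branch with a (channel-)attention branch through a learned gate is shown below; it is not the published DRFAN block, and the squeeze-and-excitation-style global branch and gating layout are assumptions chosen to keep the cost linear in spatial size.

```python
import torch
import torch.nn as nn

class GatedConvAttention(nn.Module):
    """Illustrative gated interaction block: a convolutional branch supplies local
    detail, a lightweight channel-attention branch supplies global statistics, and
    a learned gate decides, per position and channel, how to blend the two."""
    def __init__(self, channels: int = 48):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Squeeze-and-excitation-style global branch (linear in H * W).
        self.attn_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.GELU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.conv_branch(x)
        glob = self.attn_branch(x) * x                 # re-weight channels globally
        g = self.gate(torch.cat([local, glob], dim=1)) # learned blending gate
        return x + g * local + (1.0 - g) * glob        # residual fusion

if __name__ == "__main__":
    lr_feat = torch.randn(1, 48, 64, 64)
    print(GatedConvAttention()(lr_feat).shape)   # torch.Size([1, 48, 64, 64])
```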

17 pages, 1927 KiB  
Article
ConvTransNet-S: A CNN-Transformer Hybrid Disease Recognition Model for Complex Field Environments
by Shangyun Jia, Guanping Wang, Hongling Li, Yan Liu, Linrong Shi and Sen Yang
Plants 2025, 14(15), 2252; https://doi.org/10.3390/plants14152252 - 22 Jul 2025
Abstract
To address the challenges of low recognition accuracy and substantial model complexity in crop disease identification models operating in complex field environments, this study proposed a novel hybrid model named ConvTransNet-S, which integrates Convolutional Neural Networks (CNNs) and transformers for crop disease identification tasks. Unlike existing hybrid approaches, ConvTransNet-S uniquely introduces three key innovations: First, a Local Perception Unit (LPU) and Lightweight Multi-Head Self-Attention (LMHSA) modules were introduced to synergistically enhance the extraction of fine-grained plant disease details and model global dependency relationships, respectively. Second, an Inverted Residual Feed-Forward Network (IRFFN) was employed to optimize the feature propagation path, thereby enhancing the model’s robustness against interferences such as lighting variations and leaf occlusions. This novel combination of an LPU, LMHSA, and an IRFFN achieves a dynamic equilibrium between local texture perception and global context modeling—effectively resolving the trade-offs inherent in standalone CNNs or transformers. Finally, through a phased architecture design, efficient fusion of multi-scale disease features is achieved, which enhances feature discriminability while reducing model complexity. The experimental results indicated that ConvTransNet-S achieved a recognition accuracy of 98.85% on the PlantVillage public dataset. This model operates with only 25.14 million parameters, a computational load of 3.762 GFLOPs, and an inference time of 7.56 ms. Testing on a self-built in-field complex scene dataset comprising 10,441 images revealed that ConvTransNet-S achieved an accuracy of 88.53%, which represents improvements of 14.22%, 2.75%, and 0.34% over EfficientNetV2, Vision Transformer, and Swin Transformer, respectively. Furthermore, the ConvTransNet-S model achieved up to 14.22% higher disease recognition accuracy under complex background conditions while reducing the parameter count by 46.8%. This confirms that its unique multi-scale feature mechanism can effectively distinguish disease from background features, providing a novel technical approach for disease diagnosis in complex agricultural scenarios and demonstrating significant application value for intelligent agricultural management. Full article
(This article belongs to the Section Plant Modeling)
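Of the three components, the Inverted Residual Feed-Forward Network is the most standard; a minimal sketch is below, assuming the usual expand / depthwise 3x3 / project layout with a residual connection. Channel counts and the expansion ratio are illustrative.

```python
import torch
import torch.nn as nn

class InvertedResidualFFN(nn.Module):
    """Illustrative IRFFN: expand channels with a 1x1 conv, mix spatially with a
    cheap depthwise 3x3 conv, project back, and add a residual connection."""
    def __init__(self, channels: int = 64, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.ffn = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden),  # depthwise
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ffn(x)

if __name__ == "__main__":
    leaf_feat = torch.randn(2, 64, 56, 56)
    print(InvertedResidualFFN()(leaf_feat).shape)   # torch.Size([2, 64, 56, 56])
```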

31 pages, 7723 KiB  
Article
A Hybrid CNN–GRU–LSTM Algorithm with SHAP-Based Interpretability for EEG-Based ADHD Diagnosis
by Makbal Baibulova, Murat Aitimov, Roza Burganova, Lazzat Abdykerimova, Umida Sabirova, Zhanat Seitakhmetova, Gulsiya Uvaliyeva, Maksym Orynbassar, Aislu Kassekeyeva and Murizah Kassim
Algorithms 2025, 18(8), 453; https://doi.org/10.3390/a18080453 - 22 Jul 2025
Abstract
This study proposes an interpretable hybrid deep learning framework for classifying attention deficit hyperactivity disorder (ADHD) using EEG signals recorded during cognitively demanding tasks. The core architecture integrates convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) layers to jointly capture spatial and temporal dynamics. In addition to the final hybrid architecture, the CNN–GRU–LSTM model alone demonstrates excellent accuracy (99.63%) with minimal variance, making it a strong baseline for clinical applications. To evaluate the role of global attention mechanisms, transformer encoder models with two and three attention blocks, along with a spatiotemporal transformer employing 2D positional encoding, are benchmarked. A hybrid CNN–RNN–transformer model is introduced, combining convolutional, recurrent, and transformer-based modules into a unified architecture. To enhance interpretability, SHapley Additive exPlanations (SHAP) are employed to identify key EEG channels contributing to classification outcomes. Experimental evaluation using stratified five-fold cross-validation demonstrates that the proposed hybrid model achieves superior performance, with average accuracy exceeding 99.98%, F1-scores above 0.9999, and near-perfect AUC and Matthews correlation coefficients. In contrast, transformer-only models, despite high training accuracy, exhibit reduced generalization. SHAP-based analysis confirms the hybrid model’s clinical relevance. This work advances the development of transparent and reliable EEG-based tools for pediatric ADHD screening. Full article
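The CNN-GRU-LSTM baseline can be wired up roughly as follows; the electrode count, window length, and layer sizes are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNGRULSTM(nn.Module):
    """Illustrative CNN-GRU-LSTM pipeline for multi-channel EEG classification:
    1D convolutions extract short-range patterns across electrodes, then stacked
    GRU and LSTM layers model the temporal dynamics."""
    def __init__(self, n_channels: int = 19, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(64, 64, batch_first=True)
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # eeg: (B, channels, time)
        z = self.cnn(eeg).transpose(1, 2)   # (B, T', 64)
        z, _ = self.gru(z)
        z, _ = self.lstm(z)
        return self.fc(z[:, -1])            # classify from the final hidden state

if __name__ == "__main__":
    batch = torch.randn(4, 19, 512)         # 4 segments, 19 electrodes, 512 samples each
    print(CNNGRULSTM()(batch).shape)        # torch.Size([4, 2])
```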

27 pages, 4136 KiB  
Article
Quantum-Enhanced Attention Neural Networks for PM2.5 Concentration Prediction
by Tichen Huang, Yuyan Jiang, Rumeijiang Gan and Fuyu Wang
Modelling 2025, 6(3), 69; https://doi.org/10.3390/modelling6030069 - 21 Jul 2025
Abstract
As industrialization and economic growth accelerate, PM2.5 pollution has become a critical environmental concern. Predicting PM2.5 concentration is challenging due to its nonlinear and complex temporal dynamics, limiting the accuracy and robustness of traditional machine learning models. To enhance prediction accuracy, this study focuses on Ma’anshan City, China and proposes a novel hybrid model (QMEWOA-QCAM-BiTCN-BiLSTM) based on an “optimization first, prediction later” approach. Feature selection using Pearson correlation and RFECV reduces model complexity, while the Whale Optimization Algorithm (WOA) optimizes model parameters. To address the local optima and premature convergence issues of WOA, we introduce a quantum-enhanced multi-strategy improved WOA (QMEWOA) for global optimization. A Quantum Causal Attention Mechanism (QCAM) is incorporated, leveraging Quantum State Mapping (QSM) for higher-order feature extraction. The experimental results show that our model achieves a MedAE of 1.997, MAE of 3.173, MAPE of 10.56%, and RMSE of 5.218, outperforming comparison models. Furthermore, generalization experiments confirm its superior performance across diverse datasets, demonstrating its robustness and effectiveness in PM2.5 concentration prediction. Full article
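The "optimization first" preprocessing (Pearson screening followed by RFECV) can be illustrated with scikit-learn on synthetic stand-in data, as below; the feature names, correlation threshold, and random-forest estimator are assumptions, and the QMEWOA optimizer itself is not sketched.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV

# Synthetic stand-in for hourly air-quality and meteorological records.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 8)),
                 columns=["PM10", "SO2", "NO2", "CO", "O3", "temp", "humidity", "wind"])
y = 0.6 * X["PM10"] + 0.3 * X["NO2"] + pd.Series(rng.normal(scale=0.5, size=500))

# Step 1: Pearson screening - drop features with negligible linear correlation to PM2.5.
pearson = X.corrwith(y)
kept = pearson[pearson.abs() > 0.1].index.tolist()

# Step 2: recursive feature elimination with cross-validation on the survivors.
selector = RFECV(RandomForestRegressor(n_estimators=50, random_state=0), step=1, cv=3)
selector.fit(X[kept], y)
selected = [name for name, keep in zip(kept, selector.support_) if keep]
print("Pearson-screened:", kept)
print("RFECV-selected:  ", selected)
```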

26 pages, 2162 KiB  
Article
Developing Performance Measurement Framework for Sustainable Facility Management (SFM) in Office Buildings Using Bayesian Best Worst Method
by Ayşe Pınar Özyılmaz, Fehmi Samet Demirci, Ozan Okudan and Zeynep Işık
Sustainability 2025, 17(14), 6639; https://doi.org/10.3390/su17146639 - 21 Jul 2025
Abstract
The confluence of financial constraints, climate change mitigation efforts, and evolving user expectations has significantly transformed the concept of facility management (FM). Traditional FM has now evolved to enhance sustainability in the built environment. Sustainable facility management (SFM) can add value to companies, organizations, and governments by balancing the financial, environmental, and social outcomes of the FM processes. The systematic literature review revealed a limited number of studies developing a performance measurement framework for SFM in office buildings and/or other building types in the literature. Given that the lack of this theoretical basis inhibits the effective deployment of SFM practices, this study aims to fill this gap by developing a performance measurement framework for SFM in office buildings. Accordingly, an in-depth literature review was initially conducted to synthesize sustainable performance measurement factors. Next, a series of focus group discussion (FGD) sessions were organized to refine and verify the factors and develop a novel performance measurement framework for SFM. Lastly, consistency analysis, the Bayesian best worst method (BBWM), and sensitivity analysis were implemented to determine the priorities of the factors. The proposed framework introduces the combined use of two performance measurement mechanisms: continuous performance measurement and comprehensive performance measurement. The continuous performance measurement is conducted using high-priority factors. On the other hand, the comprehensive performance measurement is conducted with all the factors proposed in this study. Also, the BBWM results showed that “Energy-efficient material usage”, “Percentage of energy generated from renewable energy resources to total energy consumption”, and “Promoting hybrid or remote work conditions” are the top three factors, with scores of 0.0741, 0.0598, and 0.0555, respectively. Moreover, experts should also pay the utmost attention to factors related to waste management, indoor air quality, thermal comfort, and H&S measures. In addition to its theoretical contributions, the paper makes practical contributions by enabling decision makers to measure the SFM performance of office buildings and test the outcomes of their managerial processes in terms of performance. Full article
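As a purely illustrative aggregation (not the Bayesian BWM itself), the snippet below folds the reported top-three factor weights together with hypothetical audit scores into a single weighted SFM index, the kind of score the continuous performance measurement would track.

```python
# Illustrative aggregation only: combine factor weights (e.g. from the BBWM) with
# normalized factor scores for one office building into a single SFM index.
weights = {
    "Energy-efficient material usage": 0.0741,
    "Share of renewable energy in total consumption": 0.0598,
    "Promoting hybrid or remote work conditions": 0.0555,
    # ... the remaining factors and weights from the full framework would go here
}
scores = {  # hypothetical audit scores on a 0-1 scale
    "Energy-efficient material usage": 0.80,
    "Share of renewable energy in total consumption": 0.45,
    "Promoting hybrid or remote work conditions": 0.90,
}

sfm_index = sum(weights[f] * scores[f] for f in weights) / sum(weights.values())
print(f"Weighted SFM performance index: {sfm_index:.3f}")   # ~0.719 for these scores
```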

22 pages, 1805 KiB  
Article
A Hybrid Semantic and Multi-Attention Mechanism Approach for Detecting Vulnerabilities in Smart Contract Code
by Zhenxiang He, Yanling Liu and Xiaohui Sun
Symmetry 2025, 17(7), 1161; https://doi.org/10.3390/sym17071161 - 21 Jul 2025
Abstract
Driven by blockchain technology, numerous industries are increasingly adopting smart contracts to enhance efficiency, reduce costs, and improve transparency. As a result, ensuring the security of smart contracts has become critical. Traditional detection methods often suffer from low efficiency, are prone to missing complex vulnerabilities, and have limited accuracy. Although deep learning approaches address some of these challenges, issues with both accuracy and efficiency remain in current solutions. To overcome these limitations, this paper proposes a symmetry-inspired solution that harmonizes bidirectional and generative semantic patterns. First, we generate distinct feature extraction segments for different vulnerabilities. We then use the Bidirectional Encoder Representations from Transformers (BERT) module to extract original semantic features from these segments and the Generative Pre-trained Transformer (GPT) module to extract generative semantic features. Finally, the two sets of semantic features are fused using a multi-attention mechanism and input into a classifier for result prediction. Our method was tested on three datasets, achieving F1 scores of 93.33%, 93.65%, and 92.31%, respectively. The results demonstrate that our approach outperforms most existing methods in smart contract detection. Full article
(This article belongs to the Section Computer)
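The dual-encoder wiring can be sketched with the Hugging Face transformers library, as below; the generic English BERT/GPT-2 checkpoints, the Solidity snippet, and the mean-pool classifier head are placeholders for the paper's fine-tuned models and multi-attention fusion.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Backbones are placeholders: the paper works with BERT/GPT-style encoders on smart
# contract code; generic English checkpoints are used here only to show the wiring.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
gpt_tok = AutoTokenizer.from_pretrained("gpt2")
bert = AutoModel.from_pretrained("bert-base-uncased")
gpt = AutoModel.from_pretrained("gpt2")

fusion = nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)
classifier = nn.Linear(768, 2)   # vulnerable / not vulnerable

snippet = 'function withdraw() public { payable(msg.sender).transfer(balances[msg.sender]); }'

with torch.no_grad():
    b_feat = bert(**bert_tok(snippet, return_tensors="pt")).last_hidden_state   # (1, Tb, 768)
    g_feat = gpt(**gpt_tok(snippet, return_tensors="pt")).last_hidden_state     # (1, Tg, 768)
    fused, _ = fusion(b_feat, g_feat, g_feat)    # BERT tokens attend over GPT features
    logits = classifier(fused.mean(dim=1))       # pool the fused tokens and classify
print(logits.shape)   # torch.Size([1, 2])
```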