Search Results (16,040)

Search Parameters:
Keywords = network classification

22 pages, 7086 KiB  
Article
Gas Leak Detection and Leakage Rate Identification in Underground Utility Tunnels Using a Convolutional Recurrent Neural Network
by Ziyang Jiang, Canghai Zhang, Zhao Xu and Wenbin Song
Appl. Sci. 2025, 15(14), 8022; https://doi.org/10.3390/app15148022 - 18 Jul 2025
Abstract
An underground utility tunnel (UUT) is essential for the efficient use of urban underground space. However, current maintenance systems rely on patrol personnel and professional equipment. This study explores industrial detection methods for identifying and monitoring natural gas leaks in UUTs. Data were acquired via infrared thermal imaging gas experiments, and a dataset was established. To address the low resolution of existing imaging devices, video super-resolution (VSR) was used to improve data quality. Based on a convolutional recurrent neural network (CRNN), image features at each moment were extracted and the time series was modeled, realizing a risk-level classification mechanism based on automatic classification of the leakage rate. The experimental results show that with a sliding window of 10 frames, the CRNN achieved its highest classification accuracy, 0.98. This method improves early-warning precision and response efficiency, offering practical technical support for UUT maintenance management.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)
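The sliding-window CRNN pattern this abstract describes can be sketched as follows: a small CNN encodes each frame of the window, a recurrent layer models the frame sequence, and a linear head assigns a leakage-rate class. This is a minimal illustration; the layer sizes, the GRU choice, and the three-class output are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """CNN per frame + GRU over a sliding window of frames (illustrative sizes)."""
    def __init__(self, num_classes=3, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                              # x: (batch, frames, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)   # encode each frame: (b*t, 32)
        feats = feats.view(b, t, -1)                   # restore sequence: (b, t, 32)
        _, h = self.rnn(feats)                         # model the window over time
        return self.head(h[-1])                        # class logits: (b, num_classes)

logits = CRNN()(torch.randn(4, 10, 1, 64, 64))         # a 10-frame sliding window
```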
22 pages, 3235 KiB  
Article
Advanced Multi-Scale CNN-BiLSTM for Robust Photovoltaic Fault Detection
by Xiaojuan Zhang, Bo Jing, Xiaoxuan Jiao and Ruixu Yao
Sensors 2025, 25(14), 4474; https://doi.org/10.3390/s25144474 - 18 Jul 2025
Abstract
The increasing deployment of photovoltaic (PV) systems necessitates robust fault detection mechanisms to ensure operational reliability and safety. Conventional approaches, however, struggle in complex industrial environments characterized by high noise, data incompleteness, and class imbalance. This study proposes an Advanced CNN-BiLSTM architecture integrating multi-scale feature extraction with hierarchical attention to enhance PV fault detection. The framework employs four parallel CNN branches with kernel sizes of 3, 7, 15, and 31 to capture temporal patterns across various time scales. These features are integrated by an adaptive feature fusion network that uses multi-head attention. A two-layer bidirectional LSTM with a temporal attention mechanism processes the fused features for final classification. Comprehensive evaluation on the GPVS-Faults dataset using a progressive-difficulty validation framework demonstrates exceptional performance improvements. Under extreme industrial conditions, the proposed method achieves 83.25% accuracy, a 119.48% relative improvement over the baseline CNN-BiLSTM (37.93%). Ablation studies reveal that the multi-scale CNN contributes 28.0% of the total performance improvement, while adaptive feature fusion accounts for 22.0%. Furthermore, the proposed method demonstrates superior robustness under severe noise (σ = 0.20), high levels of missing data (15%), and significant outlier contamination (8%). These characteristics make the architecture highly suitable for real-world industrial deployment and establish a new paradigm for temporal feature fusion in renewable energy fault detection.
(This article belongs to the Section Fault Diagnosis & Sensors)
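A minimal PyTorch sketch of the multi-scale idea: four parallel 1-D convolution branches with kernel sizes 3, 7, 15, and 31, concatenated and passed to a two-layer BiLSTM. The attention-based fusion and temporal attention of the paper are omitted; channel widths and the class count are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleCNNBiLSTM(nn.Module):
    """Parallel multi-scale 1-D conv branches feeding a two-layer BiLSTM."""
    def __init__(self, in_ch=1, width=16, num_classes=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, width, k, padding=k // 2) for k in (3, 7, 15, 31)
        ])
        self.lstm = nn.LSTM(4 * width, 64, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, num_classes)

    def forward(self, x):                                        # x: (batch, ch, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (b, 4*width, t)
        out, _ = self.lstm(feats.transpose(1, 2))                # (b, t, 128)
        return self.head(out[:, -1])                             # classify last step

logits = MultiScaleCNNBiLSTM()(torch.randn(2, 1, 256))
```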
28 pages, 8982 KiB  
Article
Decision-Level Multi-Sensor Fusion to Improve Limitations of Single-Camera-Based CNN Classification in Precision Farming: Application in Weed Detection
by Md. Nazmuzzaman Khan, Adibuzzaman Rahi, Mohammad Al Hasan and Sohel Anwar
Computation 2025, 13(7), 174; https://doi.org/10.3390/computation13070174 - 18 Jul 2025
Abstract
The United States leads the world in corn production and consumption, with an estimated value of USD 50 billion per year. There is a pressing need for novel, efficient techniques that enhance the identification and eradication of weeds in a manner that is both environmentally sustainable and economically advantageous. Weed classification for autonomous agricultural robots is a challenging task for a single-camera-based system due to noise, vibration, and occlusion. To address this issue, this paper presents a multi-camera-based system with decision-level sensor fusion that improves on the limitations of a single-camera-based system. A convolutional neural network (CNN) pre-trained on the ImageNet dataset was re-trained on a limited weed dataset to classify three weed species frequently encountered in corn fields: Xanthium strumarium (Common Cocklebur), Amaranthus retroflexus (Redroot Pigweed), and Ambrosia trifida (Giant Ragweed). The test results showed that the re-trained VGG16 with a transfer-learning-based classifier exhibited acceptable accuracy (99% training, 97% validation, 94% testing) and an inference time suitable for real-time classification from a video feed. However, the accuracy of CNN-based classification from a single camera's video feed was found to deteriorate due to noise, vibration, and partial occlusion of weeds, and is not always sufficient for the spray system of an agricultural robot (AgBot). To improve classification accuracy and overcome the shortcomings of single-sensor CNN classification, an improved Dempster–Shafer (DS)-based decision-level multi-sensor fusion algorithm was developed and implemented. The proposed algorithm improves on CNN-based weed classification when the weed is partially occluded. It can also detect a faulty sensor within an array of sensors and improves overall classification accuracy by penalizing evidence from the faulty sensor. Overall, the proposed fusion algorithm showed robust results in challenging scenarios, overcoming the limitations of a single-sensor-based system.
(This article belongs to the Special Issue Moving Object Detection Using Computational Methods and Modeling)
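Dempster's rule of combination, the core of DS-based decision-level fusion, reduces to the following when the hypotheses are the singleton weed classes; the fault-detection and evidence-penalization logic of the paper's improved algorithm is not shown, and the example masses are hypothetical.

```python
import numpy as np

def ds_combine(m1, m2):
    """Dempster's rule for mass functions over singleton classes only
    (no compound hypotheses), a simplified form of decision-level fusion."""
    joint = np.outer(m1, m2)
    agreement = np.trace(joint)          # mass where both cameras agree
    if agreement <= 0.0:
        raise ValueError("total conflict; evidence cannot be combined")
    return np.diag(joint) / agreement    # normalize the agreeing mass

# Hypothetical per-camera class masses for the 3 weed species:
cam1 = np.array([0.7, 0.2, 0.1])
cam2 = np.array([0.6, 0.3, 0.1])
print(ds_combine(cam1, cam2))            # fused belief vector, sums to 1
```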
15 pages, 4874 KiB  
Article
A Novel 3D Convolutional Neural Network-Based Deep Learning Model for Spatiotemporal Feature Mapping for Video Analysis: Feasibility Study for Gastrointestinal Endoscopic Video Classification
by Mrinal Kanti Dhar, Mou Deb, Poonguzhali Elangovan, Keerthy Gopalakrishnan, Divyanshi Sood, Avneet Kaur, Charmy Parikh, Swetha Rapolu, Gianeshwaree Alias Rachna Panjwani, Rabiah Aslam Ansari, Naghmeh Asadimanesh, Shiva Sankari Karuppiah, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
J. Imaging 2025, 11(7), 243; https://doi.org/10.3390/jimaging11070243 - 18 Jul 2025
Abstract
Accurate analysis of medical videos remains a major challenge in deep learning (DL) due to the need for effective spatiotemporal feature mapping that captures both spatial detail and temporal dynamics. Despite advances in DL, most existing models in medical AI focus on static images, overlooking critical temporal cues present in video data. To bridge this gap, a novel DL-based framework is proposed for spatiotemporal feature extraction from medical video sequences. As a feasibility use case, this study focuses on gastrointestinal (GI) endoscopic video classification. A 3D convolutional neural network (CNN) is developed to classify upper and lower GI endoscopic videos using the HyperKvasir dataset, which contains 314 lower and 60 upper GI videos. To address data imbalance, 60 matched pairs of videos are randomly selected across 20 experimental runs. Videos are resized to 224 × 224, and the 3D CNN captures spatiotemporal information. A 3D version of the parallel spatial and channel squeeze-and-excitation module (P-scSE) is implemented, and a new block called the residual with parallel attention (RPA) block is proposed by combining P-scSE3D with a residual block. To reduce computational complexity, a (2 + 1)D convolution is used in place of full 3D convolution. The model achieves an average accuracy of 0.933, precision of 0.932, recall of 0.944, F1-score of 0.935, and AUC of 0.933. Integrating P-scSE3D increased the F1-score by 7%. This preliminary work opens avenues for exploring various GI endoscopic video-based prospective studies.
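The (2 + 1)D factorization mentioned in the abstract replaces one full 3D convolution with a 2D spatial convolution followed by a 1D temporal convolution, which cuts parameters and computation. A minimal sketch, with illustrative channel sizes:

```python
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    """(2+1)D factorization of a 3-D conv: spatial (1,k,k) then temporal (k,1,1)."""
    def __init__(self, in_ch, out_ch, k=3, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, (1, k, k),
                                 padding=(0, k // 2, k // 2))
        self.temporal = nn.Conv3d(mid_ch, out_ch, (k, 1, 1),
                                  padding=(k // 2, 0, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                # x: (batch, ch, frames, H, W)
        return self.temporal(self.relu(self.spatial(x)))

y = Conv2Plus1D(3, 16)(torch.randn(1, 3, 8, 224, 224))  # -> (1, 16, 8, 224, 224)
```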
36 pages, 4475 KiB  
Article
Technical Condition Assessment of Light-Alloy Wheel Rims Based on Acoustic Parameter Analysis Using a Neural Network
by Arkadiusz Rychlik
Sensors 2025, 25(14), 4473; https://doi.org/10.3390/s25144473 - 18 Jul 2025
Abstract
Light alloy wheel rims, despite their widespread use, remain vulnerable to fatigue-related defects and mechanical damage. This study presents a method for assessing their technical condition based on acoustic parameter analysis and classification using a deep neural network. Diagnostic data were collected using a custom-developed ADF (Acoustic Diagnostic Features) system, incorporating the reverberation time (T60), sound absorption coefficient (α), and acoustic energy (E). These parameters were measured during laboratory fatigue testing on a Wheel Resistance Test Rig (WRTR) and from used rims obtained under real-world operating conditions. The neural network was trained on WRTR data and subsequently employed to classify field samples as either “serviceable” or “unserviceable”. Results confirmed the high effectiveness of the proposed method, including its robustness in detecting borderline cases, as demonstrated in a case study involving a mechanically damaged rim. The developed approach offers potential support for diagnostic decision-making in workshop settings and may, in the future, serve as a foundation for sensor-based real-time rim condition monitoring.
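As a rough illustration of the classification step, a small network can map the three acoustic parameters (T60, α, E) to a serviceable/unserviceable label. The feature values below are hypothetical stand-ins, not measurements from the study, and the sklearn MLP is only a stand-in for the paper's deep network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training rows: [T60 (s), absorption coefficient alpha, energy E]
X = np.array([[0.42, 0.08, 1.9], [0.45, 0.07, 2.1],   # serviceable rims
              [0.61, 0.15, 1.2], [0.66, 0.17, 1.0]])  # unserviceable rims
y = np.array([0, 0, 1, 1])                            # 0 = serviceable

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X, y)
print(clf.predict([[0.5, 0.1, 1.6]]))   # classify a field-measured rim
```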
18 pages, 35958 KiB  
Article
OpenFungi: A Machine Learning Dataset for Fungal Image Recognition Tasks
by Anca Cighir, Roland Bolboacă and Teri Lenard
Life 2025, 15(7), 1132; https://doi.org/10.3390/life15071132 - 18 Jul 2025
Abstract
A key factor driving advances in machine learning applications in medicine is the availability of publicly accessible datasets. Past studies have reported promising results but are not reproducible because the data used were closed or proprietary, or the authors were unable to publish them. The current study aims to narrow this gap for researchers who focus on image recognition tasks in microbiology, specifically fungal identification and classification. An open database named OpenFungi is made available in this work; it contains high-quality images of macroscopic and microscopic fungal genera. The fungal cultures were grown from food products such as green leaf spices and cereals. The quality of the dataset is demonstrated by solving a classification problem with a simple convolutional neural network. A thorough experimental analysis was conducted in which six performance metrics were measured across three distinct validation scenarios. In the fungal species classification task, the model achieved an overall accuracy of 99.79%, a true-positive rate of 99.55%, a true-negative rate of 99.96%, and an F1 score of 99.63% on the macroscopic dataset. On the microscopic dataset, the model reached 97.82% accuracy, a 94.89% true-positive rate, a 99.19% true-negative rate, and a 95.20% F1 score. The results also show that the model maintains promising performance even when trained on smaller datasets, highlighting its robustness and generalization capabilities.
(This article belongs to the Section Microbiology)
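The four reported metrics (accuracy, true-positive rate, true-negative rate, F1 score) can be computed from predictions as below; the labels here are hypothetical.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Hypothetical ground truth and predictions for a binary genus label.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 0])

tpr = recall_score(y_true, y_pred)               # true-positive rate (recall)
tnr = recall_score(y_true, y_pred, pos_label=0)  # true-negative rate
print(accuracy_score(y_true, y_pred), tpr, tnr, f1_score(y_true, y_pred))
```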
19 pages, 4026 KiB  
Article
The Fusion of Focused Spectral and Image Texture Features: A New Exploration of the Nondestructive Detection of Degeneration Degree in Pleurotus geesteranus
by Yifan Jiang, Jin Shang, Yueyue Cai, Shiyang Liu, Ziqin Liao, Jie Pang, Yong He and Xuan Wei
Agriculture 2025, 15(14), 1546; https://doi.org/10.3390/agriculture15141546 - 18 Jul 2025
Abstract
The degradation of edible fungi can lead to decreased cultivation yield and economic losses. This study presents a nondestructive method for detecting strain degradation based on the fusion of hyperspectral technology and image texture features. Hyperspectral and microscopic image data were acquired from Pleurotus geesteranus strains exhibiting varying degrees of degradation, followed by preprocessing using Savitzky–Golay smoothing (SG), multivariate scattering correction (MSC), and standard normal variate transformation (SNV). Spectral features were extracted by the successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and principal component analysis (PCA), while texture features were derived using gray-level co-occurrence matrix (GLCM) and local binary pattern (LBP) models. The spectral and texture features were then fused and used to construct a classification model based on convolutional neural networks (CNNs). The results showed that combining hyperspectral and image texture features significantly improved classification accuracy. Among the tested models, the CARS + LBP-CNN configuration achieved the best performance, with an overall accuracy of 95.6% and a kappa coefficient of 0.96. This approach provides a new technical solution for the nondestructive detection of strain degradation in Pleurotus geesteranus.
(This article belongs to the Section Agricultural Product Quality and Safety)
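A sketch of the LBP texture branch and feature-level fusion using scikit-image; the LBP parameters and the concatenation-style fusion are common choices, not necessarily the paper's.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform LBP histogram of a grayscale image, one texture descriptor
    that can be fused with spectral features (common default parameters)."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Fuse with a hypothetical spectral feature vector by concatenation:
gray = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in microscopic image
spectral = np.random.rand(20)                            # stand-in CARS-selected bands
fused = np.concatenate([spectral, lbp_histogram(gray)])  # input to the classifier
```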
13 pages, 2199 KiB  
Article
Non-Invasive Composition Identification in Organic Solar Cells via Deep Learning
by Yi-Hsun Chang, You-Lun Zhang, Cheng-Hao Cheng, Shu-Han Wu, Cheng-Han Li, Su-Yu Liao, Zi-Chun Tseng, Ming-Yi Lin and Chun-Ying Huang
Nanomaterials 2025, 15(14), 1112; https://doi.org/10.3390/nano15141112 - 17 Jul 2025
Abstract
Accurate identification of active-layer compositions in organic photovoltaic (OPV) devices often relies on invasive techniques such as electrical measurements or material extraction, which risk damaging the device. In this study, we propose a non-invasive classification approach based on simulated full-device absorption spectra. To account for fabrication-related variability, the active-layer thickness was varied by more than ±15% around the optimal value, creating a realistic and diverse training dataset. A multilayer perceptron (MLP) neural network was applied with various activation functions, optimization algorithms, and data split ratios. The optimized model achieved classification accuracies exceeding 99% on both training and testing sets, with minimal sensitivity to random initialization or data partitioning. These results demonstrate the potential of applying deep learning to spectral data for reliable, non-destructive OPV composition classification, paving the way for integration into automated manufacturing diagnostics and quality control workflows.
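A minimal sketch of the approach under stated assumptions: synthetic absorption spectra with ±15% thickness jitter stand in for the simulated full-device spectra, and an sklearn MLP classifies the composition.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
wl = np.linspace(300, 900, 120)                      # wavelength grid (nm)

def spectra(center, n):
    """Toy absorption spectra: a Gaussian band scaled by thickness jitter."""
    thick = 1 + rng.uniform(-0.15, 0.15, (n, 1))     # ±15% thickness variation
    base = np.exp(-((wl - center) / 120) ** 2)
    return base * thick + rng.normal(0, 0.01, (n, wl.size))

X = np.vstack([spectra(550, 200), spectra(620, 200)])
y = np.repeat([0, 1], 200)                           # two composition classes

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
print(mlp.fit(Xtr, ytr).score(Xte, yte))             # test-set accuracy
```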
17 pages, 2699 KiB  
Article
How to Talk to Your Classifier: Conditional Text Generation with Radar–Visual Latent Space
by Julius Ott, Huawei Sun, Lorenzo Servadei and Robert Wille
Sensors 2025, 25(14), 4467; https://doi.org/10.3390/s25144467 - 17 Jul 2025
Abstract
Many radar applications rely primarily on visual classification for their evaluations. However, new research is integrating textual descriptions alongside visual input and showing that such multimodal fusion improves contextual understanding. A critical issue in this area is the effective alignment of coded text with corresponding images. To this end, our paper presents an adversarial training framework that generates descriptive text from the latent space of a visual radar classifier. Our quantitative evaluations show that this dual-task approach maintains a robust classification accuracy of 98.3% despite the inclusion of Gaussian-distributed latent spaces. Beyond these numerical validations, we conduct a qualitative study of the text output in relation to the classifier’s predictions. This analysis highlights the correlation between the generated descriptions and the assigned categories and provides insight into the classifier’s visual interpretation processes, particularly in the context of normally uninterpretable radar data.
26 pages, 5414 KiB  
Article
Profile-Based Building Detection Using Convolutional Neural Network and High-Resolution Digital Surface Models
by Behaeen Farajelahi and Hossein Arefi
Remote Sens. 2025, 17(14), 2496; https://doi.org/10.3390/rs17142496 - 17 Jul 2025
Abstract
This research presents a novel method for detecting building roof types using deep learning models based on height profiles from high-resolution digital surface models. While deep learning has proven effective in digit, handwriting, and time series classification, this study focuses on the emerging and crucial area of height profile detection for building roof type classification. We propose an approach to automatically generate, classify, and detect building roof types using height profiles derived from normalized digital surface models. We present three distinct methods to detect seven roof types from two height profiles of the building cross-section. The first two methods detect the building roof type from two-dimensional (2D) height profiles: two binary images and a two-band spectral image. The third, vector-based method detects the building roof type from two one-dimensional (1D) height profiles represented as two 1D vectors. We trained various one- and two-dimensional convolutional neural networks on these 1D and 2D height profiles. The DenseNet201 network could directly detect the roof type of a building from two height profiles stored as a two-band spectral image with an average accuracy of 97%, even in the presence of consecutive chimneys, dormers, and noise. The strengths of this approach include the generation of a large, detailed, and storage-efficient labeled height profile dataset; the development of a robust classification method using both 1D and 2D height profiles; and an automated workflow that enhances building roof type detection.
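One plausible reading of the two-band representation: resample each of the two cross-section height profiles to a fixed length, normalize, and stack them as the two bands of an image. The resampling, normalization, and row-broadcasting choices here are assumptions.

```python
import numpy as np

def profiles_to_two_band_image(p1, p2, size=64):
    """Pack two 1-D building cross-section height profiles into a two-band
    image of fixed size, the kind of input a 2-D CNN such as DenseNet201
    could consume (preprocessing choices are assumptions)."""
    bands = []
    for p in (p1, p2):
        x_old = np.linspace(0, 1, len(p))
        x_new = np.linspace(0, 1, size)
        r = np.interp(x_new, x_old, p)                # resample to fixed length
        r = (r - r.min()) / (np.ptp(r) + 1e-9)        # normalize heights to [0,1]
        bands.append(np.tile(r, (size, 1)))           # broadcast rows to 2-D
    return np.stack(bands, axis=-1)                   # shape: (size, size, 2)

gable = np.concatenate([np.linspace(0, 5, 50), np.linspace(5, 0, 50)])
flat = np.full(100, 3.0)
img = profiles_to_two_band_image(gable, flat)         # e.g. a gable-roof sample
```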
21 pages, 5633 KiB  
Article
Duck Egg Crack Detection Using an Adaptive CNN Ensemble with Multi-Light Channels and Image Processing
by Vasutorn Chaowalittawin, Woranidtha Krungseanmuang, Posathip Sathaporn and Boonchana Purahong
Appl. Sci. 2025, 15(14), 7960; https://doi.org/10.3390/app15147960 - 17 Jul 2025
Abstract
Duck egg quality classification is critical in farms, hatcheries, and salted egg processing plants, where cracked eggs must be identified before further processing or distribution. However, duck eggs present a unique challenge due to their white eggshells, which make cracks difficult to detect visually. In current practice, human inspectors use standard white light for crack detection, and many researchers have focused primarily on improving detection algorithms without addressing lighting limitations. Therefore, this paper presents duck egg crack detection using an adaptive convolutional neural network (CNN) model ensemble with multi-light channels. We began by developing a portable crack detection system capable of controlling various light sources to determine the optimal lighting conditions for crack visibility. A total of 23,904 images were collected and evenly distributed across four lighting channels (red, green, blue, and white), with 1494 images per channel. The dataset was then split into 836 images for training, 209 images for validation, and 449 images for testing per lighting condition. To enhance image quality prior to model training, several image pre-processing techniques were applied, including normalization, histogram equalization (HE), and contrast-limited adaptive histogram equalization (CLAHE). The Adaptive MobileNetV2 was employed to evaluate the performance of crack detection under different lighting and pre-processing conditions. The results indicated that, under red lighting, the model achieved 100.00% accuracy, precision, recall, and F1-score across almost all pre-processing methods. Under green lighting, the highest accuracy of 99.80% was achieved using the image normalization method. For blue lighting, the model reached 100.00% accuracy with the HE method. Under white lighting, the highest accuracy of 99.83% was achieved using both the original and HE methods.
(This article belongs to the Section Computing and Artificial Intelligence)
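The three pre-processing variants the study compares are standard OpenCV operations; a sketch follows (the file name and the CLAHE parameters are placeholders, not values from the paper):

```python
import cv2
import numpy as np

# Hypothetical input image from one lighting channel.
img = cv2.imread("egg.png", cv2.IMREAD_GRAYSCALE)

# Contrast-limited adaptive histogram equalization (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

he = cv2.equalizeHist(img)                  # plain histogram equalization (HE)
norm = img.astype(np.float32) / 255.0       # simple intensity normalization
```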
21 pages, 5313 KiB  
Article
MixtureRS: A Mixture of Expert Network Based Remote Sensing Land Classification
by Yimei Liu, Changyuan Wu, Minglei Guan and Jingzhe Wang
Remote Sens. 2025, 17(14), 2494; https://doi.org/10.3390/rs17142494 - 17 Jul 2025
Abstract
Accurate land-use classification is critical for urban planning and environmental monitoring, yet effectively integrating heterogeneous data sources such as hyperspectral imagery and laser radar (LiDAR) remains challenging. To address this, we propose MixtureRS, a compact multimodal network that effectively integrates hyperspectral imagery and LiDAR data for land-use classification. Our approach employs a 3-D plus heterogeneous convolutional stack to extract rich spectral–spatial features, which are then tokenized and fused via a cross-modality transformer. To enhance model capacity without incurring significant computational overhead, we replace conventional dense feed-forward blocks with a sparse Mixture-of-Experts (MoE) layer that selectively activates the most relevant experts for each token. Evaluated on a 15-class urban benchmark, MixtureRS achieves an overall accuracy of 88.6%, an average accuracy of 90.2%, and a Kappa coefficient of 0.877, outperforming the best homogeneous transformer by over 12 percentage points. Notably, the largest improvements are observed in water, railway, and parking categories, highlighting the advantages of incorporating height information and conditional computation. These results demonstrate that conditional, expert-guided fusion is a promising and efficient strategy for advancing multimodal remote sensing models.
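A minimal sketch of the sparse Mixture-of-Experts block that replaces a dense feed-forward layer: a router scores experts per token, only the top-k experts run, and their outputs are gate-weighted. Sizes and k are illustrative, not the MixtureRS configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Mixture-of-Experts feed-forward layer with top-k token routing."""
    def __init__(self, dim=64, hidden=128, n_experts=4, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):                         # x: (tokens, dim)
        gates = F.softmax(self.router(x), dim=-1)
        topv, topi = gates.topk(self.k, dim=-1)   # keep the k most relevant experts
        topv = topv / topv.sum(-1, keepdim=True)  # renormalize kept gates
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e         # tokens routed to expert e
                if mask.any():
                    out[mask] += topv[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

y = SparseMoE()(torch.randn(10, 64))              # 10 tokens, width 64
```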
24 pages, 2281 KiB  
Article
Multilayer Network Modeling for Brand Knowledge Discovery: Integrating TF-IDF and TextRank in Heterogeneous Semantic Space
by Peng Xu, Rixu Zang, Zongshui Wang and Zhuo Sun
Information 2025, 16(7), 614; https://doi.org/10.3390/info16070614 - 17 Jul 2025
Abstract
In the era of homogenized competition, brand knowledge has become a critical factor that influences consumer purchasing decisions. However, traditional single-layer network models fail to capture the multi-dimensional semantic relationships embedded in brand-related textual data. To address this gap, this study proposes a BKMN framework integrating TF-IDF and TextRank algorithms for comprehensive brand knowledge discovery. By analyzing 19,875 consumer reviews of a mobile phone brand from the JD website, we constructed a tri-layer network comprising TF-IDF-derived keywords, TextRank-derived keywords, and their overlapping nodes. The model incorporates co-occurrence matrices and centrality metrics (degree, closeness, betweenness, eigenvector) to identify semantic hubs and interlayer associations. The results reveal that consumers prioritize attributes such as “camera performance”, “operational speed”, “screen quality”, and “battery life”. Notably, the overlap layer exhibits the highest node centrality, indicating convergent consumer focus across algorithms. The network demonstrates small-world characteristics (average path length = 1.627) with strong clustering (average clustering coefficient = 0.848), reflecting cohesive consumer discourse around key features. This study also proposes the Mul-LSTM model for sentiment analysis of the reviews; it achieves 93% sentiment classification accuracy and reveals that consumers hold predominantly positive attitudes toward the brand's phones, providing a quantitative basis for enterprises to understand users' emotional tendencies and optimize brand word-of-mouth management. This research advances brand knowledge modeling by synergizing heterogeneous algorithms and multilayer network analysis. Its practical implications include enabling enterprises to pinpoint competitive differentiators and optimize marketing strategies. Future work could extend the framework to incorporate sentiment dynamics and cross-domain applications in the smart home or cosmetics industries.
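A toy sketch of the pipeline's building blocks with hypothetical reviews: TF-IDF keyword scoring, a keyword co-occurrence graph, and the centrality metrics the study reports (the TextRank layer and the multilayer construction are omitted).

```python
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-ins for consumer reviews.
reviews = ["camera performance is great and battery life is long",
           "screen quality is sharp but battery life is short",
           "operational speed and camera performance impress"]

# TF-IDF keyword layer: the top-weighted terms over the corpus.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(reviews)
scores = dict(zip(vec.get_feature_names_out(), tfidf.sum(axis=0).A1))
keywords = sorted(scores, key=scores.get, reverse=True)[:6]

# Co-occurrence graph: link keywords that appear in the same review.
G = nx.Graph()
for doc in reviews:
    present = [w for w in keywords if w in doc]
    G.add_edges_from(itertools.combinations(present, 2))

print(nx.degree_centrality(G))
print(nx.closeness_centrality(G))
print(nx.betweenness_centrality(G))
```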
15 pages, 3364 KiB  
Article
Potential Benefits of Polar Transformation of Time–Frequency Electrocardiogram (ECG) Signals for Evaluation of Cardiac Arrhythmia
by Hanbit Kang, Daehyun Kwon and Yoon-Chul Kim
Appl. Sci. 2025, 15(14), 7980; https://doi.org/10.3390/app15147980 - 17 Jul 2025
Abstract
There is a lack of studies on the effectiveness of polar-transformed spectrograms in the visualization and prediction of cardiac arrhythmias from electrocardiogram (ECG) data. In this study, single-lead ECG waveforms were converted into two-dimensional rectangular time–frequency spectrograms and polar time–frequency spectrograms. Three pre-trained convolutional neural network (CNN) models (ResNet50, MobileNet, and DenseNet121) served as baseline networks for model development and testing. Prediction performance and visualization quality were evaluated across various image resolutions. The trade-offs between image resolution and model capacity were quantitatively analyzed. Polar-transformed spectrograms demonstrated superior delineation of R-R intervals at lower image resolutions (e.g., 96 × 96 pixels) compared to conventional spectrograms. For deep-learning-based classification of cardiac arrhythmias, polar-transformed spectrograms achieved comparable accuracy to conventional spectrograms across all evaluated resolutions. The results suggest that polar-transformed spectrograms are particularly advantageous for deep CNN predictions at lower resolutions, making them suitable for edge computing applications where the reduced use of computing resources, such as memory and power consumption, is desirable.
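A sketch of the rectangular-to-polar mapping under stated assumptions: time maps to angle and frequency to radius. The synthetic signal stands in for a single-lead ECG, and the mapping details are one plausible reading of the paper's transform, not its exact formulation.

```python
import numpy as np
from scipy.signal import spectrogram

# Rectangular time-frequency spectrogram of a synthetic ECG stand-in.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # hypothetical
f, tt, S = spectrogram(ecg, fs=fs, nperseg=128)

def to_polar(S, size=96):
    """Map a rectangular spectrogram onto a polar grid: time -> angle,
    frequency -> radius (a plausible reading of the transform)."""
    out = np.zeros((size, size))
    cy = cx = (size - 1) / 2
    for yy in range(size):
        for xx in range(size):
            r = np.hypot(yy - cy, xx - cx) / cy                       # radius
            a = (np.arctan2(yy - cy, xx - cx) + np.pi) / (2 * np.pi)  # angle in [0,1)
            if r <= 1.0:
                fi = min(int(r * (S.shape[0] - 1)), S.shape[0] - 1)
                ti = min(int(a * (S.shape[1] - 1)), S.shape[1] - 1)
                out[yy, xx] = S[fi, ti]
    return out

polar_img = to_polar(S)   # 96 x 96, the low resolution the paper highlights
```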
37 pages, 6677 KiB  
Article
Spatial and Spectral Structure-Aware Mamba Network for Hyperspectral Image Classification
by Jie Zhang, Ming Sun and Sheng Chang
Remote Sens. 2025, 17(14), 2489; https://doi.org/10.3390/rs17142489 - 17 Jul 2025
Abstract
Recently, Mamba, a network based on selective state space models (SSMs), has emerged as a research focus in hyperspectral image (HSI) classification due to its linear computational complexity and strong long-range dependency modeling capability. Yet because Mamba was originally designed for 1D causal sequence modeling, applying it to HSI tasks that require simultaneous awareness of spatial and spectral structures is challenging. Current Mamba-based HSI classification methods typically convert spatial structures into 1D sequences and employ various scanning patterns to capture spatial dependencies. However, these approaches inevitably disrupt spatial structures, leading to ineffective modeling of complex spatial relationships and increased computational costs due to elongated scanning paths. Moreover, the lack of neighborhood spectral information utilization fails to mitigate the impact of spatial variability on classification performance. To address these limitations, we propose a novel model, Dual-Aware Discriminative Fusion Mamba (DADFMamba), which is simultaneously aware of spatial-spectral structures and adaptively integrates discriminative features. Specifically, we design a Spatial-Structure-Aware Fusion Module (SSAFM) to directly establish spatial neighborhood connectivity in the state space, preserving structural integrity. We then introduce a Spectral-Neighbor-Group Fusion Module (SNGFM), which enhances target spectral features by leveraging neighborhood spectral information before partitioning them into multiple spectral groups to explore relations across those groups. Finally, we introduce a Feature Fusion Discriminator (FFD) to discriminate the importance of spatial and spectral features, enabling adaptive feature fusion. Extensive experiments on four benchmark HSI datasets demonstrate that DADFMamba outperforms state-of-the-art deep learning models in classification accuracy while maintaining low computational costs and parameter efficiency. Notably, it achieves superior performance with only 30 training samples per class, highlighting its data efficiency. Our study reveals the great potential of Mamba in HSI classification and provides valuable insights for future research.
(This article belongs to the Section Remote Sensing Image Processing)
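A rough numpy illustration of the spectral-neighbor-group idea: enhance a target pixel's spectrum with its spatial neighborhood, then partition the bands into groups for cross-group modeling. The simple averaging and equal-width grouping here are assumptions, not the paper's exact SNGFM formulation.

```python
import numpy as np

def spectral_neighbor_groups(cube, row, col, n_groups=4, win=3):
    """Blend a target pixel's spectrum with its neighborhood mean, then
    split the bands into groups (illustrative blend weights and grouping)."""
    h = win // 2
    patch = cube[max(0, row - h):row + h + 1, max(0, col - h):col + h + 1]
    enhanced = 0.5 * cube[row, col] + 0.5 * patch.mean(axis=(0, 1))
    return np.array_split(enhanced, n_groups)     # list of band groups

hsi = np.random.rand(64, 64, 200)                 # hypothetical cube (H, W, bands)
groups = spectral_neighbor_groups(hsi, 10, 10)    # 4 groups of ~50 bands each
```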