Search Results (4)

Search Parameters:
Keywords = 3DPCANet

26 pages, 15361 KB  
Article
DPCSANet: Dual-Path Convolutional Self-Attention for Small Ship Detection in Optical Remote Sensing Images
by Jiajie Chen, Xin Tian and Chong Du
Electronics 2025, 14(6), 1225; https://doi.org/10.3390/electronics14061225 - 20 Mar 2025
Cited by 2 | Viewed by 811
Abstract
Detecting small ships in optical remote sensing images is challenging. Due to resolution limitations, the texture and edge information of many ship targets are blurred, making feature extraction difficult and thereby reducing detection accuracy. To address this issue, we propose a novel dual-path convolutional self-attention network, DPCSANet, for ship detection. The model first incorporates a dual-path convolutional self-attention module to enhance its ability to extract local and global features and to strengthen target features. This module integrates two parallel branches that process features extracted by convolution and attention mechanisms, respectively, thereby mitigating potential conflicts between local and global information. Additionally, a high-dimensional hybrid spatial pyramid pooling module is introduced to expand the scale range of feature extraction, enabling the model to fully exploit background contextual features to compensate for weak target feature representations. To further improve detection accuracy for small ships, we developed a focal complete intersection over union loss function. This regression loss guides the model to focus on weak targets during training by increasing the contribution of low-accuracy prediction boxes to the loss. Experimental results demonstrate that the proposed method effectively enhances detection of small ships: on the LEVIR-ship, OSSD, and DOTA-ship datasets, DPCSANet achieves an average precision improvement of 0.9% to 11.4% over the baseline, outperforming other state-of-the-art object detection models.
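The focal IoU-style reweighting this abstract describes (increasing the loss contribution of low-overlap prediction boxes) can be sketched as below. This is a minimal illustration of the idea, not the paper's actual loss; the `(x1, y1, x2, y2)` box format, the `gamma` exponent, and the function names are assumptions:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def focal_iou_loss(pred, target, gamma=0.5):
    """Focal-style IoU loss sketch: low-overlap (hard) predictions are
    up-weighted by the modulating factor (1 - IoU)**gamma."""
    i = iou(pred, target)
    return (1.0 - i) ** gamma * (1.0 - i)
```

With `gamma > 0`, a badly localized box contributes disproportionately more loss than a nearly correct one, which is the training emphasis on weak targets the abstract describes.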

18 pages, 13718 KB  
Article
A Hybrid EEG-Based Stress State Classification Model Using Multi-Domain Transfer Entropy and PCANet
by Yuefang Dong, Lin Xu, Jian Zheng, Dandan Wu, Huanli Li, Yongcong Shao, Guohua Shi and Weiwei Fu
Brain Sci. 2024, 14(6), 595; https://doi.org/10.3390/brainsci14060595 - 12 Jun 2024
Cited by 4 | Viewed by 2095
Abstract
This paper proposes a new hybrid model for classifying stress states from EEG signals, combining multi-domain transfer entropy (TrEn) with a two-dimensional PCANet (2D-PCANet). The aim is an automated system for identifying stress levels, which is crucial for early intervention and mental health management. A major challenge in this field lies in extracting meaningful emotional information from the complex patterns observed in EEG. The model addresses this by first applying independent component analysis (ICA) to purify the EEG signals, enhancing their clarity for further analysis. It then leverages the adaptability of the fractional Fourier transform (FrFT) to represent the EEG data in the time, frequency, and time–frequency domains; this multi-domain representation allows a more nuanced characterization of the brain's activity in response to stress. A two-layer 2D-PCANet network then autonomously distills stress-related EEG features, which a support vector machine (SVM) classifies to determine the stress state. Stress induction and data acquisition experiments were also designed: two distinct tasks known to trigger stress responses were employed, together with additional stress-inducing elements such as time limits and performance feedback, and the EEG data collected from 15 participants were retained. The proposed algorithm achieves an average accuracy of over 92% on this self-collected dataset, enabling stress state detection under different task-induced conditions.
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
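The transfer entropy at the core of this feature pipeline measures how much the past of one signal improves prediction of another beyond that signal's own past. A minimal plug-in estimator for discrete (e.g. binarized) sequences with history length 1 can be sketched as follows; the paper's multi-domain TrEn computation, symbolization scheme, and history lengths are not specified here, so this is an illustration only:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) in bits, history length 1.

    Positive values mean past samples of x help predict the next sample
    of y beyond what y's own past already tells us."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]               # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]   # p(y1 | y0)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

If `y` is a delayed copy of `x`, knowing `x`'s past makes `y`'s next sample fully predictable and the estimate is positive; a constant `x` contributes no predictive information and the estimate is exactly zero.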

18 pages, 3106 KB  
Article
A Video Sequence Face Expression Recognition Method Based on Squeeze-and-Excitation and 3DPCA Network
by Chang Li, Chenglin Wen and Yiting Qiu
Sensors 2023, 23(2), 823; https://doi.org/10.3390/s23020823 - 11 Jan 2023
Cited by 9 | Viewed by 3631
Abstract
Expression recognition is an important direction for computers to understand human emotions and for human–computer interaction. For 3D data such as video sequences, however, traditional convolutional neural networks that stretch the input 3D data into vectors not only cause a dimensional explosion but also fail to retain structural information in 3D space, increasing computational cost and lowering expression recognition accuracy. This paper proposes a video sequence facial expression recognition method based on Squeeze-and-Excitation and a 3DPCA network (SE-3DPCANet). Introducing a 3DPCA algorithm in the convolution layer directly constructs tensor convolution kernels that extract the dynamic expression features of video sequences across the spatial and temporal dimensions, without weighting the convolution kernels of adjacent frames by shared weights. A Squeeze-and-Excitation network is introduced in the feature encoding layer to automatically learn the weights of local channel features in the tensor features, increasing the representation capability of the model and further improving recognition accuracy. The proposed method is validated on three video facial expression datasets; compared with other common expression recognition methods, it achieves higher recognition rates while significantly reducing the time required for training.
(This article belongs to the Section Sensing and Imaging)
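The Squeeze-and-Excitation step used in the feature encoding layer learns a per-channel gate from globally pooled statistics and rescales each channel accordingly. A minimal sketch on a single `(C, H, W)` feature map is below; the weight shapes, reduction ratio, and function name are assumptions, and the real SE block would learn `w1`/`w2` during training rather than take them as arguments:

```python
import numpy as np

def se_reweight(features, w1, w2):
    """Squeeze-and-Excitation channel reweighting sketch.

    features: (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) -- hypothetical learned weights
    of the bottleneck FC layers (reduction ratio r)."""
    squeeze = features.mean(axis=(1, 2))             # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1)
    return features * scale[:, None, None]           # per-channel rescale
```

Because the gate lies in (0, 1), informative channels are passed through nearly unchanged while weak ones are attenuated, which is how the block increases the model's representation capability at little extra cost.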

15 pages, 6751 KB  
Article
Using a Reinforcement Q-Learning-Based Deep Neural Network for Playing Video Games
by Cheng-Jian Lin, Jyun-Yu Jhang, Hsueh-Yi Lin, Chin-Ling Lee and Kuu-Young Young
Electronics 2019, 8(10), 1128; https://doi.org/10.3390/electronics8101128 - 7 Oct 2019
Cited by 20 | Viewed by 5663
Abstract
This study proposed a reinforcement Q-learning-based deep neural network (RQDNN) that combines a deep principal component analysis network (DPCANet) and Q-learning to determine a playing strategy for video games. Video game images were used as the inputs. The proposed DPCANet initializes the parameters of the convolution kernels and captures image features automatically; it performs like a deep neural network while requiring less computation than traditional convolutional neural networks. A reinforcement Q-learning method was then used to implement the game-playing strategy. Both Flappy Bird and Atari Breakout were implemented to verify the proposed method. Experimental results showed that the scores of the proposed RQDNN were better than those of human players and other methods, and its training time was also far shorter than that of other methods.
(This article belongs to the Section Artificial Intelligence)
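The Q-learning half of RQDNN follows the standard temporal-difference update. A tabular sketch of one update step is shown below purely to illustrate the rule; the paper applies Q-learning on top of DPCANet-extracted image features rather than a lookup table, and the dict-of-dicts layout and parameter values here are assumptions:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Q is a dict mapping state -> {action: value}; an unseen or terminal
    next state contributes a bootstrap value of 0."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    td_target = r + gamma * best_next        # bootstrapped return estimate
    Q[s][a] += alpha * (td_target - Q[s][a]) # move toward the TD target
    return Q[s][a]
```

For example, from a state valued 0 with reward 1 and a next state whose best action is worth 1, the entry moves a step of size `alpha` toward the target `1 + 0.99 * 1 = 1.99`.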
