Search Results (11)

Search Parameters:
Keywords = SSnet

21 pages, 6287 KB  
Article
Spatiotemporal Sensitive Network for Non-Contact Heart Rate Prediction from Facial Videos
by Liying Su, Yitao Wang, Dezhao Zhai, Yuping Shi, Yinghao Ding, Guohua Gao, Qinwei Li, Ming Yu and Hang Wu
Appl. Sci. 2024, 14(20), 9551; https://doi.org/10.3390/app14209551 - 19 Oct 2024
Cited by 1 | Viewed by 2166
Abstract
Heart rate (HR) is an important indicator reflecting the overall physical and mental health of the human body, playing a crucial role in diagnosing cardiovascular and neurological diseases. Recent research has revealed that variations in the light absorption of human skin captured through facial video over the cardiac cycle, due to changes in blood volume, can be utilized for non-contact HR estimation. However, most existing methods rely on single-modal video sources (such as RGB or NIR), which often yield suboptimal results due to noise and the limitations of a single information source. To overcome these challenges, this paper proposes a multimodal information fusion architecture named the spatiotemporal sensitive network (SS-Net) for non-contact heart rate estimation. Firstly, spatiotemporal feature maps are used to extract physiological signals from RGB and NIR videos effectively. Next, a spatiotemporal sensitive (SS) module is introduced to extract useful physiological signal information from both the RGB and NIR spatiotemporal maps. Finally, a multi-level spatiotemporal context fusion (MLSC) module is designed to fuse and complement information between the visible-light and infrared modalities, and the fused features at different levels are then refined in task-specific branches to predict both remote photoplethysmography (rPPG) signals and HR. Experiments conducted on three datasets demonstrate that the proposed SS-Net achieves superior performance compared to existing methods. Full article
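A minimal PyTorch sketch of the described two-stream setup follows; the layer widths, the single-level fusion block standing in for the multi-level MLSC module, the 1-channel NIR input, and both head designs are assumptions based only on this abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MLSCFusion(nn.Module):
    """Fuse same-level RGB and NIR features by concatenation + 1x1 conv
    (hypothetical single-level stand-in for the multi-level MLSC module)."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat, nir_feat):
        return self.mix(torch.cat([rgb_feat, nir_feat], dim=1))

class TwoStreamHRNet(nn.Module):
    """RGB + NIR spatiotemporal maps in, rPPG waveform and scalar HR out."""
    def __init__(self, channels=32, signal_len=256):
        super().__init__()
        def stem(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.rgb_stem = stem(3)   # RGB spatiotemporal map: 3 channels
        self.nir_stem = stem(1)   # NIR spatiotemporal map: 1 channel (assumed)
        self.fuse = MLSCFusion(channels)
        self.rppg_head = nn.Sequential(                 # waveform branch
            nn.AdaptiveAvgPool2d((1, signal_len)), nn.Flatten(start_dim=2),
            nn.Conv1d(channels, 1, kernel_size=1))
        self.hr_head = nn.Sequential(                   # scalar-HR branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1))

    def forward(self, rgb_map, nir_map):
        # both maps assumed to share the same (regions x time) grid
        fused = self.fuse(self.rgb_stem(rgb_map), self.nir_stem(nir_map))
        return self.rppg_head(fused).squeeze(1), self.hr_head(fused)
```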

21 pages, 8627 KB  
Article
Enhancing Panax notoginseng Leaf Disease Classification with Inception-SSNet and Image Generation via Improved Diffusion Model
by Ruoxi Wang, Xiaofan Zhang, Qiliang Yang, Lian Lei, Jiaping Liang and Ling Yang
Agronomy 2024, 14(9), 1982; https://doi.org/10.3390/agronomy14091982 - 1 Sep 2024
Cited by 5 | Viewed by 2113
Abstract
The rapid and accurate classification of Panax notoginseng leaf diseases is vital for timely disease control and reducing economic losses. Recently, image classification algorithms have shown great promise for plant disease diagnosis, but dataset quantity and quality are crucial. Moreover, classifying P. notoginseng leaf diseases faces severe challenges, including the small features of anthrax and the strong similarity between round spot and melasma diseases. To address these problems, we propose an ECA-based diffusion model and Inception-SSNet for the classification of the six major P. notoginseng leaf diseases, namely gray mold, powdery mildew, virus infection, anthrax, melasma, and round spot. Specifically, we propose an image generation scheme in which a lightweight attention mechanism, ECA, captures the dependencies between channels to improve dataset quantity and quality. To extract disease features more accurately, we developed an Inception-SSNet hybrid model with skip connections, attention feature fusion, and self-calibrated convolutions. These methods enable the model to make better use of local and global information, especially when dealing with diseases that have similar features and small targets. The experimental results show that our ECA-based diffusion model achieves an FID of 42.73, a 74.71% improvement over the baseline model. Further, we tested classification models on the generated P. notoginseng leaf disease dataset, and the accuracy of 11 mainstream classification models improved. Our proposed Inception-SSNet classification model achieves an accuracy of 97.04% on the non-generated dataset, an improvement of 0.11% over the baseline model. On the generated dataset, the accuracy reaches 99.44%, an improvement of 1.02% over the baseline model. This study provides an effective solution for the monitoring of Panax notoginseng diseases. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
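ECA (efficient channel attention) is a standard published block; a compact PyTorch rendering is given below, while how it is wired into the improved diffusion model here is an assumption.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a per-channel gate computed by a 1-D conv
    over globally pooled channel statistics (no dimensionality reduction)."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]

# Hypothetical usage inside a denoising block of the diffusion model:
# features = ECA(kernel_size=3)(features)
```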

17 pages, 8308 KB  
Article
Spatio-Temporal Self-Attention Network for Origin–Destination Matrix Prediction in Urban Rail Transit
by Wenzhong Zhou, Tao Tang and Chunhai Gao
Sustainability 2024, 16(6), 2555; https://doi.org/10.3390/su16062555 - 20 Mar 2024
Cited by 1 | Viewed by 1537
Abstract
Short-term origin–destination (OD) prediction in urban rail transit (URT) is vital for improving URT operation. However, due to problems such as the unavailability of the current day's OD matrix, high dimensionality, and long-range spatio-temporal dependencies, it is difficult to further improve the prediction accuracy of an OD matrix. In this paper, a novel spatio-temporal self-attention network (SSNet) for OD matrix prediction in URT is proposed to further improve the prediction accuracy. In the proposed SSNet, a lightweight yet effective spatio-temporal self-attention module (STSM) is proposed to capture complex long-range spatio-temporal dependencies, thus helping improve the prediction accuracy of the proposed SSNet. Additionally, the proposed SSNet uses the finished OD matrices of previous days as its only data source, without the current day's passenger flow data, which makes it possible to predict the OD matrices of all time intervals on the current day before operation begins. Experiments demonstrate that the proposed SSNet outperforms three advanced deep learning methods for short-term OD prediction in URT and that the proposed STSM plays an important role in improving the prediction accuracy. Full article
(This article belongs to the Section Sustainable Transportation)
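A rough sketch of the stated input/output contract: finished OD matrices from previous days in, the current day's OD matrices out, with self-attention over the flattened spatio-temporal token sequence. The embedding size, head count, and day-averaging step are illustrative assumptions, not the published STSM.

```python
import torch
import torch.nn as nn

class ODSelfAttention(nn.Module):
    """Predict today's OD matrices (stations x stations, per time interval)
    from the finished OD matrices of the previous `n_days` days."""
    def __init__(self, n_stations, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_stations * n_stations, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, n_stations * n_stations)

    def forward(self, history):                    # (B, n_days, T, S, S)
        b, d, t, s, _ = history.shape
        x = self.embed(history.reshape(b, d * t, s * s))  # one token per (day, interval)
        x, _ = self.attn(x, x, x)                         # long-range dependencies
        x = x.reshape(b, d, t, -1).mean(dim=1)            # aggregate across days
        return self.out(x).reshape(b, t, s, s)            # today's OD matrices
```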

18 pages, 561 KB  
Article
Construction and Analysis for the Optimal Supersaturated Designs with a Large Number of Inert Factors
by Xiaoqing Hong, Weiping Zhou, Xiao Wang and Min Li
Axioms 2024, 13(3), 179; https://doi.org/10.3390/axioms13030179 - 8 Mar 2024
Cited by 1 | Viewed by 1895
Abstract
Supersaturated designs (SSDs) refer to designs in which the run size is much smaller than the number of main effects to be estimated. They are commonly used to identify a few, but critical, active factors from a large set of potentially active ones while keeping the cost as low as possible. In this regard, the development of new construction and analysis methods has recently seen a rapid increase. In this paper, we provide methods to construct equi- and mixed-level E(f_NOD)-optimal SSDs with a large number of inert factors using the substitution method. The proposed methods are easy to implement, and many new SSDs can be constructed from them. We also study a variable selection method for regression problems based on the screening-selection network (SSnet). A real example is analyzed to illustrate that it is able to effectively identify active factors. Eight different analysis methods are used to analyze the data generated from the proposed designs. Three scenarios with different parameter setups are designed, and the performance of each method is illustrated by extensive simulation studies. Among all these methods, SSnet produces the most satisfactory results in terms of power. Full article
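For orientation, the E(f_NOD) criterion is commonly defined as the average, over all column pairs, of the squared deviations of level-combination counts from their balanced value n/(q_i q_j); the NumPy sketch below follows that textbook definition and is an illustration, not the authors' code.

```python
import numpy as np
from itertools import combinations

def E_fNOD(design: np.ndarray) -> float:
    """E(f_NOD) of an n x m design matrix whose j-th column takes q_j levels.
    For each column pair (i, j), f_NOD sums (n_ab - n / (q_i * q_j))**2 over
    level combinations (a, b), where n_ab counts rows taking levels (a, b);
    E(f_NOD) averages f_NOD over all column pairs."""
    n, m = design.shape
    total = 0.0
    for i, j in combinations(range(m), 2):
        levels_i, levels_j = np.unique(design[:, i]), np.unique(design[:, j])
        expected = n / (len(levels_i) * len(levels_j))
        for a in levels_i:
            for b in levels_j:
                n_ab = np.sum((design[:, i] == a) & (design[:, j] == b))
                total += (n_ab - expected) ** 2
    return total / (m * (m - 1) / 2)
```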

19 pages, 1748 KB  
Article
RiSSNet: Contrastive Learning Network with a Relaxed Identity Sampling Strategy for Remote Sensing Image Semantic Segmentation
by Haifeng Li, Wenxuan Jing, Guo Wei, Kai Wu, Mingming Su, Lu Liu, Hao Wu, Penglong Li and Ji Qi
Remote Sens. 2023, 15(13), 3427; https://doi.org/10.3390/rs15133427 - 6 Jul 2023
Cited by 1 | Viewed by 2413
Abstract
Contrastive learning techniques make it possible to pretrain a general model in a self-supervised paradigm using a large number of unlabeled remote sensing images. The core idea is to pull positive samples, defined by data augmentation, closer together while pushing apart randomly sampled negative samples to serve as supervised learning signals. This strategy is based on the strict identity hypothesis, i.e., positive samples are strictly defined by each (anchor) sample's own augmentation transformations. However, this leads to over-instancing of the features learned by the model and the loss of the ability to fully identify ground objects. Therefore, we propose a relaxed identity hypothesis governing the feature distribution of different instances within the same class of features. Implementing the relaxed identity hypothesis requires the sampling and discrimination of relaxed identical samples. To realize the sampling of relaxed identical samples under the unsupervised learning paradigm, we exploit the fact that nearby objects in remote sensing images are often strongly correlated: neighborhood sampling is carried out around the anchor sample, and the similarity between the sampled samples and the anchor sample is defined as the semantic similarity. To achieve sample discrimination under the relaxed identity hypothesis, the feature loss is calculated and reordered for the samples in the relaxed identical sample queue and the anchor samples, and the feature loss between the anchor samples and the sample queue is defined as the feature similarity. Through the sampling and discrimination of relaxed identical samples, a leap from instance-level features to class-level features is achieved to a certain extent while enhancing the network's invariant learning of features. We validate the effectiveness of the proposed method on three datasets; compared to six self-supervised methods, our method achieves the best experimental results on all three. Full article
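A toy sketch of the neighborhood-sampling idea: relaxed positives are patches cropped near the anchor rather than only the anchor's own augmentations. Patch size, offsets, and the number of samples are illustrative assumptions.

```python
import random
import torch

def sample_relaxed_positives(image, anchor_xy, patch=64, max_offset=32, k=4):
    """Crop k patches around the anchor location in a remote sensing image
    (C, H, W); nearby ground objects are usually correlated, so these crops
    serve as relaxed identical samples for the anchor patch."""
    _, h, w = image.shape
    ax, ay = anchor_xy
    crops = []
    for _ in range(k):
        dx = random.randint(-max_offset, max_offset)
        dy = random.randint(-max_offset, max_offset)
        x = min(max(ax + dx, 0), w - patch)        # clamp to image bounds
        y = min(max(ay + dy, 0), h - patch)
        crops.append(image[:, y:y + patch, x:x + patch])
    return torch.stack(crops)                      # (k, C, patch, patch)
```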

19 pages, 5290 KB  
Article
Attention-Guided Multispectral and Panchromatic Image Classification
by Cheng Shi, Yenan Dang, Li Fang, Zhiyong Lv and Huifang Shen
Remote Sens. 2021, 13(23), 4823; https://doi.org/10.3390/rs13234823 - 27 Nov 2021
Cited by 4 | Viewed by 3141
Abstract
Multi-sensor images can provide complementary information, usually leading to better performance in classification tasks. However, general deep neural network-based multi-sensor classification methods learn each sensor image separately and then fuse features by stacked concatenation. This approach incurs a large training-time cost and may result in insufficient feature fusion. Considering efficient multi-sensor feature extraction and fusion with a lightweight network, this paper proposes an attention-guided classification method (AGCNet), especially for multispectral (MS) and panchromatic (PAN) image classification. In the proposed method, a share-split network (SSNet), comprising a shared branch and multiple split branches, performs feature extraction for each sensor image: the shared branch learns basis features of the MS and PAN images with fewer learnable parameters, and the split branches extract the privileged features of each sensor image via multiple task-specific attention units. Furthermore, a selective classification network (SCNet) with a selective kernel unit is used for adaptive feature fusion. The proposed AGCNet can be trained in an end-to-end fashion without manual intervention. Experimental results are reported on four MS and PAN datasets and compared with state-of-the-art methods. The classification maps and accuracies show the superiority of the proposed AGCNet model. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)
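A schematic PyTorch sketch of the share-split layout as described: per-sensor adapters feed one weight-shared stem, and small per-sensor split branches add sensor-specific attention before fusion. Channel counts, the attention unit, and the classifier are assumptions; the SCNet selective-kernel fusion is not shown.

```python
import torch
import torch.nn as nn

class ShareSplitNet(nn.Module):
    """Shared stem + per-sensor split branches (hypothetical layout)."""
    def __init__(self, ms_bands=4, pan_bands=1, channels=64, n_classes=10):
        super().__init__()
        # per-sensor 1x1 adapters so one shared stem serves both inputs
        self.ms_in = nn.Conv2d(ms_bands, channels, 1)
        self.pan_in = nn.Conv2d(pan_bands, channels, 1)
        self.shared = nn.Sequential(                 # basis features, shared weights
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        def split_branch():                          # sensor-specific attention unit
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.ms_split, self.pan_split = split_branch(), split_branch()
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * channels, n_classes))

    def forward(self, ms, pan):
        # ms and pan assumed resampled to the same spatial size beforehand
        ms_f, pan_f = self.shared(self.ms_in(ms)), self.shared(self.pan_in(pan))
        ms_f = ms_f * self.ms_split(ms_f)            # privileged MS features
        pan_f = pan_f * self.pan_split(pan_f)        # privileged PAN features
        return self.classifier(torch.cat([ms_f, pan_f], dim=1))
```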

20 pages, 3744 KB  
Article
Scale-Sensitive Feature Reassembly Network for Pedestrian Detection
by Xiaoting Yang and Qiong Liu
Sensors 2021, 21(12), 4189; https://doi.org/10.3390/s21124189 - 18 Jun 2021
Cited by 11 | Viewed by 2434
Abstract
Serious scale variation is a key challenge in pedestrian detection. Most works typically employ a feature pyramid network to detect objects at diverse scales. Such a method suffers from information loss during channel unification. Inadequate sampling of the backbone network also affects the power of pyramidal features. Moreover, an arbitrary RoI (region of interest) allocation scheme of these detectors incurs coarse RoI representation, which becomes worse under the dilemma of small pedestrian relative scale (PRS). In this paper, we propose a novel scale-sensitive feature reassembly network (SSNet) for pedestrian detection in road scenes. Specifically, a multi-parallel branch sampling module is devised with flexible receptive fields and an adjustable anchor stride to improve the sensitivity to pedestrians imaged at multiple scales. Meanwhile, a context enhancement fusion module is also proposed to alleviate information loss by injecting various spatial context information into the original features. For more accurate prediction, an adaptive reassembly strategy is designed to obtain recognizable RoI features in the proposal refinement stage. Extensive experiments are conducted on CityPersons and Caltech datasets to demonstrate the effectiveness of our method. The detection results show that our SSNet surpasses the baseline method significantly by integrating lightweight modules and achieves competitive performance with other methods without bells and whistles. Full article
(This article belongs to the Section Intelligent Sensors)
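The flexible-receptive-field idea behind the multi-parallel branch sampling module can be pictured as parallel dilated convolutions whose outputs are merged; the dilation rates and the 1x1 merge below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MultiBranchSampling(nn.Module):
    """Parallel branches with different dilation rates give flexible
    receptive fields for pedestrians imaged at different scales."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations])
        self.merge = nn.Conv2d(len(dilations) * channels, channels, 1)

    def forward(self, x):
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))
```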

17 pages, 12389 KB  
Article
Predicting Potential SARS-COV-2 Drugs—In Depth Drug Database Screening Using Deep Neural Network Framework SSnet, Classical Virtual Screening and Docking
by Nischal Karki, Niraj Verma, Francesco Trozzi, Peng Tao, Elfi Kraka and Brian Zoltowski
Int. J. Mol. Sci. 2021, 22(4), 1573; https://doi.org/10.3390/ijms22041573 - 4 Feb 2021
Cited by 38 | Viewed by 5701
Abstract
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has altered life on a global scale. A concerted effort from research labs around the world resulted in the identification of potential pharmaceutical treatments for COVID-19 using existing drugs, as well as the discovery of multiple vaccines. During an urgent crisis, rapidly identifying potential new treatments requires global and cross-discipline cooperation, together with an enhanced open-access research model to distribute new ideas and leads. Herein, we introduce an application of a deep neural network-based drug screening method, validating it with a docking algorithm on approved drugs for drug repurposing, and extending the screen to a large library of 750,000 compounds for de novo drug discovery. The results of the large library screens are incorporated into an open-access web interface to allow researchers from diverse fields to target molecules of interest. Our combined approach allows both the identification of existing drugs that may be repurposed and the de novo design of ACE2-regulatory compounds. Through these efforts, we demonstrate the utility of a new machine learning algorithm for drug discovery, SSnet, that can function as a tool to triage large molecular libraries and identify classes of molecules with possible efficacy. Full article

27 pages, 3802 KB  
Article
SSnet: A Deep Learning Approach for Protein-Ligand Interaction Prediction
by Niraj Verma, Xingming Qu, Francesco Trozzi, Mohamed Elsaied, Nischal Karki, Yunwen Tao, Brian Zoltowski, Eric C. Larson and Elfi Kraka
Int. J. Mol. Sci. 2021, 22(3), 1392; https://doi.org/10.3390/ijms22031392 - 30 Jan 2021
Cited by 33 | Viewed by 9456
Abstract
Computational prediction of Protein-Ligand Interaction (PLI) is an important step in the modern drug discovery pipeline, as it mitigates the cost, time, and resources required to screen novel therapeutics. Deep Neural Networks (DNN) have recently shown excellent performance in PLI prediction. However, the performance is highly dependent on the protein and ligand features utilized by the DNN model. Moreover, in current models, deciphering how protein features determine the underlying principles that govern PLI is not trivial. In this work, we developed a DNN framework named SSnet that utilizes secondary structure information of proteins, extracted as the curvature and torsion of the protein backbone, to predict PLI. We demonstrate the performance of SSnet by comparing it against a variety of currently popular Machine Learning (ML) and non-ML models using various metrics. We visualize the intermediate layers of SSnet to show a potential latent space for proteins, in particular to extract structural elements in a protein that the model finds influential for ligand binding, which is one of the key features of SSnet. We observed in our study that SSnet learns information about locations in a protein where a ligand can bind, including binding sites, allosteric sites, and cryptic sites, regardless of the conformation used. We further observed that SSnet is not biased toward any specific molecular interaction and extracts the protein fold information critical for PLI prediction. Our work forms an important gateway to the general exploration of secondary structure-based Deep Learning (DL), which is not confined to protein-ligand interactions, and as such will have a large impact on protein research, while being readily accessible to de novo drug designers as a standalone package. Full article
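The curvature/torsion featurization can be illustrated with a standard discrete approximation over consecutive C-alpha coordinates; the sketch below is one common discretization and not necessarily the exact formulation used in SSnet.

```python
import numpy as np

def backbone_curvature_torsion(ca: np.ndarray):
    """Discrete curvature (bend angle) and torsion (dihedral angle) along a
    chain of C-alpha coordinates `ca` with shape (N, 3)."""
    b = np.diff(ca, axis=0)                             # bond vectors, (N-1, 3)
    b_hat = b / np.linalg.norm(b, axis=1, keepdims=True)
    # curvature: angle between successive bond vectors
    cosang = np.clip(np.einsum("ij,ij->i", b_hat[:-1], b_hat[1:]), -1.0, 1.0)
    curvature = np.arccos(cosang)                       # (N-2,)
    # torsion: signed dihedral angle from three successive bond vectors
    n1 = np.cross(b[:-2], b[1:-1])
    n2 = np.cross(b[1:-1], b[2:])
    m1 = np.cross(n1, b[1:-1] / np.linalg.norm(b[1:-1], axis=1, keepdims=True))
    x = np.einsum("ij,ij->i", n1, n2)
    y = np.einsum("ij,ij->i", m1, n2)
    torsion = np.arctan2(y, x)                          # (N-3,)
    return curvature, torsion
```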

20 pages, 11041 KB  
Article
Pyramid Pooling Module-Based Semi-Siamese Network: A Benchmark Model for Assessing Building Damage from xBD Satellite Imagery Datasets
by Yanbing Bai, Junjie Hu, Jinhua Su, Xing Liu, Haoyu Liu, Xianwen He, Shengwang Meng, Erick Mas and Shunichi Koshimura
Remote Sens. 2020, 12(24), 4055; https://doi.org/10.3390/rs12244055 - 11 Dec 2020
Cited by 50 | Viewed by 6918
Abstract
Most mainstream research on assessing building damage using satellite imagery is based on scattered datasets and lacks unified standards and methods to quantify and compare the performance of different models. To mitigate these problems, the present study develops a novel end-to-end benchmark model, termed the pyramid pooling module semi-Siamese network (PPM-SSNet), based on the large-scale xBD satellite imagery dataset. The high precision of the proposed model is achieved by adding residual blocks with dilated convolution and squeeze-and-excitation blocks into the network. Simultaneously, a highly automated pipeline from satellite imagery input to damage classification output is achieved by employing concurrent learned attention mechanisms in a semi-Siamese network for end-to-end input and output. Our proposed method achieves F1 scores of 0.90, 0.41, 0.65, and 0.70 for the undamaged, minor-damaged, major-damaged, and destroyed building classes, respectively. From the perspective of end-to-end methods, the ablation experiments and comparative analysis confirm the effectiveness and originality of the PPM-SSNet method. Finally, the consistent prediction results of our model on data from the 2011 Tohoku Earthquake verify its high performance with respect to the domain shift problem, which implies that it is effective for evaluating future disasters. Full article
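The pyramid pooling module (originating in PSPNet) is itself standard; a minimal PyTorch version is shown below, while its placement inside the semi-Siamese damage-assessment network is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool the feature map to several grid sizes, project each with a 1x1
    conv, upsample back, and concatenate with the input (PSPNet-style PPM)."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, out_ch, 1))
             for b in bins])

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                     align_corners=False) for stage in self.stages]
        return torch.cat(feats, dim=1)       # in_ch + len(bins) * out_ch channels
```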

22 pages, 4106 KB  
Article
SSNet: Learning Mid-Level Image Representation Using Salient Superpixel Network
by Zhihang Ji, Fan Wang, Xiang Gao, Lijuan Xu and Xiaopeng Hu
Appl. Sci. 2020, 10(1), 140; https://doi.org/10.3390/app10010140 - 23 Dec 2019
Cited by 1 | Viewed by 3246
Abstract
In the standard bag-of-visual-words (BoVW) model, the burstiness problem of features and the neglect of high-order information often weaken the discriminative power of image representation. To tackle these issues, we present a novel framework, named the Salient Superpixel Network, to learn mid-level image representations. To reduce the impact of burstiness occurring in the background region, we use salient regions instead of the whole image to extract local features, and a fast saliency detection algorithm based on the Gestalt grouping principle is proposed to generate image saliency maps. In order to introduce high-order information, we propose a weighted second-order pooling (WSOP) method, which is capable of exploiting high-order information and further alleviating the impact of burstiness in the foreground region. We then conduct experiments on six image classification benchmark datasets, and the results demonstrate the effectiveness of the proposed framework with either handcrafted or off-the-shelf CNN features. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
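Weighted second-order pooling can be read as a saliency-weighted aggregation of outer products of local descriptors; the sketch below follows that reading, with the signed square root and L2 normalization as assumptions.

```python
import numpy as np

def weighted_second_order_pooling(descriptors: np.ndarray, weights: np.ndarray):
    """Aggregate local descriptors (n, d) into one vector using saliency
    weights (n,): a weighted sum of outer products x_i x_i^T, normalized,
    then flattened (upper triangle) into an image representation."""
    w = weights / (weights.sum() + 1e-12)
    M = np.einsum("i,ij,ik->jk", w, descriptors, descriptors)    # (d, d)
    M = np.sign(M) * np.sqrt(np.abs(M))                          # signed sqrt (assumed)
    iu = np.triu_indices(M.shape[0])
    v = M[iu]
    return v / (np.linalg.norm(v) + 1e-12)                       # L2 normalize
```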