Search Results (20)

Search Parameters:
Keywords = two-layer causal network

17 pages, 1888 KB  
Article
Wind Power Prediction for Extreme Meteorological Conditions Based on SSA-TCN-GCNN and Inverse Adaptive Transfer Learning
by Jiale Liu, Weisi Deng, Weidong Gao, Haohuai Wang, Chonghao Li and Yan Chen
Processes 2026, 14(2), 353; https://doi.org/10.3390/pr14020353 - 19 Jan 2026
Viewed by 256
Abstract
Extreme weather conditions, specifically typhoons and strong gusts, create a highly transient environment for wind power data collection, leading to performance degradation that significantly impacts the safety and stability of the wind power system. To accurately predict wind power trends under these conditions, this paper proposes a prediction model integrating Singular Spectrum Analysis (SSA), Temporal Convolutional Network (TCN), Convolutional Neural Network (CNN), and a global average pooling layer, combined with inverse adaptive transfer learning. First, SSA is applied to reduce noise in the collected wind power operation data and extract key information. Subsequently, a prediction model is constructed based on TCN, CNN, and global average pooling. The model employs dilated causal convolutions to capture long-term dependencies and uses two-dimensional convolution kernels to extract local mutation features. Furthermore, a domain-adaptive transfer learning module is designed to adjust the model’s parameter weights via backward optimization based on the Maximum Mean Discrepancy (MMD) between the source and target domains. Experimental validation is conducted using real-world wind power operation data from a wind farm in Guangxi, containing 3000 samples sampled at 10 min intervals specifically during severe typhoon periods. Experimental results demonstrate that even with only 60% of the target data, the proposed method outperforms the traditional TCN neural network, reducing the Root Mean Square Error (RMSE) by 58.1% and improving the Coefficient of Determination (R2) by 32.7%, thereby verifying its effectiveness in data-scarce extreme scenarios. Full article
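The transfer module described above adjusts parameter weights using the Maximum Mean Discrepancy between source and target domains. As a minimal sketch of that discrepancy measure only (not the paper's code; `rbf_mmd2`, the kernel bandwidth, and the Gaussian toy domains are illustrative assumptions):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate between samples X and Y using an
    RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances via the expansion
        # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 3))       # "source domain" features
tgt_near = rng.normal(0.0, 1.0, size=(200, 3))  # similar target domain
tgt_far = rng.normal(3.0, 1.0, size=(200, 3))   # shifted target domain

# Similar distributions give a small MMD; shifted ones a large MMD.
print(rbf_mmd2(src, tgt_near) < rbf_mmd2(src, tgt_far))  # True
```

In a transfer-learning loss, this scalar would be added to the prediction loss so that backward optimization pulls the two domains' feature distributions together.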
(This article belongs to the Special Issue Adaptive Control and Optimization in Power Grids)

20 pages, 3802 KB  
Review
Omics Evidence Chains for Complex Traits in Beef Cattle: From Cross-Layer Colocalization to Genetic Evaluation and Application
by Ying Lu, Dongfang Li, Ruoshan Ma, Yuyang Gao, Zhendong Gao, Yuwei Qian, Dongmei Xi, Weidong Deng and Jiao Wu
Biology 2025, 14(12), 1725; https://doi.org/10.3390/biology14121725 - 1 Dec 2025
Viewed by 810
Abstract
Multi-omics studies have multiplied associations, but many still lack causal resolution and a clear path to application. We present a practical roadmap built on four sequential steps: first, identify signals from genome-wide association studies; second, confirm these signals through regulatory colocalization and transcriptome-wide association analyses; third, integrate the evidence using network analyses and causal inference; and, fourth, test shortlisted candidates through functional and phenotypic validation. The roadmap is supported by three safeguards that make results reliable and reusable: containerized workflows that ensure end-to-end reproducibility, harmonization across batches with concise minimum-information records, and consistent identifier mapping with quality control across data layers. Across four classes of traits—growth and development, carcass and meat quality, reproduction, and environmental adaptation and resilience—we prioritize signals that remain robust across ancestries and environments, highlight modules with explicit regulatory support, and advance candidates that have already progressed to functional testing. Two application tracks follow from this process: integrating stable candidates into selection indices with context-dependent weighting, and recording and targeting mechanistic nodes for nutritional and management interventions. Taken together, this roadmap improves causal interpretability, strengthens cross-population robustness, and shortens the path from statistical association to genetic evaluation and industry uptake. Full article

25 pages, 3379 KB  
Article
LPGGNet: Learning from Local–Partition–Global Graph Representations for Motor Imagery EEG Recognition
by Nanqing Zhang, Hongcai Jian, Xingchen Li, Guoqian Jiang and Xianlun Tang
Brain Sci. 2025, 15(12), 1257; https://doi.org/10.3390/brainsci15121257 - 23 Nov 2025
Viewed by 638
Abstract
Objectives: Existing motor imagery electroencephalography (MI-EEG) decoding approaches are constrained by their reliance on sole representations of brain connectivity graphs, insufficient utilization of multi-scale information, and lack of adaptability. Methods: To address these constraints, we propose a novel Local–Partition–Global Graph learning Network (LPGGNet). The Local Learning module first constructs functional adjacency matrices using partial directed coherence (PDC), effectively capturing causal dynamic interactions among electrodes. It then employs two layers of temporal convolutions to capture high-level temporal features, followed by Graph Convolutional Networks (GCNs) to capture local topological features. In the Partition Learning module, EEG electrodes are divided into four partitions through a task-driven strategy. For each partition, a novel Gaussian median distance is used to construct adjacency matrices, and Gaussian graph filtering is applied to enhance feature consistency within each partition. After merging the local and partitioned features, the model proceeds to the Global Learning module. In this module, a global adjacency matrix is dynamically computed based on cosine similarity, and residual graph convolutions are then applied to extract highly task-relevant global representations. Finally, two fully connected layers perform the classification. Results: Experiments were conducted on both the BCI Competition IV-2a dataset and a laboratory-recorded dataset, achieving classification accuracies of 82.9% and 87.5%, respectively, which surpass several state-of-the-art models. The contribution of each module was further validated through ablation studies. Conclusions: This study demonstrates the superiority of integrating multi-view brain connectivities with dynamically constructed graph structures for MI-EEG decoding. Moreover, the proposed model offers a novel and efficient solution for EEG signal decoding. Full article
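The Global Learning module above computes its adjacency matrix dynamically from cosine similarity. A minimal sketch of that one construction, assuming nothing beyond the abstract (the function name, threshold value, and random electrode features are illustrative):

```python
import numpy as np

def cosine_adjacency(X, threshold=0.0):
    """Dynamic adjacency from node features X of shape (n_nodes, n_features):
    A[i, j] = cosine similarity between node i and node j, thresholded."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)  # unit-normalize each node's features
    A = Xn @ Xn.T                         # pairwise cosine similarities
    A[A < threshold] = 0.0                # keep only sufficiently strong links
    np.fill_diagonal(A, 1.0)              # self-loops for graph convolution
    return A

rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 8))           # 5 electrodes, 8-dim features
A = cosine_adjacency(feats, threshold=0.2)
print(A.shape)  # (5, 5)
```

The resulting symmetric matrix would feed the residual graph convolutions; the PDC and Gaussian-median-distance adjacencies of the other modules are built analogously but with different similarity measures.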

26 pages, 3666 KB  
Article
Distribution Network Fault Segment Localization Method Based on Transfer Entropy MTF and Improved AlexNet
by Sizu Hou and Xiaoyan Wang
Energies 2025, 18(17), 4627; https://doi.org/10.3390/en18174627 - 30 Aug 2025
Viewed by 847
Abstract
In order to improve the localization accuracy and model interpretability of single-phase ground fault sections in distribution networks, a knowledge-integrated and data-driven fault localization model is proposed. The model transforms the transient zero-sequence currents into Markov Transition Field (MTF) images based on transfer entropy, improving the two-channel feature representation with both causal and temporal structure. On this basis, a knowledge guidance mechanism grounded in the physical mechanism is introduced: a feature attention module focuses on the waveform backpropagation characteristics of the nodes upstream and downstream of the fault, and a similarity weighting strategy integrating the Hausdorff distance is built into the fully connected layer to enhance the model's ability to discriminate between key segments. The dataset is constructed in an improved IEEE 14-node simulation system, and the effectiveness of the proposed method is verified by t-SNE feature visualization, comparison experiments with different parameters, misclassification correction analysis, and anti-noise performance evaluation. On misclassified sample datasets, the method achieves an accuracy rate of 99.53%, indicating that it outperforms traditional convolutional neural network models in fault section localization accuracy, generalization capability, and noise robustness. The research shows that the deep integration of knowledge and data can significantly enhance the model's discriminative ability and engineering practicality, providing new insights for the construction of explainable intelligent power systems. Full article
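The Markov Transition Field encoding used above has a compact standard form: quantile-bin the series, estimate the first-order transition matrix, and index it by the bin of every sample pair. A sketch under those assumptions (the paper additionally weights transitions via transfer entropy, which is not reproduced here):

```python
import numpy as np

def markov_transition_field(x, n_bins=8):
    """Markov Transition Field image of a 1-D series x. Values are
    quantile-binned; W is the row-normalized first-order transition
    matrix; MTF[i, j] = W[bin(x_i), bin(x_j)]."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                  # bin index for every sample
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):            # count observed transitions
        W[a, b] += 1
    W /= np.clip(W.sum(axis=1, keepdims=True), 1, None)  # row-normalize
    return W[np.ix_(q, q)]                     # (len(x), len(x)) image

t = np.linspace(0, 4 * np.pi, 100)
mtf = markov_transition_field(np.sin(t), n_bins=8)
print(mtf.shape)  # (100, 100)
```

Each entry of the image is a transition probability in [0, 1], so the result can be fed directly to an image classifier such as the improved AlexNet.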

15 pages, 909 KB  
Article
AIPI: Network Status Identification on Multi-Protocol Wireless Sensor Networks
by Peng Jiang, Xinglin Feng, Renhai Feng and Junpeng Cui
Sensors 2025, 25(5), 1347; https://doi.org/10.3390/s25051347 - 22 Feb 2025
Cited by 1 | Viewed by 816
Abstract
Topology control is important for extending network lifetime and reducing interference, and the accuracy of topology identification plays a crucial role in topology control. Traditional passive interception can only identify connectivity in cooperative sensor networks with known protocols. This paper proposes a novel method called Active Interference and Passive Interception (AIPI) to identify the topology of non-cooperative sensor networks by combining both forms of interception. Active interception uses full-duplex sensors to disrupt communication until the nodes frequency-hop, acquiring distance information from which to infer their connectivity and, after error correction, calculate node locations in the non-cooperative sensor network. Passive interception uses Granger causality to infer the connectivity between two communicating nodes after obtaining the time-frame structure of the physical layer, and it is applied to conserve power once the physical-layer information has been obtained via active interception. Simulation results indicate that AIPI can identify the topology of non-cooperative sensor networks with higher accuracy than traditional methods. Full article
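The Granger-causality step above infers a link when one node's traffic history helps predict another's. A deliberately crude sketch of that idea (a variance-reduction score rather than the proper F-test; all names and the toy signals are illustrative, not from the paper):

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Reduction in residual variance when lagged x is added to an
    autoregressive model of y. A real Granger test would compare an
    F-statistic against a significance level instead."""
    n = len(y)
    Y = y[lag:]
    # Restricted design: lags of y only; unrestricted adds lags of x.
    ylags = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    xlags = np.column_stack([x[lag - k - 1 : n - k - 1] for k in range(lag)])
    def rss(design):
        design = np.column_stack([np.ones(len(Y)), design])
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        r = Y - design @ beta
        return float(r @ r)
    return 1.0 - rss(np.column_stack([ylags, xlags])) / rss(ylags)

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.roll(x, 1) * 0.9 + rng.normal(scale=0.1, size=500)  # y driven by lagged x
noise = rng.normal(size=500)
print(granger_gain(x, y) > granger_gain(noise, y))  # True
```

A high score between two observed traffic series would be taken as evidence of a communication link between the corresponding nodes.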
(This article belongs to the Special Issue Security Issues and Solutions in Sensing Systems and Networks)

35 pages, 9453 KB  
Article
A Two-Layer Causal Knowledge Network Construction Method Based on Quality Problem-Solving Data
by Yubin Wang, Shirong Qiang, Xin Yue, Tao Li and Keyong Zhang
Systems 2025, 13(3), 142; https://doi.org/10.3390/systems13030142 - 20 Feb 2025
Cited by 2 | Viewed by 1316
Abstract
“Cause analysis” constitutes an indispensable component of quality management systems, serving to systematically identify the causes of quality defects and thereby enabling the development of targeted improvement strategies that address both surface-level manifestations and fundamental drivers. However, relying solely on personal experience makes it challenging to conduct a comprehensive and in-depth analysis of quality problems. The reason is that, when analyzing the causes of quality problems, it is essential not only to consider the specific context in which a problem occurs, so that “specific problems” can be “specifically analyzed” and temporary containment measures formulated, but also to strip away that context, allowing a general and in-depth analysis of the “class problem” and the causal linkages underlying it, so that the root cause can be determined and a corresponding long-term program formulated. The analysis of the causes of quality problems thus exhibits a “duality” characteristic. On this basis, this study proposes and constructs a two-layer causal knowledge network that leverages the causal knowledge generated and applied during quality problem solving to address this “duality”. The proposed network can assist front-line employees in analyzing product quality problems from diverse perspectives and overcomes the difficulty of relying solely on personal experience to comprehensively and profoundly analyze the causal relationships of quality problems. Our method not only helps to enhance the efficiency of quality problem solving but also contributes to the advancement of theories and methods in quality management and knowledge management. Full article
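The two-layer idea in this abstract separates context-free "class" causal knowledge from contextualized problem instances. The paper's actual schema is not shown in this listing, so the dictionaries below are purely illustrative toy data in that spirit:

```python
# Hypothetical two-layer structure: a context-free "class" layer of
# cause -> effect links, plus a "case" layer tying each specific
# problem to its class problem and its context.
class_layer = {                       # class-level causal edges
    "loose fixture": ["dimensional deviation"],
    "tool wear": ["surface scratch", "dimensional deviation"],
}
case_layer = [                        # contextualized problem instances
    {"problem": "shaft diameter out of tolerance on line 3",
     "class_problem": "dimensional deviation",
     "context": {"line": 3, "shift": "night"}},
]

def candidate_root_causes(case):
    """Strip the context and look up class-level causes of the case."""
    return [cause for cause, effects in class_layer.items()
            if case["class_problem"] in effects]

print(candidate_root_causes(case_layer[0]))  # ['loose fixture', 'tool wear']
```

The case layer supports "specific analysis" (containment measures for line 3), while the class layer supports context-free root-cause search, mirroring the duality the abstract describes.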
(This article belongs to the Special Issue Data-Driven Methods in Business Process Management)

29 pages, 11619 KB  
Article
MSA-GCN: Multistage Spatio-Temporal Aggregation Graph Convolutional Networks for Traffic Flow Prediction
by Ji Feng, Jiashuang Huang, Chang Guo and Zhenquan Shi
Mathematics 2024, 12(21), 3338; https://doi.org/10.3390/math12213338 - 24 Oct 2024
Viewed by 1905
Abstract
Timely and accurate traffic flow prediction is crucial for stabilizing road conditions, reducing environmental pollution, and mitigating economic losses. While current graph convolution methods have achieved certain results, they do not fully leverage the true advantages of graph convolution. There is still room for improvement in simultaneously addressing multi-graph convolution, optimizing graphs, and simulating road conditions. Based on this, this paper proposes MSA-GCN: Multistage Spatio-Temporal Aggregation Graph Convolutional Networks for Traffic Flow Prediction. This method overcomes the aforementioned issues by dividing the process into different stages and achieves promising prediction results. In the first stage, we construct a latent similarity adjacency matrix and address the randomness interference features in similarity features through two optimizations using the proposed ConvGRU Attention Layer (CGAL module) and the Causal Similarity Capture Module (CSC module), which includes Granger causality tests. In the second stage, we mine the potential correlation between roads using the Correlation Completion Module (CC module) to create a global correlation adjacency matrix as a complement for potential correlations. In the third stage, we utilize the proposed Auto-LRU autoencoder to pre-train various weather features, encoding them into the model’s prediction process to enhance its ability to simulate the real world and improve interpretability. Finally, in the fourth stage, we fuse these features and use a Bidirectional Gated Recurrent Unit (BiGRU) to model time dependencies, outputting the prediction results through a linear layer. Our model demonstrates a performance improvement of 29.33%, 27.03%, and 23.07% on three real-world datasets (PEMSD8, LOSLOOP, and SZAREA) compared to advanced baseline methods, and various ablation experiments validate the effectiveness of each stage and module. Full article
(This article belongs to the Topic New Advances in Granular Computing and Data Mining)

22 pages, 10393 KB  
Article
Exploring the Mechanisms and Preventive Strategies for the Progression from Idiopathic Pulmonary Fibrosis to Lung Cancer: Insights from Transcriptomics and Genetic Factors
by Kai Xie, Xiaoyan Tan, Zhe Chen, Yu Yao, Jing Luo, Haitao Ma, Yu Feng and Wei Jiang
Biomedicines 2024, 12(10), 2382; https://doi.org/10.3390/biomedicines12102382 - 18 Oct 2024
Cited by 3 | Viewed by 2528 | Correction
Abstract
Background: Idiopathic pulmonary fibrosis (IPF) leads to excessive fibrous tissue in the lungs, increasing the risk of lung cancer (LC) due to heightened fibroblast activity. Advances in nucleotide point mutation studies offer insights into fibrosis-to-cancer transitions. Methods: A two-sample Mendelian randomization (TSMR) approach was used to explore the causal relationship between IPF and LC. A weighted gene co-expression network analysis (WGCNA) identified shared gene modules related to immunogenic cell death (ICD) from transcriptomic datasets. Machine learning selected key genes, and a multi-layer perceptron (MLP) model was developed for IPF prediction and diagnosis. SMR and PheWAS were used to assess the expression of key genes concerning IPF risk. The impact of core genes on immune cells in the IPF microenvironment was explored, and in vivo experiments were conducted to examine the progression from IPF to LC. Results: The TSMR approach indicated a genetic predisposition for IPF progressing to LC. The predictive model, which includes eight ICD key genes, demonstrated a strong predictive capability (AUC = 0.839). The SMR analysis revealed that the elevated expression of MS4A4A was associated with an increased risk of IPF (OR = 1.275, 95% CI: 1.029–1.579; p = 0.026). The PheWAS did not identify any significant traits linked to MS4A4A expression. The rs9265808 locus in MS4A4A was identified as a susceptibility site for the progression of IPF to LC, with mutations potentially reprogramming lung neutrophils and increasing the LC risk. In vivo studies suggested MS4A4A as a promising therapeutic target. Conclusions: A causal link between IPF and LC was established, an effective prediction model was developed, and MS4A4A was highlighted as a therapeutic target to prevent IPF from progressing to LC. Full article
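Two-sample Mendelian randomization, as used above, commonly aggregates per-SNP Wald ratios with inverse-variance weighting. The abstract does not state which estimator the authors used, so the IVW sketch below is a generic illustration with simulated effect sizes, not the paper's data:

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance-weighted causal estimate for two-sample MR:
    each SNP's Wald ratio beta_out / beta_exp, weighted by the
    precision of the outcome association."""
    w = beta_exp**2 / se_out**2
    ratio = beta_out / beta_exp
    est = np.sum(w * ratio) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))   # standard error of the estimate
    return est, se

# Simulated instruments with a true causal effect of 0.5
# (illustrative numbers only).
rng = np.random.default_rng(3)
beta_exp = rng.uniform(0.1, 0.4, size=20)   # SNP -> exposure effects
se_out = np.full(20, 0.01)                  # outcome standard errors
beta_out = 0.5 * beta_exp + rng.normal(0, 0.01, size=20)
est, se = ivw_estimate(beta_exp, beta_out, se_out)
print(round(est, 2))  # close to the simulated effect of 0.5
```

The sign and magnitude of `est` are what supports statements like "a genetic predisposition for IPF progressing to LC" in the abstract.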
(This article belongs to the Special Issue Biology of Fibroblasts and Fibrosis)

17 pages, 11167 KB  
Article
Temporal Convolutional Network-Based Axle Load Estimation from Pavement Vibration Data
by Zeying Bian, Mengyuan Zeng, Hongduo Zhao, Mu Guo and Juewei Cai
Appl. Sci. 2023, 13(24), 13264; https://doi.org/10.3390/app132413264 - 14 Dec 2023
Cited by 3 | Viewed by 2320
Abstract
Measuring the axle loads of vehicles more accurately is a crucial step in weight enforcement and pavement condition assessment. This paper proposes a vibration-based method, with an extended sensing range, a high temporal sampling rate, and a dense spatial sampling rate, to estimate axle loads in concrete pavement using distributed optical vibration sensing (DOVS) technology. Temporal convolutional networks (TCNs), consisting of non-causal convolutional layers and a concatenation layer, were trained on over 6000 samples of vibration data paired with ground-truth axle loads, allowing the TCN to learn the complex inverse mapping between pavement structural inputs and outputs. The performance of the proposed method was calibrated in two field tests under various conditions. The results demonstrate that the method estimated axle loads within 11.5% error under diverse circumstances covering different pavement types and loads moving at speeds from 0 to 35 m/s. The proposed method shows significant promise for axle load reconstruction and estimation: its error closely approaches the 10% threshold specified by LTPP, and it aligns with the COST-323 standard at an error level up to category C. This indicates its capability to support assessment and decision-making related to pavement structure conditions. Full article
(This article belongs to the Special Issue New Technology for Road Surface Detection)

26 pages, 36202 KB  
Article
Physics-Informed Neural Networks-Based Salinity Modeling in the Sacramento–San Joaquin Delta of California
by Dong Min Roh, Minxue He, Zhaojun Bai, Prabhjot Sandhu, Francis Chung, Zhi Ding, Siyu Qi, Yu Zhou, Raymond Hoang, Peyman Namadi, Bradley Tom and Jamie Anderson
Water 2023, 15(13), 2320; https://doi.org/10.3390/w15132320 - 21 Jun 2023
Cited by 6 | Viewed by 4369
Abstract
Salinity in estuarine environments has traditionally been simulated using process-based models. More recently, data-driven models, including artificial neural networks (ANNs), have been developed for simulating salinity. Compared to process-based models, ANNs yield faster salinity simulations with comparable accuracy. However, ANNs are often purely data-driven and not constrained by physical laws, making it difficult to interpret the causality between input and output data. Physics-informed neural networks (PINNs) are emerging machine-learning models that integrate the benefits of both process-based models and data-driven ANNs. PINNs embed knowledge of the physical laws, in the form of the partial differential equations (PDEs) that govern the dynamics of salinity transport, into the training of the neural networks. This study explores the application of PINNs to salinity modeling by incorporating the one-dimensional advection–dispersion salinity transport equation into the neural networks. Two PINN models are explored, namely PINNs and FoNets: PINNs are multilayer perceptrons (MLPs) that incorporate the advection–dispersion equation, while FoNets extend PINNs with an additional encoding layer. The exploration is exemplified at four study locations in the Sacramento–San Joaquin Delta of California: Pittsburg, Chipps Island, Port Chicago, and Martinez. Both PINN models and benchmark ANNs are trained and tested using simulated daily salinity from 1991 to 2015 at the study locations. Results indicate that PINNs and FoNets outperform the benchmark ANNs in simulating salinity at the study locations: they have lower absolute biases and higher correlation coefficients and Nash–Sutcliffe efficiency values than the ANNs. In addition, the PINN models overcome some limitations of purely data-driven ANNs (e.g., neuron saturation) and generate more realistic salinity simulations. Overall, this study demonstrates the potential of PINNs to supplement existing process-based and ANN models in providing accurate and timely salinity estimation. Full article
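The physical constraint such a salinity PINN embeds is the 1-D advection–dispersion equation c_t + u·c_x = D·c_xx. As a numerical sanity check of that residual, the sketch below evaluates it by finite differences on a known analytic plume; a PINN would instead use automatic differentiation on the network output, and `U`, `D`, and the Gaussian solution here are illustrative assumptions:

```python
import numpy as np

# Analytic plume: an exact solution of c_t + U*c_x = D*c_xx.
U, D = 0.8, 0.05
def c(x, t):
    return np.exp(-(x - U * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

def pde_residual(x, t, dx=1e-4, dt=1e-4):
    """c_t + U*c_x - D*c_xx via central differences. A PINN drives this
    residual toward zero as part of its training loss."""
    c_t = (c(x, t + dt) - c(x, t - dt)) / (2 * dt)
    c_x = (c(x + dx, t) - c(x - dx, t)) / (2 * dx)
    c_xx = (c(x + dx, t) - 2 * c(x, t) + c(x - dx, t)) / dx**2
    return c_t + U * c_x - D * c_xx

x = np.linspace(0.0, 2.0, 50)
r = pde_residual(x, t=1.0)
print(np.abs(r).max())  # small: only finite-difference truncation error remains
```

Because the exact solution satisfies the PDE, the residual is near machine-level noise; a trained PINN's output is pushed toward the same property at the data points.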

14 pages, 1214 KB  
Article
Deep-Learning-Based Sequence Causal Long-Term Recurrent Convolutional Network for Data Fusion Using Video Data
by DaeHyeon Jeon and Min-Suk Kim
Electronics 2023, 12(5), 1115; https://doi.org/10.3390/electronics12051115 - 24 Feb 2023
Cited by 6 | Viewed by 3607
Abstract
The purpose of AI-based schemes in intelligent systems is to advance and optimize system performance. Most intelligent systems adopt sequential data types derived from such systems; real-time video data, for example, are continuously updated as a sequence to make the predictions needed for efficient system performance. Deep-learning-based network architectures such as long short-term memory (LSTM), data fusion, two-stream networks, and temporal convolutional networks (TCNs) are generally used for sequence data fusion to enhance system efficiency and robustness. In this paper, we propose a deep-learning-based neural network architecture for non-fixed data that uses both a causal convolutional neural network (CNN) and a long-term recurrent convolutional network (LRCN). Causal CNNs and LRCNs use incorporated convolutional layers for feature extraction, so both architectures can process sequential data such as time series or video, which makes them applicable in a variety of settings. Both architectures extract features from the input sequence data to reduce its dimensionality, capture the important information, and learn hierarchical representations for effective sequence processing tasks. We have also adopted the concept of a series compact convolutional recurrent neural network (SCCRNN), a neural network architecture designed for processing sequential data that compactly combines convolutional and recurrent layers, reducing the number of parameters and memory usage while maintaining high accuracy. The architecture is well suited to continuously incoming sequential video data and brings together the advantages of LSTM-based and CNN-based networks. To verify this method, we evaluated it through a sequence learning model, with the network parameters and memory required in real environments, on the UCF-101 dataset, an action recognition dataset of realistic action videos collected from YouTube spanning 101 action categories. The results show that the proposed sequence causal long-term recurrent convolutional network (SCLRCN) provides a performance improvement of approximately 12% or more compared with the existing models (LRCN and TCN). Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Image Processing)

22 pages, 12511 KB  
Article
Full-Coupled Convolutional Transformer for Surface-Based Duct Refractivity Inversion
by Jiajing Wu, Zhiqiang Wei, Jinpeng Zhang, Yushi Zhang, Dongning Jia, Bo Yin and Yunchao Yu
Remote Sens. 2022, 14(17), 4385; https://doi.org/10.3390/rs14174385 - 3 Sep 2022
Cited by 5 | Viewed by 2286
Abstract
A surface-based duct (SBD) is an abnormal atmospheric structure with a low probability of occurrence but a strong ability to trap electromagnetic waves. However, existing research is based on the assumption that the surface duct is homogeneous along the range direction, which leads to low productivity and large errors when applied in a real marine environment. To alleviate these issues, we propose a framework for the inversion of inhomogeneous SBD M-profiles based on a full-coupled convolutional Transformer (FCCT) deep learning network. We first designed a one-dimensional residual dilated causal convolution autoencoder to extract feature representations from the high-dimensional, range-direction-inhomogeneous M-profile. Second, to improve efficiency and precision, the proposed FCCT incorporates dilated causal convolutional layers, which provide exponential receptive-field growth over the M-profile and help Transformer-like models enlarge the receptive field for each range-direction-inhomogeneous SBD M-profile. We tested the proposed method on two sets of simulated sea clutter power data, where the inversion accuracy reached 96.99% and 97.69%, outperforming the existing baseline methods. Full article

18 pages, 1517 KB  
Article
SC-CAN: Spectral Convolution and Channel Attention Network for Wheat Stress Classification
by Wijayanti Nurul Khotimah, Farid Boussaid, Ferdous Sohel, Lian Xu, David Edwards, Xiu Jin and Mohammed Bennamoun
Remote Sens. 2022, 14(17), 4288; https://doi.org/10.3390/rs14174288 - 30 Aug 2022
Cited by 13 | Viewed by 3236
Abstract
Biotic and abiotic plant stress (e.g., frost, fungi, diseases) can significantly impact crop production. It is thus essential to detect such stress at an early stage before visual symptoms and damage become apparent. To this end, this paper proposes a novel deep learning method, called Spectral Convolution and Channel Attention Network (SC-CAN), which exploits the difference in spectral responses of healthy and stressed crops. The proposed SC-CAN method comprises two main modules: (i) a spectral convolution module, which consists of dilated causal convolutional layers stacked in a residual manner to capture the spectral features; (ii) a channel attention module, which consists of a global pooling layer and fully connected layers that compute inter-relationship between feature map channels before scaling them based on their importance level (attention score). Unlike standard convolution, which focuses on learning local features, the dilated convolution layers can learn both local and global features. These layers also have long receptive fields, making them suitable for capturing long dependency patterns in hyperspectral data. However, because not all feature maps produced by the dilated convolutional layers are important, we propose a channel attention module that weights the feature maps according to their importance level. We used SC-CAN to classify salt stress (i.e., abiotic stress) on four datasets (Chinese Spring (CS), Aegilops columnaris (co(CS)), Ae. speltoides auchery (sp(CS)), and Kharchia datasets) and Fusarium head blight disease (i.e., biotic stress) on Fusarium dataset. Reported experimental results show that the proposed method outperforms existing state-of-the-art techniques with an overall accuracy of 83.08%, 88.90%, 82.44%, 82.10%, and 82.78% on CS, co(CS), sp(CS), Kharchia, and Fusarium datasets, respectively. Full article
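The channel attention module above follows a standard recipe: global pooling per feature map, two fully connected layers, and a per-channel score that rescales the maps. A minimal numpy sketch of that recipe, with the bottleneck ratio and random stand-in weights as illustrative assumptions (not the paper's trained parameters):

```python
import numpy as np

def channel_attention(fmaps, W1, W2):
    """Global average pooling over each feature map, a two-layer FC
    bottleneck, then a sigmoid score that rescales each channel by
    its importance."""
    pooled = fmaps.mean(axis=1)                    # (C,) global average pool
    hidden = np.maximum(W1 @ pooled, 0.0)          # FC + ReLU bottleneck
    scores = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid attention scores
    return fmaps * scores[:, None], scores

rng = np.random.default_rng(4)
C, L = 16, 64                            # channels x spectral length
fmaps = rng.normal(size=(C, L))          # maps from the dilated conv stack
W1 = rng.normal(size=(C // 4, C)) * 0.1  # squeeze to C/4 channels
W2 = rng.normal(size=(C, C // 4)) * 0.1  # expand back to C channels
out, scores = channel_attention(fmaps, W1, W2)
print(out.shape)  # (16, 64)
```

Each score lies strictly in (0, 1), so unimportant spectral feature maps are attenuated rather than removed, matching the weighting behavior the abstract describes.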
(This article belongs to the Special Issue Remote Sensing of Crop Lands and Crop Production)
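The two building blocks this abstract describes, dilated causal convolution and squeeze-and-excitation-style channel attention, can be illustrated with a minimal NumPy sketch. This is not the authors' code; the function names and the softmax attention scoring are illustrative assumptions that follow the abstract's description (global pooling followed by a per-channel importance score):

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation):
    """Causal 1-D convolution with dilation: output[t] depends only on
    inputs at t, t-d, t-2d, ... (left zero-padding preserves length)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(kernel[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def channel_attention(feature_maps):
    """Channel attention: global average pooling per channel, softmax
    importance scores, then rescaling of each channel by its score."""
    pooled = feature_maps.mean(axis=1)               # one value per channel
    scores = np.exp(pooled) / np.exp(pooled).sum()   # softmax over channels
    return feature_maps * scores[:, None], scores
```

Stacking several such convolutions with growing dilations (1, 2, 4, ...) is what gives the spectral convolution module its long receptive field over the spectral bands.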

13 pages, 3287 KB  
Article
Short-Term Load Forecasting Algorithm Based on LST-TCN in Power Distribution Network
by Wanxing Sheng, Keyan Liu, Dongli Jia, Shuo Chen and Rongheng Lin
Energies 2022, 15(15), 5584; https://doi.org/10.3390/en15155584 - 1 Aug 2022
Cited by 20 | Viewed by 3509
Abstract
In this paper, a neural network model called the Long Short-Term Temporal Convolutional Network (LST-TCN) is proposed for short-term load forecasting. The model builds on the 1-D fully convolutional network, causal convolution, and dilated convolution structures, with residual connections added in the convolutional layers. Additionally, the model uses two networks to extract features from long-term data and periodic short-term data, respectively, and fuses the two feature sets to calculate the final predicted value. Long Short-Term Memory (LSTM) and Temporal Convolutional Network (TCN) models are used as comparison algorithms, trained alongside LST-TCN to forecast daily electricity load 3 h, 6 h, 12 h, 24 h, and 48 h ahead. Three performance metrics, pinball loss, root mean squared error (RMSE), and mean absolute error (MAE), were used to evaluate the algorithms. The results on the test set show that LST-TCN generalizes better and yields smaller prediction errors: a pinball loss of 1.2453 for the 3 h ahead forecast and 1.4885 for the 48 h ahead forecast. Overall, LST-TCN performs better than LSTM, TCN, and the other algorithms. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
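Of the three metrics this abstract reports, the pinball (quantile) loss is the least standard, so a short sketch may help. This is an illustrative implementation, not the authors' code; the function names are my own:

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantile=0.5):
    """Pinball (quantile) loss: under- and over-prediction are penalized
    asymmetrically according to the target quantile. At quantile = 0.5 it
    equals half the mean absolute error."""
    diff = y_true - y_pred
    return np.mean(np.maximum(quantile * diff, (quantile - 1) * diff))

def rmse(y_true, y_pred):
    """Root mean squared error, the second metric reported in the paper."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```

With a quantile above 0.5, under-predicting the load costs more than over-predicting it by the same amount, which is why the metric suits load forecasting, where supply shortfalls are riskier than surpluses.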

18 pages, 1631 KB  
Article
Classification and Prediction on the Effects of Nutritional Intake on Overweight/Obesity, Dyslipidemia, Hypertension and Type 2 Diabetes Mellitus Using Deep Learning Model: 4–7th Korea National Health and Nutrition Examination Survey
by Hyerim Kim, Dong Hoon Lim and Yoona Kim
Int. J. Environ. Res. Public Health 2021, 18(11), 5597; https://doi.org/10.3390/ijerph18115597 - 24 May 2021
Cited by 50 | Viewed by 8230
Abstract
Few studies have been conducted to classify and predict the influence of nutritional intake on overweight/obesity, dyslipidemia, hypertension and type 2 diabetes mellitus (T2DM) based on deep learning such as a deep neural network (DNN). The present study aims to classify and predict associations between nutritional intake and the risk of overweight/obesity, dyslipidemia, hypertension and T2DM by developing a DNN model, and to compare the DNN model with the most popular machine learning models, such as logistic regression and decision tree. Subjects aged from 40 to 69 years in the 4–7th (2007 through 2018) Korea National Health and Nutrition Examination Survey (KNHANES) were included. Diagnostic criteria for dyslipidemia (n = 10,731), hypertension (n = 10,991), T2DM (n = 3889) and overweight/obesity (n = 10,980) were set as dependent variables, and nutritional intakes as independent variables. A DNN model comprising one input layer with 7 nodes, three hidden layers with 30, 12 and 8 nodes, respectively, and one output layer with one node was implemented in Python using Keras with a TensorFlow backend. The DNN used the binary cross-entropy loss function for binary classification with the Adam optimizer, and dropout was applied to each hidden layer to avoid overfitting. Structural equation modelling (SEM) was also performed to simultaneously estimate the multivariate causal associations between nutritional intake and overweight/obesity, dyslipidemia, hypertension and T2DM. With five-fold cross-validation, the DNN model showed higher prediction accuracy (0.58654 for dyslipidemia, 0.79958 for hypertension, 0.80896 for T2DM and 0.62496 for overweight/obesity) than the two other machine learning models: prediction accuracies were 0.58448, 0.79929, 0.80818 and 0.62486, respectively, with logistic regression, and 0.52148, 0.66773, 0.71587 and 0.54026, respectively, with a decision tree. This study observed that a DNN model with three hidden layers of 30, 12 and 8 nodes had better prediction accuracy than the two conventional machine learning models, logistic regression and decision tree. Full article
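The 7-30-12-8-1 architecture this abstract specifies can be sketched as a plain NumPy forward pass. This is a minimal illustration under stated assumptions, not the authors' Keras code: the random initialization and ReLU hidden activations are my assumptions (the abstract specifies only the layer sizes, sigmoid-style binary output, binary cross-entropy, Adam, and dropout, the latter two being training-time concerns omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the paper: 7 input nodes, hidden layers of 30, 12
# and 8 nodes, and one sigmoid output node for binary classification.
sizes = [7, 30, 12, 8, 1]
weights = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Inference-time forward pass; dropout is applied only during
    training in the paper, so it is omitted here."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return sigmoid(h @ weights[-1] + biases[-1])

def binary_cross_entropy(y, p, eps=1e-12):
    """The loss the paper trains with (minimized by Adam there)."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```

Each subject's 7 nutritional-intake variables enter as one input row, and the sigmoid output is the predicted probability of the disease outcome.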
