
Search Results (30)

Search Parameters:
Keywords = C-DCNN model

21 pages, 2842 KB  
Article
Drug–Target Interaction Prediction via Dual-Interaction Fusion
by Xingyang Li, Zepeng Li, Bo Wei and Yuni Zeng
Molecules 2026, 31(3), 498; https://doi.org/10.3390/molecules31030498 - 31 Jan 2026
Abstract
Accurate prediction of drug–target interaction (DTI) is crucial for modern drug discovery. However, experimental assays are costly, and many existing computational models still face challenges in capturing multi-scale features, fusing cross-modal information, and modeling fine-grained drug–protein interactions. To address these challenges, we propose Gated-Attention Dual-Fusion Drug–Target Interaction (GADFDTI), whose core contribution is a fusion module that constructs an explicit atom–residue similarity field, refines it with a lightweight 2D neighborhood operator, and performs gated bidirectional aggregation to obtain interaction-aware representations. To provide strong and width-aligned unimodal inputs to this fusion module, we integrate a compact multi-scale dense GCN for drug graphs and a masked multi-scale self-attention protein encoder augmented by a narrow 1D-CNN branch for local motif aggregation. Experiments on two benchmarks, Human and C. elegans, show that GADFDTI consistently outperforms several recently proposed DTI models, achieving AUC values of 0.986 and 0.996, respectively, with corresponding gains in precision and recall. A SARS-CoV-2 case study further demonstrates that GADFDTI can reliably prioritize clinically supported antiviral agents while suppressing inactive compounds, indicating its potential as an efficient in silico prescreening tool for lead-target discovery. Full article
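As a hedged, minimal pure-Python sketch (not the authors' implementation; all dimensions, values, and the scalar gate are illustrative assumptions), the gated bidirectional aggregation over an explicit atom–residue similarity field described above might look like:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gated_bidirectional_aggregate(atoms, residues, gate=0.5):
    """Toy interaction-aware fusion: build an atom-residue similarity
    field, attend over it in both directions, and mix each side's
    original embedding with its cross-modal context via a scalar gate."""
    # Explicit similarity field S[i][j] = <atom_i, residue_j>
    S = [[dot(a, r) for r in residues] for a in atoms]
    # Drug view: each atom attends over residues
    drug_ctx = []
    for i, a in enumerate(atoms):
        w = softmax(S[i])
        ctx = [sum(wj * r[k] for wj, r in zip(w, residues)) for k in range(len(a))]
        drug_ctx.append([gate * a[k] + (1 - gate) * ctx[k] for k in range(len(a))])
    # Protein view: each residue attends over atoms
    prot_ctx = []
    for j, r in enumerate(residues):
        col = [S[i][j] for i in range(len(atoms))]
        w = softmax(col)
        ctx = [sum(wi * a[k] for wi, a in zip(w, atoms)) for k in range(len(r))]
        prot_ctx.append([gate * r[k] + (1 - gate) * ctx[k] for k in range(len(r))])
    return drug_ctx, prot_ctx

atoms = [[1.0, 0.0], [0.0, 1.0]]
residues = [[1.0, 1.0], [0.5, -0.5]]
d, p = gated_bidirectional_aggregate(atoms, residues)
print(len(d), len(p))  # one fused vector per atom / per residue
```

In the paper the gate is presumably learned per feature; a scalar stands in here only to show the mixing structure.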

24 pages, 4416 KB  
Article
A Gas Production Classification Method for Cable Insulation Materials Based on Deep Convolutional Neural Networks
by Zihao Wang, Yinan Chai, Jingwen Gong, Wenbin Xie, Yidong Chen and Wei Gong
Polymers 2026, 18(2), 155; https://doi.org/10.3390/polym18020155 - 7 Jan 2026
Abstract
As a non-invasive diagnostic technique, evolved gas analysis (EGA) holds significant value in assessing the insulation conditions of critical equipment such as power cables. Current analytical methods face two major challenges: insulation materials may undergo multiple aging mechanisms simultaneously, leading to interfering characteristic gases; and traditional approaches lack the multi-label recognition capability to address concurrent fault patterns when processing mixed-gas data. These limitations hinder the accuracy and comprehensiveness of insulation condition assessment, underscoring the urgent need for intelligent analytical methods. This study proposes a deep convolutional neural network (DCNN)-based multi-label classification framework to accurately identify the gas generation characteristics of five typical power cable insulation materials—ethylene propylene diene monomer (EPDM), ethylene-vinyl acetate copolymer (EVA), silicone rubber (SR), polyamide (PA), and cross-linked polyethylene (XLPE)—under fault conditions. The method leverages concentration data of six characteristic gases (CO2, C2H4, C2H6, CH4, CO, and H2), integrating modern data analysis and deep learning techniques, including logarithmic transformation, Z-score normalization, multi-scale convolution, residual connections, channel attention mechanisms, and weighted binary cross-entropy loss functions, to enable simultaneous prediction of multiple degradation states or concurrent fault pattern combinations. A gas dataset covering diverse materials and operating conditions was constructed, and comparative experiments were conducted to validate the proposed DCNN model’s performance; the results demonstrate that the model can effectively learn material-specific gas generation patterns and accurately identify complex label co-occurrence scenarios. This approach provides technical support for improving the accuracy of insulation condition assessment in power cable equipment. Full article
(This article belongs to the Section Artificial Intelligence in Polymer Science)
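Two ingredients named in the abstract above are easy to make concrete: logarithmic transformation plus Z-score normalization of gas concentrations, and a weighted binary cross-entropy for multi-label output. The sketch below is illustrative only (gas values and label weights are made up, not the paper's):

```python
import math

def weighted_bce(y_true, y_prob, pos_weight, eps=1e-7):
    """Multi-label weighted binary cross-entropy: each label's
    positive term is scaled by pos_weight[k] to counter class
    imbalance across concurrent fault patterns."""
    total = 0.0
    for yt, yp, w in zip(y_true, y_prob, pos_weight):
        yp = min(max(yp, eps), 1 - eps)  # clip for numerical safety
        total += -(w * yt * math.log(yp) + (1 - yt) * math.log(1 - yp))
    return total / len(y_true)

def log_zscore(xs):
    """Log-transform then z-score normalize a vector of gas concentrations."""
    logs = [math.log1p(x) for x in xs]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    sd = math.sqrt(var) or 1.0
    return [(v - mu) / sd for v in logs]

# Six characteristic gases: CO2, C2H4, C2H6, CH4, CO, H2 (ppm, made up)
features = log_zscore([1200.0, 35.0, 18.0, 60.0, 300.0, 500.0])
loss = weighted_bce([1, 0, 1], [0.9, 0.2, 0.6], pos_weight=[2.0, 1.0, 3.0])
print(round(loss, 4))
```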

30 pages, 7475 KB  
Article
Agentic AI Framework to Automate Traditional Farming for Smart Agriculture
by Muhammad Murad, Muhammad Ahmed, Nizam ul din, Muhammad Farrukh Shahid, Shahbaz Siddiqui, Daniel Byers, Muhammad Hassan Tanveer and Razvan C. Voicu
AgriEngineering 2026, 8(1), 8; https://doi.org/10.3390/agriengineering8010008 - 1 Jan 2026
Abstract
Artificial intelligence (AI) shows great promise for transforming the agriculture sector and can enable the development of many modern farming practices over conventional methods. Nowadays, AI agents and agentic AI have attained popularity due to their autonomous structure and working mechanism. This research work proposes an agentic AI framework that integrates multiple agents developed for farming land to promote climate-smart agriculture and support United Nations (UN) sustainable development goals (SDGs). The developed structure has four agents: Agent A for monitoring soil properties, Agent B for weather sensing, Agent C for disease detection vision sensing in rice crops, and Agent D, a multi-agent supervisor agent chatbot connected with the other agents. The overall objective was to connect all agents on a single platform to obtain sensor data and perform a predictive analysis. This will help farmers and landowners obtain information about weather conditions, soil properties, and vision-based disease detection so that appropriate measures can be taken on agricultural land for rice crops. For soil properties (nitrogen, phosphorus, and potassium) from Agent A and climate data (temperature and humidity) from Agent B, we deployed the long short-term memory (LSTM), gated recurrent unit (GRU), and one-dimensional convolutional neural network (1D-CNN) predictive models, which achieved accuracies of 93.4%, 94%, and 96% for Agent A; a 0.27 mean absolute error (MAE) for temperature; and a 2.9 MAE for humidity on the Agent B data. For Agent C, we used vision transformer (ViT), MobileViT, and RiceNet (with a diffusion model layer as a feature extractor) models to detect disease. The models achieved accuracies of 95%, 98.5%, and 85.4% during training, respectively. Overall, the proposed framework demonstrates how agentic AI can be used to transform conventional farming practices into a digital process, thereby supporting smart agriculture. Full article
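The supervisor role of Agent D can be sketched as simple topic-based dispatch, with the MAE metric quoted for Agent B shown alongside. This is a toy illustration under assumed agent names and payloads, not the paper's framework:

```python
def mae(y_true, y_pred):
    """Mean absolute error, the metric reported for temperature/humidity."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

class Supervisor:
    """Toy multi-agent supervisor: routes a query to the agent
    registered for its topic, mirroring the Agent D role."""
    def __init__(self):
        self.agents = {}

    def register(self, topic, handler):
        self.agents[topic] = handler

    def ask(self, topic, payload):
        if topic not in self.agents:
            return "no agent for topic: " + topic
        return self.agents[topic](payload)

sup = Supervisor()
# Hypothetical soil (NPK) and weather sub-agents
sup.register("soil", lambda p: {"npk_ok": p["n"] > 10 and p["p"] > 5 and p["k"] > 5})
sup.register("weather", lambda p: {"temp_mae": mae(p["true"], p["pred"])})
print(sup.ask("soil", {"n": 20, "p": 8, "k": 9}))
print(round(sup.ask("weather", {"true": [20.0, 21.0], "pred": [20.2, 20.6]})["temp_mae"], 2))
```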

21 pages, 1827 KB  
Article
A Multi-Model Fusion Framework for Aeroengine Remaining Useful Life Prediction
by Bing Tan, Yang Zhang, Xia Wei, Lei Wang, Yanming Chang, Li Zhang, Yingzhe Fan and Caio Graco Rodrigues Leandro Roza
Eng 2025, 6(9), 210; https://doi.org/10.3390/eng6090210 - 1 Sep 2025
Cited by 2
Abstract
As the core component of aircraft systems, aeroengines require accurate Remaining Useful Life (RUL) prediction to ensure flight safety, which serves as a key part of Prognostics and Health Management (PHM). Traditional RUL prediction methods primarily fall into two main categories: physics-based and data-driven approaches. Physics-based methods mainly rely on extensive prior knowledge, limiting their scalability, while data-driven methods (including statistical analysis and machine learning) struggle with handling high-dimensional data and suboptimal modeling of multi-scale temporal dependencies. To address these challenges and enhance prediction accuracy and robustness, we propose a novel hybrid deep learning framework (CLSTM-TCN) integrating 2D Convolutional Neural Network (2D-CNN), Long Short-Term Memory (LSTM) network, and Temporal Convolutional Network (TCN) modules. The CLSTM-TCN framework follows a progressive feature refinement logic: 2D-CNN first extracts short-term local features and inter-feature interactions from input data; the LSTM network then models long-term temporal dependencies in time series to strengthen global temporal dynamics representation; and TCN ultimately captures multi-scale temporal features via dilated convolutions, overcoming the limitations of the LSTM network in long-range dependency modeling while enabling parallel computing. Validated on the NASA C-MAPSS data set (focusing on FD001), the CLSTM-TCN model achieves a root mean square error (RMSE) of 13.35 and a score function (score) of 219. Compared to the CNN-LSTM, CNN-TCN, and LSTM-TCN models, it reduces the RMSE by 27.94%, 30.79%, and 30.88%, respectively, and significantly outperforms the traditional single-model methods (e.g., standalone CNN or LSTM network). Notably, the model maintains stability across diverse operational conditions, with RMSE fluctuations capped within 15% for all test cases. 
Ablation studies confirm the synergistic effect of each module: removing 2D-CNN, LSTM, or TCN leads to an increase in the RMSE and score. This framework effectively handles high-dimensional data and multi-scale temporal dependencies, providing an accurate and robust solution for aeroengine RUL prediction. While current performance is validated under single operating conditions, ongoing efforts to optimize hyperparameter tuning, enhance adaptability to complex operating scenarios, and integrate uncertainty analysis will further strengthen its practical value in aircraft health management. Full article
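The RMSE and score metrics quoted above can be reproduced with a short sketch. The asymmetric C-MAPSS scoring function (late RUL predictions penalized more heavily than early ones) is the standard one used with this dataset; the RUL values below are made up for illustration:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error over predicted vs. true RUL."""
    n = len(y_true)
    return math.sqrt(sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / n)

def cmapss_score(y_true, y_pred):
    """Standard C-MAPSS scoring function: for d = prediction - truth,
    late predictions (d > 0) incur exp(d/10)-1, early ones exp(-d/13)-1,
    so overestimating remaining life costs more."""
    s = 0.0
    for t, p in zip(y_true, y_pred):
        d = p - t
        s += math.exp(-d / 13.0) - 1.0 if d < 0 else math.exp(d / 10.0) - 1.0
    return s

true_rul = [112, 98, 69]
pred_rul = [110, 101, 65]
print(round(rmse(true_rul, pred_rul), 2), round(cmapss_score(true_rul, pred_rul), 2))
```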

24 pages, 4427 KB  
Article
Three-Dimensional Convolutional Neural Networks (3D-CNN) in the Classification of Varieties and Quality Assessment of Soybean Seeds (Glycine max L. Merrill)
by Piotr Rybacki, Kiril Bahcevandziev, Diego Jarquin, Ireneusz Kowalik, Andrzej Osuch, Ewa Osuch and Janetta Niemann
Agronomy 2025, 15(9), 2074; https://doi.org/10.3390/agronomy15092074 - 28 Aug 2025
Abstract
The precise identification, classification, sorting, and rapid and accurate quality assessment of soybean seeds are extremely important in terms of the continuity of agricultural production, varietal purity, seed processing, protein extraction, and food safety. Currently, commonly used methods for the identification and quality assessment of soybean seeds include morphological analysis, chemical analysis, protein electrophoresis, liquid chromatography, spectral analysis, and image analysis. The use of image analysis and artificial intelligence is the aim of the presented research, in which a method for the automatic classification of soybean varieties, the assessment of the degree of damage, and the identification of geometric features of soybean seeds based on numerical models obtained using a 3D scanner has been proposed. Unlike traditional two-dimensional images, which only represent height and width, 3D imaging adds a third dimension, allowing for a more realistic representation of the shape of the seeds. The research was conducted on soybean seeds with a moisture content of 13%, and the seeds were stored in a room with a temperature of 20–23 °C and air humidity of 60%. Individual soybean seeds were scanned to create 3D models, allowing for the measurement of their geometric parameters, assessment of texture, evaluation of damage, and identification of characteristic varietal features. The developed 3D-CNN network model comprised an architecture consisting of an input layer, three hidden layers, and one output layer with a single neuron. The aim of the conducted research is to design a new, three-dimensional 3D-CNN architecture, the main task of which is the classification of soybean seeds. For the purposes of network analysis and testing, 22 input criteria were defined, with a hierarchy of their importance. 
The training, testing, and validation database of the SB3D-NET network consisted of 3D models obtained as a result of scanning individual soybean seeds, 100 for each variety. The accuracy of the training process of the proposed SB3D-NET model for the qualitative classification of 3D models of soybean seeds, based on the adopted criteria, was 95.54%, and the accuracy of its validation was 90.74%. The relative loss value during the training process of the SB3D-NET model was 18.53%, and during its validation process, it was 37.76%. The proposed SB3D-NET neural network model for all twenty-two criteria achieves values of global error (GE) of prediction and classification of seeds at the level of 0.0992. Full article

22 pages, 6640 KB  
Article
IonoBench: Evaluating Spatiotemporal Models for Ionospheric Forecasting Under Solar-Balanced and Storm-Aware Conditions
by Mert Can Turkmen, Yee Hui Lee and Eng Leong Tan
Remote Sens. 2025, 17(15), 2557; https://doi.org/10.3390/rs17152557 - 23 Jul 2025
Cited by 2
Abstract
Accurate modeling of ionospheric variability is critical for space weather forecasting and GNSS applications. While machine learning approaches have shown promise, progress is hindered by the absence of standardized benchmarking practices and narrow test periods. In this paper, we take the first step toward fostering rigorous and reproducible evaluation of AI models for ionospheric forecasting by introducing IonoBench: a benchmarking framework that employs a stratified data split, balancing solar intensity across subsets while preserving 16 high-impact geomagnetic storms (Dst ≤ −100 nT) for targeted stress testing. Using this framework, we benchmark a field-specific model (DCNN) against state-of-the-art spatiotemporal architectures (SwinLSTM and SimVPv2) using the climatological IRI 2020 model as a baseline reference. DCNN, though effective under quiet conditions, exhibits significant degradation during elevated solar and storm activity. SimVPv2 consistently provides the best performance, with superior evaluation metrics and stable error distributions. Compared to the C1PG baseline (the CODE 1-day forecast product), SimVPv2 achieves a notable RMSE reduction of up to 32.1% across various subsets under diverse solar conditions. The reported results highlight the value of cross-domain architectural transfer and comprehensive evaluation frameworks in ionospheric modeling. With IonoBench, we aim to provide an open-source foundation for reproducible comparisons, supporting more meticulous model evaluation and helping to bridge the gap between ionospheric research and modern spatiotemporal deep learning. Full article
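Two mechanics from the abstract are simple to make concrete: holding out storm samples by a Dst threshold for stress testing, and reporting relative RMSE reduction against a baseline. A hedged sketch with invented sample records and numbers (not the paper's data):

```python
def split_storms(samples, dst_threshold=-100):
    """Hold out geomagnetic-storm samples (Dst <= threshold, in nT)
    for targeted stress testing; the rest form the regular pool."""
    storms = [s for s in samples if s["dst"] <= dst_threshold]
    regular = [s for s in samples if s["dst"] > dst_threshold]
    return regular, storms

def rmse_reduction_pct(rmse_baseline, rmse_model):
    """Relative RMSE improvement over a baseline forecast, in percent."""
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline

samples = [{"dst": -150}, {"dst": -20}, {"dst": -110}, {"dst": 5}]
regular, storms = split_storms(samples)
print(len(regular), len(storms))          # 2 2
print(round(rmse_reduction_pct(5.0, 3.4), 1))  # 32.0% improvement
```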

28 pages, 4356 KB  
Article
Hyperspectral Image Classification Based on Fractional Fourier Transform
by Jing Liu, Lina Lian, Yuanyuan Li and Yi Liu
Remote Sens. 2025, 17(12), 2065; https://doi.org/10.3390/rs17122065 - 15 Jun 2025
Cited by 1
Abstract
To effectively utilize the rich spectral information of hyperspectral remote sensing images (HRSIs), the fractional Fourier transform (FRFT) feature of HRSIs is proposed to reflect the time-domain and frequency-domain characteristics of a spectral pixel simultaneously, and an FRFT order selection criterion is also proposed based on maximizing separability. Firstly, FRFT is applied to the spectral pixels, and the amplitude spectrum is taken as the FRFT feature of HRSIs. The FRFT feature is mixed with the pixel spectrum to form the presented spectral and fractional Fourier transform mixed feature (SF2MF), which contains time–frequency mixing information and spectral information of pixels. K-nearest neighbor, logistic regression, and random forest classifiers are used to verify the superiority of the proposed feature. A 1-dimensional convolutional neural network (1D-CNN) and a two-branch CNN (Two-CNNSF2MF-Spa) are designed to extract the depth SF2MF feature and the SF2MF-spatial joint feature, respectively. Moreover, to compensate for the defect that CNN cannot effectively capture the long-range features of spectral pixels, a long short-term memory (LSTM) network is introduced to be combined with CNN to form a two-branch network C-CLSTMSF2MF for extracting deeper and more efficient fusion features. A 3D-CNNSF2MF model is designed, which first performs principal component analysis on the spa-SF2MF cube containing spatial information and then feeds it into the 3-dimensional convolutional neural network 3D-CNNSF2MF to extract the SF2MF-spatial joint feature effectively. The experimental results of three real HRSIs show that the presented mixed feature SF2MF can effectively improve classification accuracy. Full article
(This article belongs to the Section Remote Sensing Image Processing)
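The order-selection idea above (pick the FRFT order that maximizes class separability) can be illustrated with a Fisher-style ratio on a scalar feature. The criterion and candidate orders below are assumptions for illustration, not the paper's exact criterion:

```python
def fisher_ratio(class_a, class_b):
    """Between-class over within-class scatter for one scalar feature;
    a simple separability proxy."""
    ma = sum(class_a) / len(class_a)
    mb = sum(class_b) / len(class_b)
    va = sum((x - ma) ** 2 for x in class_a) / len(class_a)
    vb = sum((x - mb) ** 2 for x in class_b) / len(class_b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def pick_order(features_by_order, class_a_idx, class_b_idx):
    """Pick the candidate FRFT order whose amplitude feature
    maximizes the separability criterion."""
    best_order, best_score = None, -1.0
    for order, feats in features_by_order.items():
        score = fisher_ratio([feats[i] for i in class_a_idx],
                             [feats[i] for i in class_b_idx])
        if score > best_score:
            best_order, best_score = order, score
    return best_order

# One scalar amplitude feature per sample, at two candidate orders (made up)
feats = {0.5: [1.0, 1.1, 5.0, 5.2], 0.8: [1.0, 3.0, 2.0, 4.0]}
print(pick_order(feats, [0, 1], [2, 3]))  # order 0.5 separates the classes best
```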

20 pages, 1830 KB  
Article
Improved Video Action Recognition Based on Pyramid Pooling and Dual-Stream C3D Networks
by Yuzhe Tan, Xueliang Fu and Honghui Li
Appl. Sci. 2025, 15(8), 4454; https://doi.org/10.3390/app15084454 - 17 Apr 2025
Abstract
This paper proposes an innovative video behaviour classification method based on pyramid pooling and a variable-scale training strategy, which aims to improve the video behaviour-recognition performance of a 3D convolutional neural network (3D-CNN) and a dual-stream C3D network. By introducing pyramid pooling and secondary pooling operations, the number of pooling layers is optimised, the number of parameters of the model is significantly reduced, and the recognition accuracy is effectively improved. In the improved dual-stream C3D network, the early fusion strategy is adopted to better combine the spatio-temporal features and improve the accuracy of the model. In addition, by introducing the optical flow feature, the model’s perception ability of video dynamic information is enhanced, and the recognition performance is further improved. Experimental results show that the proposed method performs well on multiple video datasets and outperforms existing mainstream methods, demonstrating the innovation and efficiency of the proposed approach in the field of video behaviour recognition. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
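Pyramid pooling, as referenced in the abstract above, produces a fixed-length vector from a variable-size feature map by max-pooling over grids at several levels. A minimal 2D sketch (the paper works with 3D spatio-temporal volumes; levels and values here are illustrative):

```python
def spatial_pyramid_pool(fmap, levels=(1, 2)):
    """Max-pool a 2D feature map over an n x n grid for each level n,
    yielding a fixed-length vector regardless of input size."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for n in levels:
        for bi in range(n):
            for bj in range(n):
                # Bin boundaries; each bin covers at least one cell
                r0, r1 = bi * h // n, max((bi + 1) * h // n, bi * h // n + 1)
                c0, c1 = bj * w // n, max((bj + 1) * w // n, bj * w // n + 1)
                out.append(max(fmap[r][c]
                               for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out

a = [[1, 2], [3, 4]]                      # 2x2 feature map
b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]     # 3x3 feature map
print(len(spatial_pyramid_pool(a)), len(spatial_pyramid_pool(b)))  # 5 5
```

Both inputs yield 1 + 4 = 5 pooled values, which is what lets a fixed-size classifier head follow variable-size inputs.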

39 pages, 5524 KB  
Article
Research on Methods for the Recognition of Ship Lights and the Autonomous Determination of the Types of Approaching Vessels
by Xiangyu Gao and Yuelin Zhao
J. Mar. Sci. Eng. 2025, 13(4), 643; https://doi.org/10.3390/jmse13040643 - 24 Mar 2025
Abstract
The acquisition of approaching vessels’ information is a critical technological challenge for maritime risk warning and intelligent collision avoidance decision-making. This paper proposes a method for autonomously identifying types of approaching vessels based on an improved YOLOv8 model and ship light features, aiming to infer the propulsion mode, size, movement, and operational nature of the approaching vessels in real-time through the color, quantity, and spatial distribution of lights. Firstly, to address the challenges of the small target characteristics of ship lights and complex environmental interference, an improved YOLOv8 model is developed: The dilation-wise residual (DWR) module is introduced to optimize the feature extraction capability of the C2f structure. The bidirectional feature pyramid network (BiFPN) is adopted to enhance multi-scale feature fusion. A hybrid attention transformer (HAT) is employed to enhance the small target detection capability of the detection head. This framework achieves precise ship light recognition under complex maritime circumstances. Secondly, 23 spatio-semantic feature indicators are established to encode ship light patterns, and a multi-viewing angle dataset is constructed. This dataset covers 36 vessel types under four viewing angles (front, port-side, starboard, and stern viewing angles), including the color, quantity, combinations, and spatial distribution of the ship lights. Finally, a two-stage discriminative model is proposed: ECA-1D-CNN is utilized for the rapid assessment of the viewing angle of the vessel. Deep learning algorithms are dynamically applied for vessel type determination within the assessed viewing angles. Experimental results show that this method achieves high determination accuracy. This paper provides technical support for intelligent situational awareness and the autonomous collision avoidance of ships. Full article
(This article belongs to the Section Ocean Engineering)
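The two-stage structure above (first assess viewing angle, then apply an angle-specific type discriminator) can be sketched with toy rule-based stages. The colour-count encoding is a stand-in for the paper's 23 spatio-semantic indicators; all rules and names are invented for illustration:

```python
def encode_lights(lights):
    """Encode detected ship lights as counts per colour, a toy
    stand-in for the 23 spatio-semantic feature indicators."""
    colours = ("white", "red", "green", "yellow")
    return [sum(1 for l in lights if l["colour"] == c) for c in colours]

def two_stage_classify(lights, angle_rule, type_rules):
    """Stage 1 picks the viewing angle; stage 2 applies the type
    discriminator registered for that angle."""
    feats = encode_lights(lights)
    angle = angle_rule(feats)
    return angle, type_rules[angle](feats)

# Hypothetical rules: seeing both sidelights (red + green) implies a front view
angle_rule = lambda f: "front" if f[1] and f[2] else "side"
type_rules = {
    "front": lambda f: "power-driven" if f[0] >= 1 else "unknown",
    "side": lambda f: "sailing" if f[0] == 0 else "power-driven",
}
lights = [{"colour": "white"}, {"colour": "red"}, {"colour": "green"}]
print(two_stage_classify(lights, angle_rule, type_rules))
```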

22 pages, 6327 KB  
Article
Evaluation of Shelf Life Prediction for Broccoli Based on Multispectral Imaging and Multi-Feature Data Fusion
by Xiaoshuo Cui, Xiaoxue Sun, Shuxin Xuan, Jinyu Liu, Dongfang Zhang, Jun Zhang, Xiaofei Fan and Xuesong Suo
Agronomy 2025, 15(4), 788; https://doi.org/10.3390/agronomy15040788 - 23 Mar 2025
Cited by 1
Abstract
Broccoli is a highly nutritious vegetable that is favored worldwide. Assessing and predicting the shelf life of broccoli holds considerable importance for effective resource optimization and management. The physicochemical parameters and spectral characteristics of broccoli are important indicators partially reflecting its shelf life. However, few studies have used spectral image information to predict and evaluate the shelf life of broccoli. In this study, multispectral imaging combined with multi-feature data fusion was used to predict and evaluate the shelf life of broccoli. Spectral data and textural features were extracted from multispectral images of broccoli and fused with the physicochemical parameters for analysis. Savitzky–Golay (SG) convolution smoothing and standard normal variate (SNV) and normalization (Norm) preprocessing methods were employed to preprocess the original spectral data and textural features, while a successive projection algorithm (SPA) was used to extract relevant feature bands. The physicochemical parameters for broccoli shelf life were predicted using three methods: support vector regression (SVR), random forest (RF), and 2D convolutional neural network (2D-CNN) models. Broccoli shelf life prediction models were evaluated using three classification methods: RF, 1D-CNN, and 2D-CNN. The results demonstrate that, among the models used for predicting and evaluating the shelf life of broccoli, the SPA+SG+RF classification model employing fused data Type C achieves the highest accuracy. Specifically, this method achieves accuracies of 88.98% and 88.64% for the training and validation sets, respectively. Multi-feature data fusion of spectral image information and physicochemical parameters was combined with different machine learning methods to predict and evaluate the shelf life of broccoli. Full article
(This article belongs to the Section Precision and Digital Agriculture)
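Two of the preprocessing steps named in the abstract above, standard normal variate (SNV) and normalization, have standard definitions worth showing; the spectrum values are made up for illustration:

```python
import math

def snv(spectrum):
    """Standard normal variate: center and scale each spectrum
    by its own mean and standard deviation, removing per-sample
    scatter effects."""
    mu = sum(spectrum) / len(spectrum)
    sd = math.sqrt(sum((x - mu) ** 2 for x in spectrum) / len(spectrum))
    return [(x - mu) / sd for x in spectrum]

def norm01(spectrum):
    """Min-max normalization of a spectrum to the range [0, 1]."""
    lo, hi = min(spectrum), max(spectrum)
    return [(x - lo) / (hi - lo) for x in spectrum]

s = snv([0.2, 0.4, 0.6, 0.8])
print([round(v, 3) for v in s])  # zero-mean, unit-variance spectrum
print(norm01([2.0, 4.0, 6.0]))
```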

24 pages, 5293 KB  
Article
Smart Grain Storage Solution: Integrated Deep Learning Framework for Grain Storage Monitoring and Risk Alert
by Xinze Li, Wenfu Wu, Hongpeng Guo, Yunshandan Wu, Shuyao Li, Wenyue Wang and Yanhui Lu
Foods 2025, 14(6), 1024; https://doi.org/10.3390/foods14061024 - 18 Mar 2025
Cited by 4
Abstract
In order to overcome the notable limitations of current methods for monitoring grain storage states, particularly in the early warning of potential risks and the analysis of the spatial distribution of grain temperatures within the granary, this study proposes a multi-model fusion approach based on a deep learning framework for grain storage state monitoring and risk alert. This approach combines two advanced three-dimensional deep learning models, a grain storage state classification model based on 3D DenseNet and a temperature field prediction model based on 3DCNN-LSTM. First, the grain storage state classification model based on 3D DenseNet efficiently extracts features from three-dimensional grain temperature data to achieve the accurate classification of storage states. Second, the temperature prediction model based on 3DCNN-LSTM incorporates historical grain temperature and absolute water potential data to precisely predict the dynamic changes in the granary’s temperature field. Finally, the grain temperature prediction results are input into the 3D DenseNet to provide early warnings for potential condensation and mildew risks within the grain pile. Comparative experiments with multiple baseline models show that the 3D DenseNet model achieves an accuracy of 97.38% in the grain storage state classification task, significantly outperforming other models. The 3DCNN-LSTM model shows high prediction accuracy in temperature forecasting, with MAE of 0.24 °C and RMSE of 0.28 °C. Furthermore, in potential risk alert experiments, the model effectively captures the temperature trend in the grain storage environment and provides early warnings, particularly for mildew and condensation risks, demonstrating the potential of this method for grain storage safety monitoring and risk alerting. This study provides a smart grain storage solution which contributes to ensuring food safety and enhancing the efficiency of grain storage management. Full article
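The final step described above, turning predicted grain temperatures into condensation and mildew warnings, can be sketched as simple thresholding over a predicted temperature grid. The thresholds, dew point, and grid values are illustrative assumptions, not the paper's model:

```python
def risk_alert(temp_grid, dew_point, mildew_temp=25.0, margin=0.5):
    """Flag grid cells whose predicted temperature is near the dew
    point (condensation risk) or above a mildew-favourable threshold."""
    alerts = []
    for i, row in enumerate(temp_grid):
        for j, t in enumerate(row):
            if abs(t - dew_point) <= margin:
                alerts.append((i, j, "condensation"))
            elif t >= mildew_temp:
                alerts.append((i, j, "mildew"))
    return alerts

# A 2x2 slice of a predicted temperature field (degrees C, made up)
grid = [[12.0, 18.4], [26.1, 15.0]]
print(risk_alert(grid, dew_point=18.2))
```

In the paper the predicted field comes from the 3DCNN-LSTM and the alert decision from 3D DenseNet; thresholding stands in here only to show where the prediction feeds the warning stage.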

44 pages, 12626 KB  
Article
Hyperspectral Image Segmentation for Optimal Satellite Operations: In-Orbit Deployment of 1D-CNN
by Jon Alvarez Justo, Dennis D. Langer, Simen Berg, Jens Nieke, Radu Tudor Ionescu, Per Gunnar Kjeldsberg and Tor Arne Johansen
Remote Sens. 2025, 17(4), 642; https://doi.org/10.3390/rs17040642 - 13 Feb 2025
Cited by 4
Abstract
AI on spaceborne platforms optimizes operations and increases automation, crucial for satellites with limited downlink capacity. It can ensure that only valuable information is transmitted, minimizing resources spent on unnecessary data, which is especially important in hyperspectral Earth Observation missions, producing large data volumes. Our previous work showed that the 1D-CNN, 1D-Justo-LiuNet, outperformed 2D-CNNs and Vision Transformers for hyperspectral segmentation with an accuracy of 0.93 and 4563 parameters, making our model the best choice for in-orbit deployment. While the state of the art has deployed 1D-CNNs on low-power platforms, such as Unmanned Aerial Vehicles, they have still not been deployed in space before. In this work, we mark the first deployment and testing of a 1D-CNN in a satellite. We implement a C version of the 1D-Justo-LiuNet and, after ground validation, we deploy it on board the HYPSO-1 satellite. We demonstrate in-flight segmentation of hyperspectral images via the 1D-CNN to classify pixels into sea, land, and cloud categories. We show how in-orbit segmentation improves satellite operations, increases automation, and optimizes downlink. We give examples of how in-orbit segmentation addresses mission challenges in HYPSO-1, such as incomplete data reception, incorrect satellite pointing, and cloud cover, helping to decide whether to transmit or discard data on board. An additional CNN autonomously interprets the segmented images, enabling on-board decisions on data downlink. Full article
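The deployed model is a C implementation of 1D-Justo-LiuNet; as a language-neutral toy of the underlying idea (per-pixel 1D convolution over the spectral axis, pooling, then a linear scorer over sea/land/cloud), here is a hedged Python sketch with invented kernels, weights, and spectra:

```python
def conv1d_valid(signal, kernel, bias=0.0):
    """Valid 1D convolution over a spectral signature (no padding),
    the core op of a 1D-CNN applied per hyperspectral pixel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def classify_pixel(spectrum, kernels, class_weights):
    """Toy sea/land/cloud scorer: conv + ReLU + global max pool per
    kernel, then a linear layer; argmax picks the class index."""
    pooled = [max(relu(conv1d_valid(spectrum, k))) for k in kernels]
    scores = [sum(w * p for w, p in zip(ws, pooled)) for ws in class_weights]
    return scores.index(max(scores))

spectrum = [0.1, 0.3, 0.9, 0.8, 0.2, 0.1]        # one pixel's bands (made up)
kernels = [[1.0, -1.0], [0.5, 0.5]]              # edge and smoothing filters
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # sea, land, cloud (illustrative)
print(classify_pixel(spectrum, kernels, weights))
```

The real network has trained kernels and 4563 parameters; the sketch only shows why a 1D-CNN per pixel is cheap enough for on-board inference.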

18 pages, 10611 KB  
Article
Residual Life Prediction of SA-CNN-BILSTM Aero-Engine Based on a Multichannel Hybrid Network
by Yonghao He, Changjun Wen and Wei Xu
Appl. Sci. 2025, 15(2), 966; https://doi.org/10.3390/app15020966 - 20 Jan 2025
Cited by 5
Abstract
As the core component of an airplane, the health status of the aviation engine is crucial for the safe operation of the aircraft. Therefore, predicting the remaining service life of the engine is of great significance for ensuring its safety and reliability. In this paper, a multichannel hybrid network is proposed; this network is a combination of the one-dimensional convolutional neural network (1D-CNN), the bidirectional long short-term memory network (BiLSTM), and the self-attention mechanism. For each sensor of the engine, an SA-CNN-BiLSTM network is established. The one-dimensional convolutional neural network and the bidirectional long short-term memory network are used to extract the spatial features and temporal features of the input data, respectively. Moreover, multichannel modeling is utilized to achieve the parallel processing of different sensors. Subsequently, the results are stitched together to establish a mapping relationship with the engine’s remaining useful life (RUL). Experimental validation was conducted on the aero-engine C-MAPSS dataset. The prediction results were compared with those of the other seven models to verify the effectiveness of this method in predicting the remaining service life. The results indicate that the proposed method significantly reduces the prediction error compared to other models. Specifically, for the two datasets, their mean absolute errors were only 11.47 and 11.76, the root-mean-square error values were only 12.26 and 12.78, and the scoring function values were only 195 and 227. Full article
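The self-attention component referenced above re-expresses each time step as a similarity-weighted mix of all steps. A minimal pure-Python sketch with identity Q/K/V projections (the real model learns these projections per sensor channel; values are illustrative):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(seq):
    """Scaled dot-product self-attention with identity projections:
    score every pair of steps, softmax per query, and output a
    weighted mix of all steps for each position."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qk * kk for qk, kk in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)
        out.append([sum(wi * v[c] for wi, v in zip(w, seq)) for c in range(d)])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three time steps, two features
y = self_attention(x)
print(len(y), len(y[0]))  # same sequence shape, context-mixed values
```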
22 pages, 10697 KB  
Article
Lip-Reading Classification of Turkish Digits Using Ensemble Learning Architecture Based on 3DCNN
by Ali Erbey and Necaattin Barışçı
Appl. Sci. 2025, 15(2), 563; https://doi.org/10.3390/app15020563 - 8 Jan 2025
Viewed by 1750
Abstract
Understanding others correctly is of great importance for effective communication, but factors such as hearing difficulties or environmental noise can disrupt this process; lip reading offers an effective solution to these challenges. With the growing success of deep learning architectures, research on lip reading has gained momentum. The aim of this study is to create a lip-reading dataset for Turkish digit recognition and to conduct predictive analyses. The dataset was divided into two subsets: the face region and the lip region. CNN, LSTM, and 3DCNN-based models, including C3D, I3D, and 3DCNN+BiLSTM, were used. While LSTM models are effective at processing temporal data, the 3DCNN-based models achieved higher accuracy owing to their ability to process both spatial and temporal information. Experimental results showed that the dataset containing only the lip region performed better; accuracy rates for CNN, LSTM, C3D, and I3D on the lip region were 67.12%, 75.53%, 86.32%, and 93.24%, respectively. Ensemble learning, by combining the strengths of different models, yielded a further 1.23% improvement, with the best result reaching 94.53% accuracy. These results demonstrate that 3DCNN architectures and ensemble learning methods can address the lip-reading problem in Turkish with high success. While this study focuses on Turkish digit recognition, the proposed methods have the potential to generalize to other languages and broader lip-reading applications. Full article
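One common flavor of the ensemble learning mentioned above is soft voting: average the class-probability vectors from several models and pick the class with the highest mean probability. A minimal sketch follows; the `soft_vote` helper and the example probability vectors are hypothetical, not taken from the paper:

```python
import numpy as np

def soft_vote(prob_list):
    # soft voting: average the per-class probabilities produced by
    # several models, then return the class with the highest mean
    avg = np.mean(prob_list, axis=0)
    return int(np.argmax(avg))

# hypothetical outputs of three lip-reading models over three digit classes
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.1, 0.8, 0.1])
p3 = np.array([0.2, 0.7, 0.1])
label = soft_vote([p1, p2, p3])  # averaged probabilities peak at class 1
```

Even though the first model alone would have picked class 0, the averaged distribution favors class 1 — this is how combining the strengths of different models can lift accuracy beyond any single one.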
25 pages, 4342 KB  
Article
A Comparative Analysis of the Novel Conditional Deep Convolutional Neural Network Model, Using Conditional Deep Convolutional Generative Adversarial Network-Generated Synthetic and Augmented Brain Tumor Datasets for Image Classification
by Efe Precious Onakpojeruo, Mubarak Taiwo Mustapha, Dilber Uzun Ozsahin and Ilker Ozsahin
Brain Sci. 2024, 14(6), 559; https://doi.org/10.3390/brainsci14060559 - 30 May 2024
Cited by 34 | Viewed by 3239
Abstract
Disease prediction is greatly challenged by the scarcity of datasets and the privacy concerns associated with real medical data. One promising way to circumvent this hurdle is to use synthetic data generated by Generative Adversarial Networks (GANs), which can increase data volume while producing synthetic datasets with no direct link to personal information. This study pioneers the use of GAN-generated synthetic datasets, alongside datasets augmented with traditional techniques, for a binary brain tumor classification task. The primary aim was to evaluate the performance of our novel Conditional Deep Convolutional Neural Network (C-DCNN) model in classifying brain tumors using these augmented and synthetic datasets. We utilized advanced GAN models, including the Conditional Deep Convolutional Generative Adversarial Network (C-DCGAN), to produce synthetic data that retained the essential characteristics of the original datasets while ensuring privacy protection. The C-DCNN model was trained on both augmented and synthetic datasets, and its performance was benchmarked against state-of-the-art models such as ResNet50, VGG16, VGG19, and InceptionV3. The evaluation metrics showed that the C-DCNN model achieved accuracy, precision, recall, and F1 scores of 99% on both synthetic and augmented images, outperforming the comparative models. These findings highlight the potential of GAN-generated synthetic data for training machine learning models for medical image classification, particularly when available data are limited. The approach not only improves model accuracy but also addresses privacy concerns, making it a viable solution for real-world clinical applications in disease prediction and diagnosis. Full article
(This article belongs to the Section Neuro-oncology)
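Conditional GANs of the kind described above typically condition the generator on a class label, commonly by concatenating a one-hot label encoding with the latent noise vector. A minimal sketch of that input construction follows; the `conditional_generator_input` helper and the dimensions are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conditional_generator_input(z, label, num_classes):
    # build a conditional generator input by appending a one-hot
    # class-label vector to the latent noise vector z
    onehot = np.zeros(num_classes)
    onehot[label] = 1.0
    return np.concatenate([z, onehot])

# a 4-dim noise vector conditioned on class 2 of a 10-class label space
# yields a 14-dim generator input
v = conditional_generator_input(np.random.randn(4), 2, 10)
```

Feeding the same label to the discriminator in the same way is what lets a conditional GAN generate class-specific samples on demand — the property that makes labeled synthetic data usable for supervised classifier training.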
