Search Results (30)

Search Parameters:
Keywords = recursive neural networks (RNNs)

30 pages, 9222 KiB  
Article
Using Deep Learning in Forecasting the Production of Electricity from Photovoltaic and Wind Farms
by Michał Pikus, Jarosław Wąs and Agata Kozina
Energies 2025, 18(15), 3913; https://doi.org/10.3390/en18153913 - 23 Jul 2025
Viewed by 311
Abstract
Accurate forecasting of electricity production is crucial for the stability of the entire energy sector. However, predicting future renewable energy production and its value is difficult due to the complex processes that affect production from renewable energy sources. In this article, we examine the performance of basic deep learning models for electricity forecasting. We designed deep learning models, including recursive neural networks (RNNs) based mainly on long short-term memory (LSTM) networks and gated recurrent units (GRUs), convolutional neural networks (CNNs), temporal fusion transformers (TFTs), and combined architectures. To achieve this goal, we created our own benchmarks and used tools that automatically select network architectures and parameters. Data were obtained as part of the NCBR grant (the National Center for Research and Development, Poland). These data contain daily records of all the recorded parameters from individual solar and wind farms over the past three years. The experimental results indicate that the LSTM models significantly outperformed the other models in terms of forecasting accuracy. In this paper, multilayer deep neural network (DNN) architectures are described, and results are provided for all the methods. This publication is based on the results obtained within the framework of the research and development project “POIR.01.01.01-00-0506/21”, realized in the years 2022–2023. The project was co-financed by the European Union under the Smart Growth Operational Programme 2014–2020. Full article
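As a rough illustration of the windowed-sequence setup such forecasting models rely on, the sketch below trains a small LSTM to map the previous 30 days of farm measurements to the next day's production. It is not the authors' code; the window length, layer sizes, and the synthetic data are assumptions for demonstration only.

```python
# Minimal sketch of windowed LSTM forecasting for daily farm output (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
WINDOW, HORIZON, FEATURES = 30, 1, 4           # 30 past days -> next-day output

class LSTMForecaster(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, HORIZON)

    def forward(self, x):                       # x: (batch, WINDOW, FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])         # predict from the last hidden state

# Synthetic stand-in for daily irradiance/wind/temperature/production records.
series = torch.randn(1000, FEATURES)
X = torch.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW - 1)])
y = series[WINDOW:len(series) - 1, 0:1]         # next-day value of feature 0 as target

model = LSTMForecaster(FEATURES)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                          # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE {loss.item():.4f}")
```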

17 pages, 3738 KiB  
Article
Learning and Generation of Drawing Sequences Using a Deep Network for a Drawing Support System
by Homari Matsumoto, Atomu Nakamura and Shun Nishide
Appl. Sci. 2025, 15(13), 7038; https://doi.org/10.3390/app15137038 - 23 Jun 2025
Viewed by 531
Abstract
With rapid advances in image-generative AI, interest has grown in applying these technologies across diverse domains. While current models excel at producing high-quality final images, they rarely model the intermediate stages of the drawing process. This study proposes a drawing support system that leverages generative AI to sequentially generate and modify images during the drawing process. To train the system, we constructed a custom dataset of time-series drawing images, which are typically unavailable in existing AI models. We developed an encoder–decoder model based on convolutional neural networks to predict the next frame from a current input image. To address error accumulation during recursive generation, we introduced a noise-augmented training method, incorporating noisy images into the dataset. Experimental results show that standard training suffers from image degradation over time, while the noise-augmented approach significantly improves stability and quality throughout the sequence. Averaged across all generated frames, the noise-augmented training achieved a PSNR of 20.48, SSIM of 0.81, and LPIPS of 0.13. As a benchmark, we compared this approach to PredRNN, a representative temporal model, which achieved a PSNR of 24.05, SSIM of 0.88, and LPIPS of 0.08. These results demonstrate the effectiveness of noise-augmented learning while also highlighting potential performance gains using temporal architectures. PredRNN also required more computation, with 6.41 M parameters and a 0.188 s inference time per sequence, compared to 5.20 M and 0.005 s for our model. Furthermore, in a sample-level analysis of 10 final images, the proposed model outperformed PredRNN in three cases across all metrics, suggesting its robustness in certain sequences despite architectural simplicity. Future directions include using more advanced models such as Variational Autoencoders and diffusion models, and enhancing consistency in long-term sequences. Our work confirms the feasibility of generating interactive drawing sequences using AI and sets the foundation for more robust and creative drawing support tools. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data, 2nd Volume)
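The noise-augmented training idea described above can be sketched as follows: Gaussian noise is added to the input frames during training so that the encoder–decoder stays stable when its own (imperfect) predictions are fed back recursively. The architecture, image size, and noise level are placeholders, not the paper's configuration.

```python
# Sketch of noise-augmented next-frame training for an encoder-decoder CNN (illustrative).
import torch
import torch.nn as nn

class NextFrameCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = NextFrameCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.rand(20, 1, 64, 64)              # fake drawing sequence: frame t -> frame t+1
NOISE_STD = 0.05                                # augmentation strength (assumed)

for epoch in range(3):
    inputs = frames[:-1] + NOISE_STD * torch.randn_like(frames[:-1])
    targets = frames[1:]
    opt.zero_grad()
    loss = loss_fn(model(inputs.clamp(0, 1)), targets)
    loss.backward()
    opt.step()

# Recursive rollout: feed each prediction back in as the next input.
with torch.no_grad():
    frame = frames[:1]
    for _ in range(5):
        frame = model(frame)
```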

20 pages, 923 KiB  
Article
Cybersecurity Challenges in PV-Hydrogen Transport Networks: Leveraging Recursive Neural Networks for Resilient Operation
by Lei Yang, Saddam Aziz and Zhenyang Yu
Energies 2025, 18(9), 2262; https://doi.org/10.3390/en18092262 - 29 Apr 2025
Viewed by 428
Abstract
In the rapidly evolving landscape of transportation technologies, hydrogen vehicle networks integrated with photovoltaic (PV) systems represent a significant advancement toward sustainable mobility. However, the integration of such technologies also introduces complex cybersecurity challenges that must be meticulously managed to ensure operational integrity and system resilience. This paper explores the intricate dynamics of cybersecurity in PV-powered hydrogen vehicle networks, focusing on the real-time challenges posed by cyber threats such as False Data Injection Attacks (FDIAs) and their impact on network operations. Our research utilizes a novel hierarchical robust optimization model enhanced by Recursive Neural Networks (RNNs) to improve detection rates and response times to cyber incidents across various severity levels. The initial findings reveal that as the severity of incidents escalates from level 1 to 10, the response time significantly increases from an average of 7 min for low-severity incidents to over 20 min for high-severity scenarios, demonstrating the escalating complexity and resource demands of more severe incidents. Additionally, the study introduces an in-depth examination of the detection dynamics, illustrating that while detection rates generally decrease as incident frequency increases—due to system overload—the employment of advanced RNNs effectively mitigates this trend, sustaining high detection rates of up to 95% even under high-frequency scenarios. Furthermore, we analyze the cybersecurity risks specifically associated with the intermittency of PV-based hydrogen production, demonstrating how fluctuations in solar energy availability can create vulnerabilities that cyberattackers may exploit. We also explore the relationship between incident frequency, detection sensitivity, and the resulting false positive rates, revealing that the optimal adjustment of detection thresholds can reduce false positives by as much as 30%, even under peak load conditions. This paper not only provides a detailed empirical analysis of the cybersecurity landscape in PV-integrated hydrogen vehicle networks but also offers strategic insights into the deployment of AI-enhanced cybersecurity frameworks. The findings underscore the critical need for scalable, responsive cybersecurity solutions that can adapt to the dynamic threat environment of modern transport infrastructures, ensuring the sustainability and safety of solar-powered hydrogen mobility solutions. Full article
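A minimal sketch of the detection component described here: a recurrent network classifies measurement windows as normal or injected. The features, window length, and the crude bias-style injection are illustrative assumptions, not the authors' hierarchical optimization model.

```python
# Sketch of an RNN-based detector for injected (falsified) measurement windows.
import torch
import torch.nn as nn

torch.manual_seed(0)
WINDOW, FEATURES = 60, 6                        # e.g., PV output, H2 flow, voltages (assumed)

class FDIADetector(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, h = self.rnn(x)                      # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)     # logit: attack vs. normal window

# Normal windows vs. windows with a simplistic injected bias on one channel.
normal = torch.randn(256, WINDOW, FEATURES)
attack = torch.randn(256, WINDOW, FEATURES)
attack[:, :, 0] += 2.0                          # toy false-data injection
X = torch.cat([normal, attack])
y = torch.cat([torch.zeros(256), torch.ones(256)])

model = FDIADetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    rate = (model(attack).sigmoid() > 0.5).float().mean()
    print(f"detection rate on injected windows: {rate:.2f}")
```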

21 pages, 3256 KiB  
Article
Assessment of Deep Neural Network Models for Direct and Recursive Multi-Step Prediction of PM10 in Southern Spain
by Javier Gómez-Gómez, Eduardo Gutiérrez de Ravé and Francisco J. Jiménez-Hornero
Forecasting 2025, 7(1), 6; https://doi.org/10.3390/forecast7010006 - 26 Jan 2025
Viewed by 1431
Abstract
Western Europe has been strongly affected in recent decades by Saharan dust incursions, causing high PM10 concentrations and red rain. In this study, dust events and the performance of seven neural network prediction models, including convolutional neural networks (CNN) and recurrent neural networks (RNN), have been analyzed in a PM10 concentration series from a monitoring station in Córdoba, southern Spain. The models were also assessed here for recursive multi-step prediction over different forecast periods in three different situations: background concentration, a strong dust event, and an extreme dust event. A very important increase in the number of dust events has been identified in the last few years. Results show that CNN models outperform the other models in terms of accuracy for direct 24 h prediction (RMSE values between 10.00 and 10.20 μg/m³), whereas the recursive prediction is only suitable for background concentration in the short term (for 2–5-day forecasts). The assessment and improvement of prediction models might help the development of early-warning systems for these events. From the authors’ perspective, the evaluation of trained models beyond the direct multi-step predictions made it possible to fill a gap in this research field, which few articles have explored in depth. Full article
(This article belongs to the Section Environmental Forecasting)
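The distinction the paper draws between direct and recursive multi-step prediction can be illustrated with a toy example: one model maps a window straight to all 24 future values, while a one-step model is rolled forward and fed its own predictions, which is where errors accumulate. A linear model stands in for the neural networks; the data and window sizes are synthetic assumptions.

```python
# Sketch contrasting direct vs. recursive multi-step forecasting of a PM10-like series.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
series = 20 + 5 * np.sin(np.arange(2000) / 24) + rng.normal(0, 1, 2000)  # hourly proxy
WINDOW, HORIZON = 48, 24

X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW - HORIZON)])
Y = np.stack([series[i + WINDOW:i + WINDOW + HORIZON]
              for i in range(len(series) - WINDOW - HORIZON)])

# Direct strategy: one model maps the window straight to all 24 future values.
direct = Ridge().fit(X, Y)

# Recursive strategy: a one-step model is rolled forward, feeding predictions back in.
one_step = Ridge().fit(X, Y[:, 0])

def recursive_forecast(window, steps=HORIZON):
    window = list(window)
    preds = []
    for _ in range(steps):
        nxt = one_step.predict(np.array(window[-WINDOW:])[None, :])[0]
        preds.append(nxt)
        window.append(nxt)                      # prediction errors can accumulate here
    return np.array(preds)

last = series[-WINDOW:]
print("direct 24 h forecast:   ", direct.predict(last[None, :])[0][:3], "...")
print("recursive 24 h forecast:", recursive_forecast(last)[:3], "...")
```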

16 pages, 240 KiB  
Article
A Comparative Study of Sentiment Analysis on Customer Reviews Using Machine Learning and Deep Learning
by Logan Ashbaugh and Yan Zhang
Computers 2024, 13(12), 340; https://doi.org/10.3390/computers13120340 - 15 Dec 2024
Cited by 4 | Viewed by 7289
Abstract
Sentiment analysis is a key technique in natural language processing that enables computers to understand human emotions expressed in text. It is widely used in applications such as customer feedback analysis, social media monitoring, and product reviews. However, sentiment analysis of customer reviews presents unique challenges, including the need for large datasets and the difficulty in accurately capturing subtle emotional nuances in text. In this paper, we present a comparative study of sentiment analysis on customer reviews using both deep learning and traditional machine learning techniques. The deep learning models include Convolutional Neural Network (CNN) and Recursive Neural Network (RNN), while the machine learning methods consist of Logistic Regression, Random Forest, and Naive Bayes. Our dataset is composed of Amazon product reviews, where we utilize the star rating as a proxy for the sentiment expressed in each review. Through comprehensive experiments, we assess the performance of each model in terms of accuracy and effectiveness in detecting sentiment. This study provides valuable insights into the strengths and limitations of both deep learning and traditional machine learning approaches for sentiment analysis. Full article
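For context, the star-rating-as-label setup can be sketched with a classical baseline of the kind compared in the paper (TF-IDF features plus logistic regression). The review texts and the 4-star threshold for a positive label are placeholders, not the paper's dataset or preprocessing.

```python
# Sketch of using star ratings as sentiment labels with a classical baseline classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    ("Great product, works exactly as described", 5),
    ("Terrible quality, broke after one day", 1),
    ("Okay value for the price", 4),
    ("Would not buy again, very disappointed", 2),
    ("Love it, highly recommend", 5),
    ("Stopped working within a week", 1),
]
texts = [t for t, _ in reviews]
labels = [1 if stars >= 4 else 0 for _, stars in reviews]   # stars as a sentiment proxy

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["works great, very happy", "awful, do not buy"]))
```
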
17 pages, 13756 KiB  
Communication
Sign Language Interpreting System Using Recursive Neural Networks
by Erick A. Borges-Galindo, Nayely Morales-Ramírez, Mario González-Lee, José R. García-Martínez, Mariko Nakano-Miyatake and Hector Perez-Meana
Appl. Sci. 2024, 14(18), 8560; https://doi.org/10.3390/app14188560 - 23 Sep 2024
Cited by 1 | Viewed by 2044
Abstract
According to the World Health Organization (WHO), 5% of people around the world have hearing disabilities, which limits their capacity to communicate with others. Recently, scientists have proposed systems based on deep learning techniques to create a sign language-to-text translator, expecting this to help deaf people communicate; however, the performance of such systems is still low for practical scenarios. Furthermore, the proposed systems are language-oriented, which leads to particular problems related to the signs for each language. To address this problem, in this paper we propose a system based on a Recursive Neural Network (RNN) focused on Mexican Sign Language (MSL) that uses the spatial tracking of hands and facial expressions to predict the word that a person intends to communicate. To achieve this, we trained four RNN-based models using a dataset of 600 clips that were 30 s long; each word included 30 clips. We conducted two experiments; the first experiment was tailored to determine the best-suited model for the target application and measure the accuracy of the resulting system in offline mode; in the second experiment, we measured the accuracy of the system in online mode. We assessed the system’s performance using the following metrics: precision, recall, F1-score, and the number of errors in online scenarios. The computed results indicate an accuracy of 0.93 in the offline mode and a higher performance for the online operating mode compared to previously proposed approaches. These results underscore the potential of the proposed scheme in scenarios such as teaching, learning, commercial transactions, and daily communications among deaf and non-deaf people. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
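A minimal sketch of the kind of sequence classifier described here: an LSTM consumes a sequence of tracked keypoint coordinates and outputs a word label. The frame count, keypoint dimensionality, and vocabulary size are assumptions, not the system's actual configuration.

```python
# Sketch of an RNN classifier over hand/face keypoint sequences (shapes are assumptions).
import torch
import torch.nn as nn

FRAMES, KEYPOINT_DIM, N_WORDS = 90, 126, 20     # e.g., 2 hands x 21 landmarks x 3 coords

class SignClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(KEYPOINT_DIM, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_WORDS)

    def forward(self, x):                       # x: (batch, FRAMES, KEYPOINT_DIM)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])                 # word logits

model = SignClassifier()
clips = torch.randn(8, FRAMES, KEYPOINT_DIM)    # stand-in for tracked keypoint sequences
print(model(clips).argmax(dim=1))               # predicted word index per clip
```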

25 pages, 3047 KiB  
Article
Hierarchical Dynamic Spatio-Temporal Graph Convolutional Networks with Self-Supervised Learning for Traffic Flow Forecasting
by Siwei Wei, Yanan Song, Donghua Liu, Sichen Shen, Rong Gao and Chunzhi Wang
Inventions 2024, 9(5), 102; https://doi.org/10.3390/inventions9050102 - 20 Sep 2024
Cited by 1 | Viewed by 2605
Abstract
It is crucial for both traffic management organisations and individual commuters to be able to forecast traffic flows accurately. Graph neural networks have made great strides in this field owing to their exceptional capacity to capture spatial correlations. However, existing approaches predominantly focus on local geographic correlations, ignoring cross-region interdependencies in a global context, which is insufficient to extract comprehensive semantic relationships, thereby limiting prediction accuracy. Additionally, most GCN-based models rely on pre-defined graphs and unchanging adjacency matrices to reflect the spatial relationships among node features, neglecting the dynamics of spatio-temporal features and leading to challenges in capturing the complexity and dynamic spatial dependencies in traffic data. To tackle these issues, this paper puts forward a new self-supervised dynamic spatio-temporal graph convolutional network (SDSC) for traffic flow forecasting. The proposed SDSC model is a hierarchically structured graph neural architecture that is intended to augment the representation of dynamic traffic patterns through a self-supervised learning paradigm. Specifically, a dynamic graph is created using a combination of temporal, spatial, and traffic data; then, a regional graph is constructed based on geographic correlation using clustering to capture cross-regional interdependencies. In the feature learning module, spatio-temporal correlations in traffic data are recursively extracted using dynamic graph convolution combined with Recurrent Neural Networks (RNNs). Furthermore, self-supervised learning is embedded within the network training process as an auxiliary task, with the objective of enhancing the prediction task by optimising the mutual information of the learned features across the two graph networks. The superior performance of the proposed SDSC model in comparison with SOTA approaches was confirmed by comprehensive experiments conducted on real road datasets, PeMSD4 and PeMSD8. These findings validate the efficacy of dynamic graph modelling and self-supervision tasks in improving the precision of traffic flow prediction. Full article
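As a small illustration of the graph-convolution building block such models use, the sketch below applies one normalized-adjacency GCN step to toy node features; in the SDSC model these spatial operations are applied at every time step and combined with recurrent units. The graph and feature sizes are placeholders, not the PeMSD data.

```python
# Sketch of one graph-convolution step over road-sensor features (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N_NODES, F_IN, F_OUT = 5, 8, 16

A = (rng.random((N_NODES, N_NODES)) > 0.6).astype(float)   # toy road-network adjacency
A = np.maximum(A, A.T) + np.eye(N_NODES)                    # symmetric, with self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt                        # normalized adjacency

X = rng.normal(size=(N_NODES, F_IN))                        # node features at one time step
W = rng.normal(size=(F_IN, F_OUT))
H = np.maximum(A_norm @ X @ W, 0.0)                         # GCN layer: ReLU(A_norm X W)
print(H.shape)                                              # (5, 16)

# In the SDSC model, such graph convolutions are applied at every time step and the
# resulting node embeddings are fed through recurrent units to capture temporal dynamics.
```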

11 pages, 3743 KiB  
Article
Minimalist Deployment of Neural Network Equalizers in a Bandwidth-Limited Optical Wireless Communication System with Knowledge Distillation
by Yiming Zhu, Yuan Wei, Chaoxu Chen, Nan Chi and Jianyang Shi
Sensors 2024, 24(5), 1612; https://doi.org/10.3390/s24051612 - 1 Mar 2024
Viewed by 2047
Abstract
An equalizer based on a recurrent neural network (RNN), especially with a bidirectional gated recurrent unit (biGRU) structure, is a good choice to deal with nonlinear damage and inter-symbol interference (ISI) in optical communication systems because of its excellent performance in processing time series information. However, its recursive structure prevents the parallelization of the computation, resulting in a low equalization rate. In order to improve the speed without compromising the equalization performance, we propose a minimalist 1D convolutional neural network (CNN) equalizer, which is converted from a biGRU with knowledge distillation (KD). In this work, we applied KD to regression problems and explained how KD helps students learn from teachers in solving regression problems. In addition, we compared the biGRU, the 1D-CNN after KD, and the 1D-CNN without KD in terms of Q-factor and equalization velocity. The experimental data showed that the Q-factor of the 1D-CNN increased by 1 dB after KD learning from the biGRU, and KD increased the RoP sensitivity of the 1D-CNN by 0.89 dB at the HD-FEC threshold of 1 × 10⁻³. At the same time, compared with the biGRU, the proposed 1D-CNN equalizer reduced the computational time consumption by 97% and the number of trainable parameters by 99.3%, with only a 0.5 dB Q-factor penalty. The results demonstrate that the proposed minimalist 1D-CNN equalizer holds significant promise for future practical deployments in optical wireless communication systems. Full article
(This article belongs to the Special Issue Novel Technology in Optical Communications)
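The distillation step described above can be sketched roughly as follows: the student 1D-CNN is trained against a weighted mix of the teacher biGRU's outputs (soft targets) and the true transmitted symbols. The sequence length, channel sizes, loss weighting, and the toy data are assumptions; in practice the teacher would be trained on the equalization task first.

```python
# Sketch of regression-style knowledge distillation from a biGRU teacher to a 1D-CNN student.
import torch
import torch.nn as nn

SEQ = 64

teacher = nn.GRU(1, 32, batch_first=True, bidirectional=True)   # stands in for the biGRU
teacher_head = nn.Linear(64, 1)
student = nn.Sequential(                                         # lightweight 1D-CNN
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, 5, padding=2))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()
ALPHA = 0.5                                                      # weight on the soft targets

received = torch.randn(128, SEQ, 1)                              # distorted symbols (toy)
clean = torch.randn(128, SEQ, 1)                                 # transmitted symbols (toy)

with torch.no_grad():                                            # teacher's equalized output
    t_out, _ = teacher(received)
    soft_targets = teacher_head(t_out)

for _ in range(5):
    opt.zero_grad()
    s_out = student(received.transpose(1, 2)).transpose(1, 2)    # Conv1d expects (B, C, L)
    loss = ALPHA * mse(s_out, soft_targets) + (1 - ALPHA) * mse(s_out, clean)
    loss.backward()
    opt.step()
```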

17 pages, 7062 KiB  
Article
Confining Pressure Forecasting of Shield Tunnel Lining Based on GRU Model and RNN Model
by Min Wang, Xiao-Wei Ye, Jin-Dian Jia, Xin-Hong Ying, Yang Ding, Di Zhang and Feng Sun
Sensors 2024, 24(3), 866; https://doi.org/10.3390/s24030866 - 29 Jan 2024
Cited by 9 | Viewed by 1580
Abstract
The confining pressure has a great effect on the internal force of the tunnel. During construction, the confining pressure, which has a crucial impact on tunnel construction, changes due to variations in the groundwater level and applied load. Therefore, the magnitude of the confining pressure must be accurately estimated to ensure tunnel safety. In this study, a complete tunnel confining pressure time series was obtained through high-frequency field monitoring, and the data were segmented into a training set and a testing set. Using GRU and RNN models, a confining pressure prediction model was established, and the prediction results were analyzed. The results indicate that the GRU model has a faster training speed and higher accuracy, whereas the RNN model trains more slowly and with lower accuracy. The dynamic characteristics of soil pressure during tunnel construction require accurate prediction models to maintain the safety of the tunnel. The comparison between the GRU and RNN models not only highlights the advantages of the GRU model but also emphasizes the necessity of balancing speed and accuracy in confining pressure prediction modeling for tunnel construction. This study helps improve the understanding of soil pressure dynamics and the development of effective prediction tools to promote safer and more reliable tunnel construction practices. Full article
(This article belongs to the Section Physical Sensors)

16 pages, 3476 KiB  
Article
Research on Power Device Fault Prediction of Rod Control Power Cabinet Based on Improved Dung Beetle Optimization–Temporal Convolutional Network Transfer Learning Model
by Liqi Ye, Zhi Chen, Jie Liu, Chao Lin and Yifan Jian
Energies 2024, 17(2), 447; https://doi.org/10.3390/en17020447 - 16 Jan 2024
Cited by 2 | Viewed by 1259
Abstract
In order to improve the reliability and maintainability of rod control power cabinets in nuclear power plants, this paper uses insulated gate bipolar transistors (IGBTs), the key power device of rod control power cabinets, as the object of research on cross-working-condition fault prediction. An improved transfer learning (TL) model based on a temporal convolutional network (TCN) is proposed to solve the problem of low fault prediction accuracy across operating conditions. First, the peak emitter voltage of an IGBT aging dataset is selected as the source domain failure characteristic, and the TCN model is trained after the removal of outliers and noise reduction. Then, the time–frequency features are extracted according to the characteristics of the target domain data, and the target domain representation data are obtained using kernel principal component analysis (KPCA) for dimensionality reduction. Finally, the TCN model trained on the source domain is transferred; the model is fine-tuned according to the target domain data, and the learning rate, the number of hidden layer nodes, and the number of training times in the network model are optimized using the dung beetle optimization (DBO) algorithm to obtain the optimal network, making it more suitable for target sample fault prediction. The prediction results of this TCN model, the long short-term memory (LSTM) model, the gated recurrent unit (GRU) model, and the recursive neural network (RNN) model are compared and analyzed by selecting prediction performance evaluation indexes. The results show that the TCN model has a better predictive effect. Comparing the prediction results of the TCN-based optimized transfer learning model with those of the directly trained TCN model, the mean square error, root mean square error, and mean absolute error are reduced by a factor of two to three, which provides an effective solution for fault prediction across operating conditions. Full article
(This article belongs to the Section F: Electrical Engineering)
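A rough sketch of the transfer-learning recipe (a dilated-convolution trunk pre-trained on source-condition data, then frozen while the head is fine-tuned on scarce target-condition data) is shown below. The network, data, and sizes are placeholders, and the DBO hyperparameter search described in the paper is omitted.

```python
# Sketch of cross-condition transfer with a small dilated-convolution (TCN-style) regressor.
import torch
import torch.nn as nn

SEQ = 128

class TinyTCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(1, 16, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(16, 16, 3, padding=4, dilation=4), nn.ReLU())
        self.head = nn.Linear(16, 1)

    def forward(self, x):                        # x: (batch, 1, SEQ)
        h = self.trunk(x)[..., :SEQ]
        return self.head(h.mean(dim=-1))         # degradation indicator

def train(model, params, X, y, steps=20):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()

source_X, source_y = torch.randn(256, 1, SEQ), torch.randn(256, 1)
target_X, target_y = torch.randn(32, 1, SEQ), torch.randn(32, 1)    # scarce target data

model = TinyTCN()
train(model, model.parameters(), source_X, source_y)                 # source pre-training

for p in model.trunk.parameters():                                   # freeze shared features
    p.requires_grad = False
train(model, model.head.parameters(), target_X, target_y)            # target fine-tuning
```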

17 pages, 3492 KiB  
Article
Time Convolutional Network-Based Maneuvering Target Tracking with Azimuth–Doppler Measurement
by Jianjun Huang, Haoqiang Hu and Li Kang
Sensors 2024, 24(1), 263; https://doi.org/10.3390/s24010263 - 2 Jan 2024
Cited by 2 | Viewed by 1767
Abstract
In the field of maneuvering target tracking, the combined observations of azimuth and Doppler may cause weak observation or non-observation when traditional target-tracking algorithms are applied. Additionally, traditional target tracking algorithms require multiple pre-defined mathematical models to accurately capture the complex motion states of targets, while model mismatch and unavoidable measurement noise lead to significant errors in target state prediction. To address the above challenges, in recent years, target tracking algorithms based on neural networks, such as recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and transformer architectures, have been widely used for their unique advantages in achieving accurate predictions. To better model the nonlinear relationship between the observation time series and the target state time series, as well as the contextual relationship among time series points, we present a deep learning algorithm called the recursive downsample–convolve–interact neural network (RDCINN), based on a convolutional neural network (CNN), which downsamples time series into subsequences and extracts multi-resolution features to model complex relationships between time series. This overcomes the shortcoming of traditional target tracking algorithms, which use observation information inefficiently under weak observation or non-observation. The experimental results show that our algorithm outperforms other existing algorithms in the scenario of strong maneuvering target tracking with the combined observations of azimuth and Doppler. Full article
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
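The downsample-convolve-interact idea can be illustrated in a few lines: the measurement sequence is split into even and odd subsequences, each is filtered at the coarser resolution, and the two branches exchange information. The kernel sizes and the interaction rule below are assumptions, not the RDCINN's actual blocks.

```python
# Sketch of the downsample-convolve-interact idea on an azimuth/Doppler sequence (toy).
import torch
import torch.nn as nn

x = torch.randn(4, 2, 200)                      # (batch, channels: azimuth & Doppler, time)

even, odd = x[..., 0::2], x[..., 1::2]          # downsample into two subsequences
conv_e = nn.Conv1d(2, 2, 5, padding=2)
conv_o = nn.Conv1d(2, 2, 5, padding=2)

# Simple "interact" step: each branch is modulated by features from the other branch.
even_feat = even * torch.tanh(conv_o(odd))
odd_feat = odd * torch.tanh(conv_e(even))

multi_res = torch.cat([even_feat, odd_feat], dim=1)   # multi-resolution features
print(multi_res.shape)                                 # torch.Size([4, 4, 100])
```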

15 pages, 2121 KiB  
Article
Using Deep Neural Network Methods for Forecasting Energy Productivity Based on Comparison of Simulation and DNN Results for Central Poland—Swietokrzyskie Voivodeship
by Michal Pikus and Jarosław Wąs
Energies 2023, 16(18), 6632; https://doi.org/10.3390/en16186632 - 15 Sep 2023
Cited by 4 | Viewed by 1952
Abstract
Forecasting electricity demand is of utmost importance for ensuring the stability of the entire energy sector. However, predicting the future electricity demand and its value poses a formidable challenge due to the intricate nature of the processes influenced by renewable energy sources. In this work, we explored the efficacy of fundamental deep learning models designed for electricity forecasting. Among the deep learning models, we crafted recursive neural networks (RNNs) based predominantly on LSTM and combined architectures. The dataset employed was procured from a SolarEdge designer. The dataset contains daily records spanning the past year, covering an exhaustive collection of parameters extracted from a solar farm located in Central Europe (Swietokrzyskie Voivodeship, Poland). The experimental findings demonstrated the superiority of the LSTM models over other counterparts in terms of forecasting accuracy. Consequently, we compared multilayer DNN architectures with the results provided by the simulator. The measurable results of both DNN models are as follows: multilayer LSTM-only, R² = 0.885; EncoderDecoderLSTM, R² = 0.812. Full article

12 pages, 2135 KiB  
Article
A Lightweight Reconstruction Model via a Neural Network for a Video Super-Resolution Model
by Xinkun Tang, Ying Xu, Feng Ouyang and Ligu Zhu
Appl. Sci. 2023, 13(18), 10165; https://doi.org/10.3390/app131810165 - 9 Sep 2023
Cited by 1 | Viewed by 2032
Abstract
Super-resolution in image and video processing has been a challenge in computer vision, with its progression creating substantial societal ramifications. More specifically, video super-resolution methodologies aim to restore spatial details while upholding the temporal coherence among frames. Nevertheless, their extensive parameter counts and high demand for computational resources challenge the deployment of existing deep convolutional neural networks on mobile platforms. In response to these concerns, our research undertakes an in-depth investigation into deep convolutional neural networks and offers a lightweight model for video super-resolution, capable of reducing computational load. In this study, we bring forward a unique lightweight model for video super-resolution, the Deep Residual Recursive Network (DRRN). The model applies residual learning to stabilize the Recurrent Neural Network (RNN) training, meanwhile adopting depth-wise separable convolution to boost the efficiency of super-resolution operations. Thorough experimental evaluations reveal that our proposed model excels in computational efficiency and in generating refined and temporally consistent results for video super-resolution. Hence, this research presents a crucial stride toward applying video super-resolution strategies on devices with resource limitations. Full article
(This article belongs to the Special Issue Applications of Video, Digital Image Processing and Deep Learning)
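The two ingredients named above, depthwise separable convolution and residual learning, can be sketched as a single block; the parameter-count comparison at the end shows why the separable form is attractive for lightweight models. Channel counts and kernel size are illustrative, not the DRRN's configuration.

```python
# Sketch of a depthwise-separable convolution block with a residual connection.
import torch
import torch.nn as nn

class DepthwiseSeparableResBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.pointwise(self.depthwise(x)))
        return x + out                           # residual learning stabilizes training

block = DepthwiseSeparableResBlock()
frames = torch.randn(1, 32, 64, 64)              # feature maps from a low-resolution frame
print(block(frames).shape)                       # torch.Size([1, 32, 64, 64])

# Parameter comparison with a standard 3x3 convolution of the same width:
standard = nn.Conv2d(32, 32, 3, padding=1)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(block), "vs", count(standard))       # the separable block is far smaller
```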

25 pages, 1586 KiB  
Article
Data-Driven Fault Detection of AUV Rudder System: A Mixture Model Approach
by Zhiteng Zhang, Xiaofang Zhang, Tianhong Yan, Shuang Gao and Ze Yu
Machines 2023, 11(5), 551; https://doi.org/10.3390/machines11050551 - 13 May 2023
Cited by 3 | Viewed by 3642
Abstract
Based on data-driven and mixed models, this study proposes a fault detection method for autonomous underwater vehicle (AUV) rudder systems. The proposed method can effectively detect faults in the absence of angle feedback from the rudder. Considering the parameter uncertainty of the AUV motion model resulting from the dynamics analysis method, we present a parameter identification method based on the recurrent neural network (RNN). Prior to identification, singular value decomposition (SVD) was chosen to denoise the original sensor data as the data pretreatment step. The proposed method provides more accurate predictions than recursive least squares (RLS) and a single RNN. In order to reduce the influence of sensor parameter errors and prediction model errors, an adaptive threshold is introduced as a method for analyzing prediction errors. Meanwhile, the results of the threshold analysis were combined with the qualitative force analysis to determine the fault diagnosis and location of the rudder system. Experiments conducted at sea demonstrate the feasibility and effectiveness of the proposed method. Full article
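The adaptive-threshold step can be sketched independently of the RNN predictor: a rolling mean-plus-3-sigma bound on the prediction residual flags samples where the measured response deviates from the model. The window size, the 3-sigma rule, and the synthetic fault are assumptions, not the paper's settings.

```python
# Sketch of residual-based fault detection with an adaptive threshold (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
expected = np.sin(np.linspace(0, 20, 1000))          # predicted response (stand-in for the RNN)
measured = expected + rng.normal(0, 0.05, 1000)
measured[700:] += 0.4                                 # injected rudder fault

residual = np.abs(measured - expected)
WINDOW = 100
faults = []
for t in range(WINDOW, len(residual)):
    recent = residual[t - WINDOW:t]
    threshold = recent.mean() + 3 * recent.std()      # adaptive 3-sigma threshold
    if residual[t] > threshold:
        faults.append(t)

print("first flagged sample:", faults[0] if faults else None)
```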

24 pages, 2957 KiB  
Article
Research on the Cooperative Passive Location of Moving Targets Based on Improved Particle Swarm Optimization
by Li Hao, Fan Xiangyu and Shi Manhong
Drones 2023, 7(4), 264; https://doi.org/10.3390/drones7040264 - 12 Apr 2023
Cited by 8 | Viewed by 2397
Abstract
Aiming at the cooperative passive location of moving targets by a UAV swarm, this paper constructs a passive location and tracking algorithm for a moving target based on the A optimization criterion and an improved particle swarm optimization (PSO) algorithm. Firstly, the method of cooperative passive localization by the swarm is selected and the measurement model is constructed. Then, the problem of improving passive location accuracy is transformed into the problem of obtaining more target information. From the perspective of information theory, using the A criterion as the optimization target, the passive localization process for static targets is further derived. A Recursive Neural Network (RNN) is used to predict the probability distribution of the target’s location at the next moment so as to improve the localization method and make it suitable for the localization of moving targets. The particle swarm algorithm is improved by using a grouping and time-period strategy, and the algorithm flow for moving target location is constructed. Finally, simulation verification and algorithm comparison demonstrate the advantages of the proposed algorithm. Full article
(This article belongs to the Special Issue Multi-UAV Networks)
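For reference, a standard global-best PSO loop (without the paper's grouping and time-period improvements) applied to a toy bearings-only localization cost is sketched below; the sensor geometry and parameters are illustrative assumptions.

```python
# Minimal global-best PSO sketch: find the 2D position that best explains bearing measurements.
import numpy as np

rng = np.random.default_rng(0)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])      # toy UAV positions
true_target = np.array([6.0, 4.0])
diff = true_target - sensors
bearings = np.arctan2(diff[:, 1], diff[:, 0])                    # noiseless azimuths

def cost(p):                                                     # bearing-residual cost
    pred = np.arctan2(p[1] - sensors[:, 1], p[0] - sensors[:, 0])
    err = np.angle(np.exp(1j * (pred - bearings)))               # wrap differences to [-pi, pi]
    return np.sum(err ** 2)

N, W, C1, C2 = 30, 0.7, 1.5, 1.5                                 # swarm size and PSO weights
pos = rng.uniform(0, 10, (N, 2))
vel = np.zeros((N, 2))
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.random((N, 1)), rng.random((N, 1))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("estimated target position:", np.round(gbest, 2))          # should be close to (6, 4)
```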
