Search Results (1,787)

Search Parameters:
Keywords = deep neural network (DNN)

22 pages, 4300 KiB  
Article
Optimised DNN-Based Agricultural Land Cover Mapping Using Sentinel-2 and Landsat-8 with Google Earth Engine
by Nisha Sharma, Sartajvir Singh and Kawaljit Kaur
Land 2025, 14(8), 1578; https://doi.org/10.3390/land14081578 (registering DOI) - 1 Aug 2025
Abstract
Agriculture is the backbone of Punjab’s economy, and with much of India’s population dependent on agriculture, the requirement for accurate and timely monitoring of land has become even more crucial. Blending remote sensing with state-of-the-art machine learning algorithms enables the detailed classification of agricultural lands through thematic mapping, which is critical for crop monitoring, land management, and sustainable development. Here, a Hyper-tuned Deep Neural Network (Hy-DNN) model was created and used for land use and land cover (LULC) classification into four classes: agricultural land, vegetation, water bodies, and built-up areas. The technique made use of multispectral data from Sentinel-2 and Landsat-8, processed on the Google Earth Engine (GEE) platform. To measure classification performance, Hy-DNN was contrasted with traditional classifiers—Convolutional Neural Network (CNN), Random Forest (RF), Classification and Regression Tree (CART), Minimum Distance Classifier (MDC), and Naive Bayes (NB)—using performance metrics including producer’s and consumer’s accuracy, Kappa coefficient, and overall accuracy. Hy-DNN performed the best, with overall accuracy being 97.60% using Sentinel-2 and 91.10% using Landsat-8, outperforming all base models. These results further highlight the superiority of the optimised Hy-DNN in agricultural land mapping and its potential use in crop health monitoring, disease diagnosis, and strategic agricultural planning. Full article
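
As an illustration of the kind of hyper-tuned, per-pixel DNN classifier the abstract describes, the sketch below tunes a small multilayer perceptron over four LULC classes. It is a minimal sketch, assuming band reflectances have already been exported from Google Earth Engine as a tabular array; the actual Hy-DNN architecture, tuning grid, and GEE workflow are not given in the abstract and are placeholders here.

```python
"""Minimal sketch of a hyper-tuned per-pixel LULC classifier (not the paper's Hy-DNN)."""
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((2000, 10))          # placeholder for per-pixel Sentinel-2 band values
y = rng.integers(0, 4, size=2000)   # 0=agricultural land, 1=vegetation, 2=water, 3=built-up

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0))
grid = {                            # hypothetical tuning grid
    "mlpclassifier__hidden_layer_sizes": [(64, 32), (128, 64, 32)],
    "mlpclassifier__alpha": [1e-4, 1e-3],
    "mlpclassifier__learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(pipe, grid, cv=3, n_jobs=-1)
search.fit(X_tr, y_tr)

pred = search.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa coefficient:", cohen_kappa_score(y_te, pred))
```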

21 pages, 3746 KiB  
Article
DCP: Learning Accelerator Dataflow for Neural Networks via Propagation
by Peng Xu, Wenqi Shao and Ping Luo
Electronics 2025, 14(15), 3085; https://doi.org/10.3390/electronics14153085 (registering DOI) - 1 Aug 2025
Viewed by 29
Abstract
Deep neural network (DNN) hardware (HW) accelerators have achieved great success in improving DNNs’ performance and efficiency. One key reason is the dataflow in executing a DNN layer, including on-chip data partitioning, computation parallelism, and scheduling policy, which have large impacts on latency and energy consumption. Unlike prior works that required considerable efforts from HW engineers to design suitable dataflows for different DNNs, this work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), to automatically find the optimal dataflow for DNN layers in seconds without human effort. It has several attractive benefits that prior studies lack, including the following: (i) We translate the HW dataflow configuration into a code representation in a unified dataflow coding space, which can be optimized by back-propagating gradients given a DNN layer or network. (ii) DCP learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions to minimize various optimization objectives, e.g., latency and energy. (iii) It can be easily generalized to unseen HW configurations in a zero-shot or few-shot learning manner, for example, adapting to a new HW configuration without using additional training data. Extensive experiments on several representative models such as MobileNet, ResNet, and ViT show that DCP outperforms its counterparts in various settings. Full article
(This article belongs to the Special Issue Applied Machine Learning in Data Science)
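
The core idea of DCP, optimizing a dataflow code by back-propagating a learned cost predictor's gradients, can be illustrated with a toy sketch. This assumes a 16-dimensional code space and a small MLP standing in for the trained neural predictor; the real coding space, predictor training, and decoding back to legal dataflows are not reproduced here.

```python
"""Toy sketch of gradient-based dataflow code optimization through a frozen cost predictor."""
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the trained cost predictor: code -> (latency, energy).
predictor = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
for p in predictor.parameters():               # predictor is frozen during code search
    p.requires_grad_(False)

code = torch.randn(1, 16, requires_grad=True)  # dataflow code being optimized
opt = torch.optim.Adam([code], lr=0.05)

for step in range(200):
    latency, energy = predictor(code).squeeze(0)
    loss = latency + 0.5 * energy              # weighted multi-objective cost
    opt.zero_grad()
    loss.backward()                            # gradients flow into the code, not the predictor
    opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  predicted cost {loss.item():.4f}")

# In the real method the optimized code would be decoded back into a concrete
# on-chip partitioning / parallelism / scheduling choice.
```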

33 pages, 14330 KiB  
Article
Noisy Ultrasound Kidney Image Classifications Using Deep Learning Ensembles and Grad-CAM Analysis
by Walid Obaid, Abir Hussain, Tamer Rabie and Wathiq Mansoor
AI 2025, 6(8), 172; https://doi.org/10.3390/ai6080172 - 31 Jul 2025
Viewed by 225
Abstract
Objectives: This study introduces an automated classification system for noisy kidney ultrasound images using an ensemble of deep neural networks (DNNs) with transfer learning. Methods: The method was tested using a dataset with two categories: normal kidney images and kidney images with stones. The dataset contains 1821 normal kidney images and 2592 kidney images with stones. Noisy images involve various types of noises, including salt and pepper noise, speckle noise, Poisson noise, and Gaussian noise. The ensemble-based method is benchmarked with state-of-the-art techniques and evaluated on ultrasound images with varying quality and noise levels. Results: Our proposed method demonstrated a maximum classification accuracy of 99.43% on high-quality images (the original dataset images) and 99.21% on the dataset images with added noise. Conclusions: The experimental results confirm that the ensemble of DNNs accurately classifies most images, achieving a high classification performance compared to conventional and individual DNN-based methods. Additionally, our method outperforms the highest-achieving method by more than 1% in accuracy. Furthermore, our analysis using Gradient-weighted Class Activation Mapping indicated that our proposed deep learning model is capable of prediction using clinically relevant features. Full article
(This article belongs to the Section Medical & Healthcare AI)
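
A minimal sketch of the soft-voting ensemble idea the abstract describes (normal vs. stone classification) follows. The backbone here is a small stand-in CNN and the inputs are random placeholders; the study's actual fine-tuned pretrained backbones, preprocessing, and fusion rule are not specified in the abstract.

```python
"""Sketch of soft voting over an ensemble of CNN classifiers (stand-in backbones)."""
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Stand-in for a fine-tuned, transfer-learned backbone."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

torch.manual_seed(0)
ensemble = [SmallCNN() for _ in range(3)]   # three independently trained members

def ensemble_predict(batch):
    """Soft voting: average class probabilities across ensemble members."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(batch), dim=1) for m in ensemble]).mean(0)
    return probs.argmax(dim=1), probs

images = torch.randn(4, 1, 64, 64)          # placeholder noisy ultrasound patches
labels, probs = ensemble_predict(images)
print(labels, probs)
```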

20 pages, 732 KiB  
Review
AI Methods Tailored to Influenza, RSV, HIV, and SARS-CoV-2: A Focused Review
by Achilleas Livieratos, George C. Kagadis, Charalambos Gogos and Karolina Akinosoglou
Pathogens 2025, 14(8), 748; https://doi.org/10.3390/pathogens14080748 - 30 Jul 2025
Viewed by 278
Abstract
Artificial intelligence (AI) techniques—ranging from hybrid mechanistic–machine learning (ML) ensembles to gradient-boosted decision trees, support-vector machines, and deep neural networks—are transforming the management of seasonal influenza, respiratory syncytial virus (RSV), human immunodeficiency virus (HIV), and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Symptom-based triage models using eXtreme Gradient Boosting (XGBoost) and Random Forests, as well as imaging classifiers built on convolutional neural networks (CNNs), have improved diagnostic accuracy across respiratory infections. Transformer-based architectures and social media surveillance pipelines have enabled real-time monitoring of COVID-19. In HIV research, support-vector machines (SVMs), logistic regression, and deep neural network (DNN) frameworks advance viral-protein classification and drug-resistance mapping, accelerating antiviral and vaccine discovery. Despite these successes, persistent challenges remain—data heterogeneity, limited model interpretability, hallucinations in large language models (LLMs), and infrastructure gaps in low-resource settings. We recommend standardized open-access data pipelines and integration of explainable-AI methodologies to ensure safe, equitable deployment of AI-driven interventions in future viral-outbreak responses. Full article
(This article belongs to the Section Viral Pathogens)

12 pages, 1196 KiB  
Article
DNN-Based Noise Reduction Significantly Improves Bimodal Benefit in Background Noise for Cochlear Implant Users
by Courtney Kolberg, Sarah O. Holbert, Jamie M. Bogle and Aniket A. Saoji
J. Clin. Med. 2025, 14(15), 5302; https://doi.org/10.3390/jcm14155302 - 27 Jul 2025
Viewed by 335
Abstract
Background/Objectives: Traditional hearing aid noise reduction algorithms offer no additional benefit in noisy situations for bimodal cochlear implant (CI) users with a CI in one ear and a hearing aid (HA) in the other. Recent breakthroughs in deep neural network (DNN)-based noise reduction have improved speech understanding for hearing aid users in noisy environments. These advancements could also boost speech perception in noise for bimodal CI users. This study investigated the effectiveness of DNN-based noise reduction in the HAs used by bimodal CI patients. Methods: Eleven bimodal CI patients, aged 71–89 years old, were fit with a Phonak Audéo Sphere Infinio 90 HA in their non-implanted ear and were provided with a Calm Situation program and Spheric Speech in Loud Noise program that uses DNN-based noise reduction. Sentence recognition scores were measured using AzBio sentences in quiet and in noise with the CI alone, hearing aid alone, and bimodally with both the Calm Situation and DNN HA programs. Results: The DNN program in the hearing aid significantly improved bimodal performance in noise, with sentence recognition scores reaching 79% compared to 60% with Calm Situation (a 19% average benefit, p < 0.001). When compared to the CI-alone condition in multi-talker babble, the DNN HA program offered a 40% bimodal benefit, significantly higher than the 21% score seen with the Calm Situation program. Conclusions: DNN-based noise reduction in HA significantly improves speech understanding in noise for bimodal CI users. Utilization of this technology is a promising option to address patients’ common complaint of speech understanding in noise. Full article
(This article belongs to the Section Otolaryngology)

19 pages, 5166 KiB  
Article
Estimating Wheat Chlorophyll Content Using a Multi-Source Deep Feature Neural Network
by Jun Li, Yali Sheng, Weiqiang Wang, Jikai Liu and Xinwei Li
Agriculture 2025, 15(15), 1624; https://doi.org/10.3390/agriculture15151624 - 26 Jul 2025
Viewed by 193
Abstract
Chlorophyll plays a vital role in wheat growth and fertilization management. Accurate and efficient estimation of chlorophyll content is crucial for providing a scientific foundation for precision agricultural management. Unmanned aerial vehicles (UAVs), characterized by high flexibility, spatial resolution, and operational efficiency, have emerged as effective tools for estimating chlorophyll content in wheat. Although multi-source data derived from UAV-based multispectral imagery have shown potential for wheat chlorophyll estimation, the importance of multi-source deep feature fusion has not been adequately addressed. Therefore, this study aims to estimate wheat chlorophyll content by integrating spectral and textural features extracted from UAV multispectral imagery, in conjunction with partial least squares regression (PLSR), random forest regression (RFR), deep neural network (DNN), and a novel multi-source deep feature neural network (MDFNN) proposed in this research. The results demonstrate the following: (1) Except for the RFR model, models based on texture features exhibit superior accuracy compared to those based on spectral features. Furthermore, the estimation accuracy achieved by fusing spectral and texture features is significantly greater than that obtained using a single type of data. (2) The MDFNN proposed in this study outperformed other models in chlorophyll content estimation, with an R2 of 0.850, an RMSE of 5.602, and an RRMSE of 15.76%. Compared to the second-best model, the DNN (R2 = 0.799, RMSE = 6.479, RRMSE = 18.23%), the MDFNN achieved a 6.4% increase in R2, and 13.5% reductions in both RMSE and RRMSE. (3) The MDFNN exhibited strong robustness and adaptability across varying years, wheat varieties, and nitrogen application levels. The findings of this study offer important insights into UAV-based remote sensing applications for estimating wheat chlorophyll under field conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
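
The following is a minimal two-branch sketch of the "multi-source deep feature" fusion idea: spectral and texture features are encoded separately and their deep features concatenated before a chlorophyll-content regression head. The feature dimensions, layer widths, and simple concatenation fusion are assumptions; the MDFNN's exact architecture is not given in the abstract.

```python
"""Sketch of a two-branch deep-feature fusion regressor (stand-in for the MDFNN)."""
import torch
import torch.nn as nn

class MultiSourceFusionNet(nn.Module):
    def __init__(self, n_spectral=10, n_texture=24):
        super().__init__()
        self.spectral_branch = nn.Sequential(nn.Linear(n_spectral, 32), nn.ReLU(),
                                             nn.Linear(32, 16), nn.ReLU())
        self.texture_branch = nn.Sequential(nn.Linear(n_texture, 64), nn.ReLU(),
                                            nn.Linear(64, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, spectral, texture):
        fused = torch.cat([self.spectral_branch(spectral),
                           self.texture_branch(texture)], dim=1)   # deep feature fusion
        return self.head(fused).squeeze(1)                         # chlorophyll content

torch.manual_seed(0)
model = MultiSourceFusionNet()
spectral = torch.randn(8, 10)     # placeholder vegetation-index / band features
texture = torch.randn(8, 24)      # placeholder texture features from UAV imagery
print(model(spectral, texture))
```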

29 pages, 5542 KiB  
Article
SVRG-AALR: Stochastic Variance-Reduced Gradient Method with Adaptive Alternating Learning Rate for Training Deep Neural Networks
by Shiyun Zou, Hua Qin, Guolin Yang and Pengfei Wang
Electronics 2025, 14(15), 2979; https://doi.org/10.3390/electronics14152979 - 25 Jul 2025
Viewed by 182
Abstract
The stochastic variance-reduced gradient (SVRG) theory is particularly well-suited for addressing gradient variance in deep neural network (DNN) training; however, its direct application to DNN training is hindered by adaptation challenges. To tackle this issue, the present paper proposes a series of strategies focused on adaptive alternating learning rates to effectively adapt SVRG for DNN training. Firstly, within the outer loop of SVRG, both the full gradient and the learning rate specific to DNN training are computed. For two distinct formulas used for calculating the learning rate, an alternating strategy is introduced that employs them alternately across iterations. This approach allows for simultaneous provision of diverse guidance information regarding parameter change rates and gradient change rates during DNN weight updates. Additionally, a threshold method is utilized to correct the learning rate into an appropriate range, thereby accelerating convergence. Secondly, in the inner loop of SVRG, DNN weights are updated using mini-batch average gradient along with the proposed learning rate. Concurrently, mini-batch average gradients from each iteration within the inner loop are refined and aggregated into a single gradient exhibiting reduced variance through an inertia strategy. This refined gradient is then relayed back to the outer loop to recalculate the new learning rate. The efficacy of the proposed algorithm has been validated on models including LeNet, VGG11, ResNet34, and DenseNet121 while being compared against several classic and advanced optimizers. Experimental results demonstrate that the proposed algorithm exhibits remarkable training robustness across DNN models with diverse characteristics. In terms of training convergence, the proposed algorithm demonstrates competitiveness with state-of-the-art algorithms, such as Lion, developed by the Google Brain team. Full article
(This article belongs to the Special Issue Advances in Machine Learning for Image Classification)
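
The outer/inner loop structure of SVRG with an alternating, thresholded learning rate can be sketched on a toy least-squares problem. The two alternating step-size formulas below are Barzilai–Borwein-style estimates built from parameter and gradient changes, clamped to a fixed range; the paper's exact formulas, inertia-based gradient aggregation, and DNN setting are only approximated.

```python
"""Sketch of SVRG with an alternating, clamped learning rate on a toy problem."""
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_batch(w, idx):
    """Mini-batch gradient of 0.5*mean((A w - b)^2) restricted to rows idx."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(d)
lr, lr_lo, lr_hi = 1e-2, 1e-4, 1e-1
prev_w, prev_full = None, None

for epoch in range(20):                        # outer loop
    snapshot = w.copy()
    full_grad = A.T @ (A @ snapshot - b) / n   # full gradient at the snapshot

    # Alternate between two step-size estimates, then clamp to a safe range.
    if prev_w is not None:
        s, y = snapshot - prev_w, full_grad - prev_full
        if epoch % 2 == 0:                     # "parameter change rate" formula
            lr = abs(s @ s) / (abs(s @ y) + 1e-12)
        else:                                  # "gradient change rate" formula
            lr = abs(s @ y) / (abs(y @ y) + 1e-12)
        lr = float(np.clip(lr, lr_lo, lr_hi))
    prev_w, prev_full = snapshot, full_grad

    for _ in range(50):                        # inner loop: variance-reduced updates
        idx = rng.choice(n, size=32, replace=False)
        vr_grad = grad_batch(w, idx) - grad_batch(snapshot, idx) + full_grad
        w -= lr * vr_grad

    loss = 0.5 * np.mean((A @ w - b) ** 2)
    if epoch % 5 == 0:
        print(f"epoch {epoch:2d}  lr {lr:.4f}  loss {loss:.5f}")
```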

32 pages, 5164 KiB  
Article
Decentralized Distributed Sequential Neural Networks Inference on Low-Power Microcontrollers in Wireless Sensor Networks: A Predictive Maintenance Case Study
by Yernazar Bolat, Iain Murray, Yifei Ren and Nasim Ferdosian
Sensors 2025, 25(15), 4595; https://doi.org/10.3390/s25154595 - 24 Jul 2025
Viewed by 352
Abstract
The growing adoption of IoT applications has led to increased use of low-power microcontroller units (MCUs) for energy-efficient, local data processing. However, deploying deep neural networks (DNNs) on these constrained devices is challenging due to limitations in memory, computational power, and energy. Traditional methods like cloud-based inference and model compression often incur bandwidth, privacy, and accuracy trade-offs. This paper introduces a novel Decentralized Distributed Sequential Neural Network (DDSNN) designed for low-power MCUs in Tiny Machine Learning (TinyML) applications. Unlike existing methods that rely on centralized cluster-based approaches, DDSNN partitions a pre-trained LeNet across multiple MCUs, enabling fully decentralized inference in wireless sensor networks (WSNs). We validate DDSNN in a real-world predictive maintenance scenario, where vibration data from an industrial pump is analyzed in real time. The experimental results demonstrate that DDSNN achieves 99.01% accuracy, matching the accuracy of the non-distributed baseline model while reducing inference latency by approximately 50%. This represents a significant improvement over traditional, non-distributed approaches and demonstrates the method’s practical feasibility under realistic operating conditions. Full article
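
A sketch of sequential, partitioned inference in the spirit of DDSNN follows: a LeNet-style model is split into segments, each hosted by a different node, and activations are handed from node to node. The split points, the 1-D vibration input shape, and the in-process hand-off are illustrative assumptions; the real system serializes tensors between MCUs over a wireless sensor network.

```python
"""Sketch of partitioned, hop-by-hop inference of a LeNet-style model."""
import torch
import torch.nn as nn

torch.manual_seed(0)

# A LeNet-like 1-D CNN for vibration windows (placeholder architecture).
full_model = nn.Sequential(
    nn.Conv1d(1, 6, 5), nn.ReLU(), nn.MaxPool1d(2),                # segment for node 0
    nn.Conv1d(6, 16, 5), nn.ReLU(), nn.MaxPool1d(2),               # segment for node 1
    nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 2),  # segment for node 2
)
segments = [full_model[0:3], full_model[3:6], full_model[6:]]

def node_forward(node_id, activation):
    """Each node evaluates only its own segment and forwards the result."""
    with torch.no_grad():
        return segments[node_id](activation)

x = torch.randn(1, 1, 256)            # one vibration window from the pump sensor
for node_id in range(len(segments)):  # stands in for hop-by-hop WSN transmission
    x = node_forward(node_id, x)
print("class logits from the last node:", x)
```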

22 pages, 474 KiB  
Article
Neural Network-Informed Lotka–Volterra Dynamics for Cryptocurrency Market Analysis
by Dimitris Kastoris, Dimitris Papadopoulos and Konstantinos Giotopoulos
Future Internet 2025, 17(8), 327; https://doi.org/10.3390/fi17080327 - 24 Jul 2025
Viewed by 309
Abstract
Mathematical modeling plays a crucial role in supporting decision-making across a wide range of scientific disciplines. These models often involve multiple parameters, the estimation of which is critical to assessing their reliability and predictive power. Recent advancements in artificial intelligence have made it possible to efficiently estimate such parameters with high accuracy. In this study, we focus on modeling the dynamics of cryptocurrency market shares by employing a Lotka–Volterra system. We introduce a methodology based on a deep neural network (DNN) to estimate the parameters of the Lotka–Volterra model, which are subsequently used to numerically solve the system using a fourth-order Runge–Kutta method. The proposed approach, when applied to real-world market share data for Bitcoin, Ethereum, and alternative cryptocurrencies, demonstrates excellent alignment with empirical observations. Our method achieves RMSEs of 0.0687 (BTC), 0.0268 (ETH), and 0.0558 (ALTs)—an over 50% reduction in error relative to ARIMA(2,1,2) and over 25% relative to a standard NN–ODE model—thereby underscoring its effectiveness for cryptocurrency-market forecasting. The entire framework, including neural network training and Runge–Kutta integration, was implemented in MATLAB R2024a (version 24.1). Full article
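
The numerical half of this pipeline, integrating a three-species competitive Lotka–Volterra system with classic fourth-order Runge–Kutta, can be sketched directly. The growth rates and interaction matrix below are invented placeholders standing in for the DNN-estimated parameters; the paper estimates them from BTC/ETH/ALT market-share data and works in MATLAB rather than Python.

```python
"""Sketch of RK4 integration of a competitive Lotka-Volterra system with placeholder parameters."""
import numpy as np

r = np.array([0.08, 0.06, 0.05])                 # placeholder growth rates
A = np.array([[1.0, 0.6, 0.4],                   # placeholder interaction matrix
              [0.5, 1.0, 0.3],
              [0.4, 0.5, 1.0]])

def lv_rhs(x):
    """Competitive Lotka-Volterra: dx_i/dt = r_i * x_i * (1 - (A x)_i)."""
    return r * x * (1.0 - A @ x)

def rk4_step(x, h):
    k1 = lv_rhs(x)
    k2 = lv_rhs(x + 0.5 * h * k1)
    k3 = lv_rhs(x + 0.5 * h * k2)
    k4 = lv_rhs(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([0.6, 0.25, 0.15])                  # initial market shares (BTC, ETH, ALTs)
h, steps = 0.1, 500
for _ in range(steps):
    x = rk4_step(x, h)
print("final simulated shares:", np.round(x, 3))
```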

21 pages, 4369 KiB  
Article
Breast Cancer Classification via a High-Precision Hybrid IGWO–SOA Optimized Deep Learning Framework
by Aniruddha Deka, Debashis Dev Misra, Anindita Das and Manob Jyoti Saikia
AI 2025, 6(8), 167; https://doi.org/10.3390/ai6080167 - 24 Jul 2025
Viewed by 462
Abstract
Breast cancer (BRCA) remains a significant cause of mortality among women, particularly in developing and underdeveloped regions, where early detection is crucial for effective treatment. This research introduces an innovative hybrid model that combines Improved Grey Wolf Optimizer (IGWO) with the Seagull Optimization Algorithm (SOA), forming the IGWO–SOA technique to enhance BRCA detection accuracy. The hybrid model draws inspiration from the adaptive and strategic behaviors of seagulls, especially their ability to dynamically change attack angles in order to effectively tackle complex global optimization challenges. A deep neural network (DNN) is fine-tuned using this hybrid optimization method to address the challenges of hyperparameter selection and overfitting, which are common in DL approaches for BRCA classification. The proposed IGWO–SOA model demonstrates optimal performance in identifying key attributes that contribute to accurate cancer detection using the CBIS-DDSM dataset. Its effectiveness is validated using performance metrics such as loss, F1-score, precision, accuracy, and recall. Notably, the model achieved an impressive accuracy of 99.4%, outperforming existing methods in the domain. By optimizing both the learning parameters and model structure, this research establishes an advanced deep learning framework built upon the IGWO–SOA approach, presenting a robust and reliable method for early BRCA detection with significant potential to improve diagnostic precision. Full article
(This article belongs to the Section Medical & Healthcare AI)
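
The kind of metaheuristic hyperparameter search performed by the hybrid IGWO–SOA can be illustrated with a toy sketch. This is plain Grey Wolf Optimizer over a 2-D search space with a synthetic objective; the paper's IGWO refinements, the seagull spiral-attack step, and the real objective (validation loss of the fine-tuned DNN on CBIS-DDSM) are not reproduced.

```python
"""Toy Grey Wolf Optimizer sketch for DNN hyperparameter search (stand-in for IGWO-SOA)."""
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([1e-4, 0.0])        # lower bounds: learning rate, dropout
hi = np.array([1e-1, 0.6])        # upper bounds

def objective(h):
    """Placeholder for 'train the DNN with hyperparameters h, return validation loss'."""
    lr, drop = h
    return (np.log10(lr) + 2.5) ** 2 + (drop - 0.3) ** 2

n_wolves, n_iter = 12, 40
wolves = rng.uniform(lo, hi, size=(n_wolves, 2))

for t in range(n_iter):
    fitness = np.array([objective(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best candidates
    a = 2.0 * (1 - t / n_iter)                             # exploration weight decays to 0
    for i in range(n_wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(2) - a
            C = 2 * rng.random(2)
            D = np.abs(C * leader - wolves[i])
            candidates.append(leader - A * D)
        wolves[i] = np.clip(np.mean(candidates, axis=0), lo, hi)

best = wolves[np.argmin([objective(w) for w in wolves])]
print("best hyperparameters found (lr, dropout):", best)
```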

24 pages, 2151 KiB  
Article
Federated Learning-Based Intrusion Detection in IoT Networks: Performance Evaluation and Data Scaling Study
by Nurtay Albanbay, Yerlan Tursynbek, Kalman Graffi, Raissa Uskenbayeva, Zhuldyz Kalpeyeva, Zhastalap Abilkaiyr and Yerlan Ayapov
J. Sens. Actuator Netw. 2025, 14(4), 78; https://doi.org/10.3390/jsan14040078 - 23 Jul 2025
Viewed by 540
Abstract
This paper presents a large-scale empirical study aimed at identifying the optimal local deep learning model and data volume for deploying intrusion detection systems (IDS) on resource-constrained IoT devices using federated learning (FL). While previous studies on FL-based IDS for IoT have primarily focused on maximizing accuracy, they often overlook the computational limitations of IoT hardware and the feasibility of local model deployment. In this work, three deep learning architectures—a deep neural network (DNN), a convolutional neural network (CNN), and a hybrid CNN+BiLSTM—are trained using the CICIoT2023 dataset within a federated learning environment simulating up to 150 IoT devices. The study evaluates how detection accuracy, convergence speed, and inference costs (latency and model size) vary across different local data scales and model complexities. Results demonstrate that CNN achieves the best trade-off between detection performance and computational efficiency, reaching ~98% accuracy with low latency and a compact model footprint. The more complex CNN+BiLSTM architecture yields slightly higher accuracy (~99%) at a significantly greater computational cost. Deployment tests on Raspberry Pi 5 devices confirm that all three models can be effectively implemented on real-world IoT edge hardware. These findings offer practical guidance for researchers and practitioners in selecting scalable and lightweight IDS models suitable for real-world federated IoT deployments, supporting secure and efficient anomaly detection in urban IoT networks. Full article
(This article belongs to the Special Issue Federated Learning: Applications and Future Directions)
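
The federated round structure used for IDS training can be sketched as local training followed by server-side weight averaging. The tiny MLP, the 46-feature tabular input, random placeholder data, and plain FedAvg are assumptions; the study's CNN / CNN+BiLSTM architectures and CICIoT2023 preprocessing are not reproduced here.

```python
"""Sketch of a FedAvg round over simulated IoT clients."""
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES, N_CLASSES, N_CLIENTS = 46, 8, 5

def make_model():
    return nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))

def local_train(model, X, y, epochs=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model.state_dict(), len(X)

def fed_avg(states_and_sizes):
    """Weighted average of client state_dicts (weights = local sample counts)."""
    total = sum(n for _, n in states_and_sizes)
    avg = copy.deepcopy(states_and_sizes[0][0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total) for state, n in states_and_sizes)
    return avg

global_model = make_model()
clients = [(torch.randn(128, N_FEATURES), torch.randint(0, N_CLASSES, (128,)))
           for _ in range(N_CLIENTS)]          # placeholder per-device traffic records

for rnd in range(3):                           # federated rounds
    updates = []
    for X, y in clients:
        local = make_model()
        local.load_state_dict(global_model.state_dict())
        updates.append(local_train(local, X, y))
    global_model.load_state_dict(fed_avg(updates))
    print(f"round {rnd}: aggregated {len(updates)} client updates")
```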

30 pages, 9222 KiB  
Article
Using Deep Learning in Forecasting the Production of Electricity from Photovoltaic and Wind Farms
by Michał Pikus, Jarosław Wąs and Agata Kozina
Energies 2025, 18(15), 3913; https://doi.org/10.3390/en18153913 - 23 Jul 2025
Viewed by 289
Abstract
Accurate forecasting of electricity production is crucial for the stability of the entire energy sector. However, predicting future renewable energy production and its value is difficult due to the complex processes that affect production from renewable energy sources. In this article, we examine the performance of basic deep learning models for electricity forecasting. We designed deep learning models, including recurrent neural networks (RNNs) based mainly on long short-term memory (LSTM) networks and gated recurrent units (GRUs), convolutional neural networks (CNNs), temporal fusion transformers (TFTs), and combined architectures. To achieve this goal, we created our own benchmarks and used tools that automatically select network architectures and parameters. Data were obtained as part of the NCBR grant (the National Center for Research and Development, Poland). These data contain daily records of all the recorded parameters from individual solar and wind farms over the past three years. The experimental results indicate that the LSTM models significantly outperformed the other models in terms of forecasting. In this paper, multilayer deep neural network (DNN) architectures are described, and the results are provided for all the methods. This publication is based on the results obtained within the framework of the research and development project “POIR.01.01.01-00-0506/21”, realized in the years 2022–2023. The project was co-financed by the European Union under the Smart Growth Operational Programme 2014–2020. Full article

26 pages, 4049 KiB  
Article
A Versatile UAS Development Platform Able to Support a Novel Tracking Algorithm in Real-Time
by Dan-Marius Dobrea and Matei-Ștefan Dobrea
Aerospace 2025, 12(8), 649; https://doi.org/10.3390/aerospace12080649 - 22 Jul 2025
Viewed by 314
Abstract
A primary objective of this research is the development of an innovative algorithm capable of tracking a drone in real time. This objective serves as a fundamental requirement across various applications, including collision avoidance, formation flying, and the interception of moving targets. Nonetheless, regardless of the efficacy of any detection algorithm, achieving 100% performance remains unattainable. Deep neural networks (DNNs) were employed to enhance this performance. To facilitate real-time operation, the DNN must be executed within a Deep Learning Processing Unit (DPU), Neural Processing Unit (NPU), Tensor Processing Unit (TPU), or Graphics Processing Unit (GPU) system on board the UAV. Given the constraints of these processing units, it may be necessary to quantize the DNN or utilize a less complex variant, resulting in an additional reduction in performance. However, precise target detection at each control step is imperative for effective flight path control. By integrating multiple algorithms, the developed system can effectively track UAVs with improved detection performance. Furthermore, this paper aims to establish a versatile Unmanned Aerial System (UAS) development platform constructed using open-source components and possessing the capability to adapt and evolve seamlessly throughout the development and post-production phases. Full article
(This article belongs to the Section Aeronautics)

17 pages, 3065 KiB  
Article
Soot Mass Concentration Prediction at the GPF Inlet of GDI Engine Based on Machine Learning Methods
by Zhiyuan Hu, Zeyu Liu, Jiayi Shen, Shimao Wang and Piqiang Tan
Energies 2025, 18(14), 3861; https://doi.org/10.3390/en18143861 - 20 Jul 2025
Viewed by 208
Abstract
To improve the prediction accuracy of soot load in gasoline particulate filters (GPFs) and the control accuracy during GPF regeneration, this study developed a prediction model to predict the soot mass concentration at the GPF inlet of gasoline direct injection (GDI) engines using advanced machine learning methods. Three machine learning approaches, namely, support vector regression (SVR), deep neural network (DNN), and a Stacking integration model of SVR and DNN, were employed, respectively, to predict the soot mass concentration at the GPF inlet. The input data includes engine speed, torque, ignition timing, throttle valve opening angle, fuel injection pressure, and pulse width. Exhaust gas soot mass concentration at the three-way catalyst (TWC) outlet is obtained by an engine bench test. The results show that the correlation coefficients (R2) of SVR, DNN, and Stacking integration model of SVR and DNN are 0.937, 0.984, and 0.992, respectively, and the prediction ranges of soot mass concentration are 0–0.038 mg/s, 0–0.030 mg/s, and 0–0.07 mg/s, respectively. The distribution, median, and data density of prediction results obtained by the three machine learning approaches fit well with the test results. However, the prediction result of the SVR model is poor when the soot mass concentration exceeds 0.038 mg/s. The median of the prediction result obtained by the DNN model is closer to the test result, specifically for data points in the 25–75% range. However, there are a few negative prediction results in the test dataset due to overfitting. Integrating SVR and DNN models through stacked models extends the predictive range of a single SVR or DNN model while mitigating the overfitting of DNN models. The results of the study can serve as a reference for the development of accurate prediction algorithms to estimate soot loads in GPFs, which in turn can provide some basis for the control of the particulate mass and particle number (PN) emitted from GDI engines. Full article
(This article belongs to the Special Issue Internal Combustion Engines: Research and Applications—3rd Edition)
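
A minimal sketch of a stacked SVR + DNN regressor of the kind compared in this abstract follows, predicting inlet soot mass concentration from engine operating parameters. The six input columns mirror the listed features (speed, torque, ignition timing, throttle angle, injection pressure, pulse width), but the data are random placeholders, and the base-model hyperparameters and meta-learner are assumptions.

```python
"""Sketch of a stacking ensemble combining SVR and an MLP regressor."""
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((600, 6))                                  # placeholder engine operating points
y = 0.04 * X[:, 0] * X[:, 1] + 0.005 * rng.random(600)    # placeholder soot mass flow (mg/s)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("svr", make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1e-3))),
        ("dnn", make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 32),
                                           max_iter=2000, random_state=0))),
    ],
    final_estimator=Ridge(),                              # meta-learner combining the two
)
stack.fit(X_tr, y_tr)
print("stacked model R^2:", r2_score(y_te, stack.predict(X_te)))
```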

16 pages, 5468 KiB  
Article
Alpine Meadow Fractional Vegetation Cover Estimation Using UAV-Aided Sentinel-2 Imagery
by Kai Du, Yi Shao, Naixin Yao, Hongyan Yu, Shaozhong Ma, Xufeng Mao, Litao Wang and Jianjun Wang
Sensors 2025, 25(14), 4506; https://doi.org/10.3390/s25144506 - 20 Jul 2025
Viewed by 304
Abstract
Fractional Vegetation Cover (FVC) is a crucial indicator describing vegetation conditions and provides essential data for ecosystem health assessments. However, due to the low and sparse vegetation in alpine meadows, it is challenging to obtain pure vegetation pixels from Sentinel-2 imagery, resulting in errors in the FVC estimation using traditional pixel dichotomy models. This study integrated Sentinel-2 imagery with unmanned aerial vehicle (UAV) data and utilized the pixel dichotomy model together with four machine learning algorithms, namely Random Forest (RF), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Deep Neural Network (DNN), to estimate FVC in an alpine meadow region. First, FVC was preliminarily estimated using the pixel dichotomy model combined with nine vegetation indices applied to Sentinel-2 imagery. The performance of these estimates was evaluated against reference FVC values derived from centimeter-level UAV data. Subsequently, four machine learning models were employed for an accurate FVC inversion, using the estimated FVC values and UAV-derived reference FVC as inputs, following feature importance ranking and model parameter optimization. The results showed that: (1) Machine learning algorithms based on Sentinel-2 and UAV imagery effectively improved the accuracy of FVC estimation in alpine meadows. The DNN-based FVC estimation performed best, with a coefficient of determination of 0.82 and a root mean square error (RMSE) of 0.09. (2) In vegetation coverage estimation based on the pixel dichotomy model, different vegetation indices demonstrated varying performances across areas with different FVC levels. The GNDVI-based FVC achieved a higher accuracy (RMSE = 0.08) in high-vegetation coverage areas (FVC > 0.7), while the NIRv-based FVC and the SR-based FVC performed better (RMSE = 0.10) in low-vegetation coverage areas (FVC < 0.4). The method provided in this study can significantly enhance FVC estimation accuracy with limited fieldwork, contributing to alpine meadow monitoring on the Qinghai–Tibet Plateau. Full article
(This article belongs to the Section Remote Sensors)
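
The pixel dichotomy step described above converts a vegetation index image into a first-pass FVC estimate using soil and full-vegetation endmembers, FVC = (VI - VI_soil) / (VI_veg - VI_soil), which is then refined against UAV reference FVC by the machine learning regressors. The sketch below assumes NDVI as the index, 5th/95th percentiles as the bare-soil and pure-vegetation endmembers, and a random placeholder image; the study evaluates nine indices and tunes the regressors separately.

```python
"""Sketch of the pixel dichotomy model for first-pass FVC estimation."""
import numpy as np

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.05, 0.85, size=(200, 200))   # placeholder Sentinel-2 NDVI tile

# Endmembers: values representing bare soil and fully vegetated pixels.
ndvi_soil = np.percentile(ndvi, 5)
ndvi_veg = np.percentile(ndvi, 95)

# Pixel dichotomy model: FVC = (VI - VI_soil) / (VI_veg - VI_soil), clipped to [0, 1].
fvc = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

print("mean first-pass FVC:", round(float(fvc.mean()), 3))
# In the study, FVC estimates from several indices plus UAV-derived reference FVC
# then feed a DNN / RF / XGBoost / LightGBM regressor for the final estimate.
```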