Search Results (22)

Search Parameters:
Keywords = Jetson nano (B01)

19 pages, 3331 KiB  
Article
Low-Light Image Enhancement Using Deep Learning: A Lightweight Network with Synthetic and Benchmark Dataset Evaluation
by Manuel J. C. S. Reis
Appl. Sci. 2025, 15(11), 6330; https://doi.org/10.3390/app15116330 - 4 Jun 2025
Viewed by 1436
Abstract
Low-light conditions often lead to severe degradation in image quality, impairing critical computer vision tasks in applications such as surveillance and mobile imaging. In this paper, we propose a lightweight deep learning framework for low-light image enhancement, designed to balance visual quality with computational efficiency, with potential for deployment in latency-sensitive and resource-constrained environments. The architecture builds upon a UNet-inspired encoder–decoder structure, enhanced with attention modules and trained using a combination of perceptual and structural loss functions. Our training strategy utilizes a hybrid dataset composed of both real low-light images and synthetically generated image pairs created through controlled exposure adjustment and noise modeling. Experimental results on benchmark datasets such as LOL and SID demonstrate that our model achieves a Peak Signal-to-Noise Ratio (PSNR) of up to 28.4 dB and a Structural Similarity Index (SSIM) of 0.88 while maintaining a small parameter footprint (~1.3 M) and low inference latency (~6 FPS on Jetson Nano). The proposed approach offers a promising solution for industrial applications such as real-time surveillance, mobile photography, and embedded vision systems.
(This article belongs to the Special Issue Image Processing: Technologies, Methods, Apparatus)
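As a rough illustration of the kind of architecture this abstract describes, the sketch below builds a UNet-style encoder–decoder with a simple channel-attention gate in PyTorch. It is a minimal stand-in, not the authors' network; the layer widths, attention design, and the `TinyUNet` name are all assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate: reweight channels by global context."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class TinyUNet(nn.Module):
    """One-level encoder-decoder with a skip connection and attention."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.att = ChannelAttention(32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                 # full-resolution features
        e2 = self.att(self.enc2(e1))      # downsampled + attention
        d = self.up(e2)                   # back to full resolution
        return torch.sigmoid(self.out(torch.cat([e1, d], dim=1)))

model = TinyUNet()
print(sum(p.numel() for p in model.parameters()), "parameters")
print(model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```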

16 pages, 15339 KiB  
Article
MLKD-Net: Lightweight Single Image Dehazing via Multi-Head Large Kernel Attention
by Jiwon Moon and Jongyoul Park
Appl. Sci. 2025, 15(11), 5858; https://doi.org/10.3390/app15115858 - 23 May 2025
Viewed by 435
Abstract
Haze significantly degrades image quality by reducing contrast and blurring object boundaries, which impairs the performance of computer vision systems. Among various approaches, single-image dehazing remains particularly challenging due to the absence of depth information. While Vision Transformer (ViT)-based models have achieved remarkable results by leveraging multi-head attention and large effective receptive fields, their high computational complexity limits their applicability in real-time and embedded systems. To address this limitation, we propose MLKD-Net, a lightweight CNN-based model that incorporates a novel Multi-Head Large Kernel Block (MLKD), which is based on the Multi-Head Large Kernel Attention (MLKA) mechanism. This structure preserves the benefits of large receptive fields and a multi-head design while also ensuring compactness and computational efficiency. MLKD-Net achieves a PSNR of 37.42 dB on the SOTS-Outdoor dataset while using 90.9% fewer parameters than leading Transformer-based models. Furthermore, it demonstrates real-time performance with 55.24 ms per image (18.2 FPS) on the NVIDIA Jetson Orin Nano in TensorRT-INT8 mode. These results highlight its effectiveness and practicality for resource-constrained, real-time image dehazing applications.
(This article belongs to the Section Robotics and Automation)
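For intuition, large-kernel attention in the spirit of MLKA is usually decomposed into a depthwise convolution, a dilated depthwise convolution, and a pointwise convolution, with a multi-head variant splitting channels into groups. The PyTorch sketch below is an assumption-laden illustration of that decomposition, not MLKD-Net itself.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Approximate a large (~21x21) receptive field with cheap depthwise convs."""
    def __init__(self, dim):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)                    # local context
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)  # long range
        self.pw = nn.Conv2d(dim, dim, 1)                                            # channel mixing

    def forward(self, x):
        return x * self.pw(self.dw_dilated(self.dw(x)))  # attention map used as a gate

class MultiHeadLKA(nn.Module):
    """Hypothetical multi-head variant: split channels, one attention branch per head."""
    def __init__(self, dim, heads=4):
        super().__init__()
        assert dim % heads == 0
        self.heads = nn.ModuleList([LargeKernelAttention(dim // heads) for _ in range(heads)])

    def forward(self, x):
        chunks = x.chunk(len(self.heads), dim=1)
        return torch.cat([h(c) for h, c in zip(self.heads, chunks)], dim=1)

x = torch.randn(1, 64, 32, 32)
print(MultiHeadLKA(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```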

17 pages, 5373 KiB  
Article
Real-Time Overhead Power Line Component Detection on Edge Computing Platforms
by Nico Surantha
Computers 2025, 14(4), 134; https://doi.org/10.3390/computers14040134 - 5 Apr 2025
Viewed by 810
Abstract
Regular inspection of overhead power line (OPL) systems is required to detect damage early and ensure the efficient and uninterrupted transmission of high-voltage electric power. In the past, these checks were conducted using line crawling, inspection robots, and helicopters. Yet, these traditional solutions are slow, costly, and hazardous. Advancements in drones, edge computing platforms, deep learning, and high-resolution cameras may enable real-time OPL inspections using drones. Some research has been conducted on OPL inspection with autonomous drones; however, it is essential to explore how to achieve real-time OPL component detection effectively and efficiently. In this paper, we report our research on OPL component detection on edge computing devices. An original OPL dataset was generated for this study. We evaluate detection performance with training datasets of several sizes and implement simple data augmentation to extend them. The performance of the YOLOv7 model is also evaluated on several edge computing platforms, such as the Raspberry Pi 4B, Jetson Nano, and Jetson Orin Nano, and model quantization is used to improve the real-time performance of the detection model. The simulation results show that the proposed YOLOv7 model can achieve a mean average precision (mAP) of over 90%, while the hardware evaluation shows that real-time detection performance can be achieved in several configurations.
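The deployment path hinted at here — quantizing a PyTorch detector for Jetson-class hardware — commonly goes through ONNX and TensorRT. The sketch below is a hedged illustration, not the paper's pipeline: the model is a toy stand-in for the trained YOLOv7, and file names and input size are placeholders.

```python
import torch
import torch.nn as nn

# Toy placeholder standing in for the trained YOLOv7 detector.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 255, 1),
).eval()

dummy = torch.randn(1, 3, 640, 640)  # YOLO-style input resolution
torch.onnx.export(model, dummy, "opl_detector.onnx", opset_version=12,
                  input_names=["images"], output_names=["preds"])

# On the Jetson, build a reduced-precision TensorRT engine from the ONNX file:
#   trtexec --onnx=opl_detector.onnx --fp16 --saveEngine=opl_detector.engine
# (INT8 additionally requires a calibration dataset.)
```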

21 pages, 10344 KiB  
Article
Efficient Deployment of Peanut Leaf Disease Detection Models on Edge AI Devices
by Zekai Lv, Shangbin Yang, Shichuang Ma, Qiang Wang, Jinti Sun, Linlin Du, Jiaqi Han, Yufeng Guo and Hui Zhang
Agriculture 2025, 15(3), 332; https://doi.org/10.3390/agriculture15030332 - 2 Feb 2025
Cited by 3 | Viewed by 1329
Abstract
The intelligent transformation of crop leaf disease detection has driven the use of deep neural network algorithms to develop more accurate disease detection models. In resource-constrained environments, the deployment of crop leaf disease detection models on the cloud introduces challenges such as communication latency and privacy concerns. Edge AI devices offer lower communication latency and enhanced scalability. To achieve the efficient deployment of crop leaf disease detection models on edge AI devices, a dataset of 700 images depicting peanut leaf spot, scorch spot, and rust diseases was collected. The YOLOX-Tiny network was utilized to conduct deployment experiments with the peanut leaf disease detection model on the Jetson Nano B01. The experiments initially focused on three aspects of efficient deployment optimization: the fusion of rectified linear unit (ReLU) and convolution operations, the integration of Efficient Non-Maximum Suppression for TensorRT (EfficientNMS_TRT) to accelerate post-processing within the TensorRT model, and the conversion of model formats from number of samples, channels, height, width (NCHW) to number of samples, height, width, and channels (NHWC) in the TensorFlow Lite model. Additionally, experiments were conducted to compare the memory usage, power consumption, and inference latency between the two inference frameworks, as well as to evaluate the real-time video detection performance using DeepStream. The results demonstrate that the fusion of ReLU activation functions with convolution operations reduced the inference latency by 55.5% compared to the use of the Sigmoid linear unit (SiLU) activation alone. In the TensorRT model, the integration of the EfficientNMS_TRT module accelerated post-processing, leading to a reduction in the inference latency of 19.6% and an increase in the frames per second (FPS) of 20.4%. In the TensorFlow Lite model, conversion to the NHWC format decreased the model conversion time by 88.7% and reduced the inference latency by 32.3%. These three efficient deployment optimization methods effectively decreased the inference latency and enhanced the inference efficiency. Moreover, a comparison between the two frameworks revealed that TensorFlow Lite exhibited memory usage reductions of 15% to 20% and power consumption decreases of 15% to 25% compared to TensorRT. Additionally, TensorRT achieved inference latency reductions of 53.2% to 55.2% relative to TensorFlow Lite. Consequently, TensorRT is deemed suitable for tasks requiring strong real-time performance and low latency, whereas TensorFlow Lite is more appropriate for scenarios with constrained memory and power resources. Additionally, the integration of DeepStream and EfficientNMS_TRT was found to optimize memory and power utilization, thereby enhancing the speed of real-time video detection. A detection rate of 28.7 FPS was achieved at a resolution of 1280 × 720. These experiments validate the feasibility and advantages of deploying crop leaf disease detection models on edge AI devices.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
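One concrete detail worth unpacking: Keras models are natively NHWC, which is the layout TensorFlow Lite prefers, so converting a model already in that layout avoids the transpose overhead the abstract measures. A minimal conversion sketch, with a toy model and placeholder file name:

```python
import tensorflow as tf

# Toy stand-in for the detection backbone; Keras tensors are NHWC by default.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(416, 416, 3)),  # height, width, channels
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default weight quantization
with open("detector.tflite", "wb") as f:
    f.write(converter.convert())
```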

12 pages, 3638 KiB  
Article
Exploring Edge Computing for Sustainable CV-Based Worker Detection in Construction Site Monitoring: Performance and Feasibility Analysis
by Xue Xiao, Chen Chen, Martin Skitmore, Heng Li and Yue Deng
Buildings 2024, 14(8), 2299; https://doi.org/10.3390/buildings14082299 - 25 Jul 2024
Cited by 2 | Viewed by 1573
Abstract
This research explores edge computing for construction site monitoring using computer vision (CV)-based worker detection methods. The feasibility of edge computing is validated by testing worker detection models (YOLOv5 and YOLOv8) on local computers and three edge computing devices (Jetson Nano, Raspberry Pi 4B, and Jetson Xavier NX). The results show comparable mAP values for all devices, with the local computer processing frames six times faster than the Jetson Xavier NX. This study contributes by proposing an edge computing solution to address data security, installation complexity, and time-delay issues in CV-based construction site monitoring. This approach also enhances data sustainability by mitigating potential risks associated with data loss, privacy breaches, and network connectivity issues. Additionally, it illustrates the practicality of employing edge computing devices for automated visual monitoring and provides valuable information for construction managers to select the appropriate device.
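Cross-device comparisons like this one typically reduce to a simple throughput measurement. A generic sketch of such a benchmark is below; the `run_inference` callable is whatever model wrapper each device exposes, and the dummy workload merely stands in for a detector forward pass.

```python
import time
import numpy as np

def measure_fps(run_inference, frames, warmup=10):
    """Return frames per second of run_inference over a list of input frames."""
    for frame in frames[:warmup]:       # warm-up: caches, JIT, clock ramp-up
        run_inference(frame)
    start = time.perf_counter()
    for frame in frames:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Example with a dummy workload standing in for a YOLO forward pass.
frames = [np.random.rand(640, 640, 3).astype(np.float32) for _ in range(100)]
print(f"{measure_fps(lambda f: f.mean(), frames):.1f} FPS")
```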

25 pages, 18356 KiB  
Article
Implementation of Intelligent Indoor Service Robot Based on ROS and Deep Learning
by Mingyang Liu, Min Chen, Zhigang Wu, Bin Zhong and Wangfen Deng
Machines 2024, 12(4), 256; https://doi.org/10.3390/machines12040256 - 11 Apr 2024
Cited by 8 | Viewed by 3406
Abstract
When faced with challenges such as adapting to dynamic environments and handling ambiguous identification, indoor service robots encounter manifold difficulties. This paper addresses these issues by proposing the design of a service robot equipped with precise small-object recognition, autonomous path planning, and obstacle-avoidance capabilities. We conducted in-depth research on the suitability of three SLAM algorithms (GMapping, Hector-SLAM, and Cartographer) in indoor environments and explored their performance disparities. On this foundation, we elected to use the STM32F407VET6 and the Nvidia Jetson Nano B01 as the processing controllers: the STM32 firmware runs on the FreeRTOS operating system, while the Jetson Nano software is built on ROS (Robot Operating System). The robot employs a differential-drive chassis, enabling successful autonomous path planning and obstacle-avoidance maneuvers. Within indoor environments, we used the YOLOv3 algorithm for target detection, achieving precise target identification. Through a series of simulations and real-world experiments, we validated the performance and feasibility of the robot, including its mapping, navigation, and target detection functionalities. Experimental results demonstrate the robot's outstanding performance and accuracy in indoor environments, offering users efficient service and presenting new avenues and methodologies for the development of indoor service robots.
(This article belongs to the Special Issue Design and Applications of Service Robots)
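To give a flavor of the ROS side of such a design, the minimal node below publishes velocity commands to a differential-drive base. It is illustrative only, not the authors' code; the topic name and speeds are assumptions.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_demo")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)  # consumed by the base controller
rate = rospy.Rate(10)  # 10 Hz command stream

cmd = Twist()
cmd.linear.x = 0.2   # m/s forward
cmd.angular.z = 0.3  # rad/s yaw

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```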

33 pages, 11426 KiB  
Article
Plant Disease Identification Using Machine Learning Algorithms on Single-Board Computers in IoT Environments
by George Routis, Marios Michailidis and Ioanna Roussaki
Electronics 2024, 13(6), 1010; https://doi.org/10.3390/electronics13061010 - 7 Mar 2024
Cited by 16 | Viewed by 4184
Abstract
This paper investigates the usage of machine learning (ML) algorithms on agricultural images with the aim of extracting information regarding the health of plants. More specifically, a custom convolutional neural network is trained on Google Colab using photos of healthy and unhealthy plants. The trained models are evaluated on various single-board computers (SBCs) with different essential characteristics. The Raspberry Pi 3 and Raspberry Pi 4 are the current mainstream SBCs; they rely on their Central Processing Units (CPUs) for processing and are widely used to execute ML algorithms with popular libraries such as TensorFlow. NVIDIA devices follow a different rationale, executing ML algorithms on a Graphics Processing Unit (GPU), whose architecture differs from a CPU's and allows high parallelization across Compute Unified Device Architecture (CUDA) cores. A third approach uses a Tensor Processing Unit (TPU), carried by the Google Coral Dev Board, which is an Application-Specific Integrated Circuit (ASIC) specialized for accelerating ML algorithms such as Convolutional Neural Networks (CNNs) via TensorFlow Lite. This study experiments with all of the above-mentioned devices, executing custom CNN models with the aim of identifying plant diseases. In this respect, several evaluation metrics are used, including knowledge extraction time, CPU utilization, Random Access Memory (RAM) usage, swap memory, temperature, current (mA), voltage (V), and power consumption (mW).
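Several of the listed metrics (CPU utilization, RAM, swap, temperature) can be sampled on any of these SBCs with psutil. A rough sketch follows; the temperature sensor name is left generic as an assumption, since it varies by board.

```python
import psutil

def sample_metrics(samples=5, interval=1.0):
    """Collect basic resource metrics while a model is running."""
    rows = []
    for _ in range(samples):
        temps = psutil.sensors_temperatures()  # sensor names differ per board
        rows.append({
            "cpu_percent": psutil.cpu_percent(interval=interval),
            "ram_mb": psutil.virtual_memory().used / 2**20,
            "swap_mb": psutil.swap_memory().used / 2**20,
            "temp_c": next(iter(temps.values()))[0].current if temps else None,
        })
    return rows

for row in sample_metrics():
    print(row)
```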

24 pages, 7815 KiB  
Article
AI on the Road: NVIDIA Jetson Nano-Powered Computer Vision-Based System for Real-Time Pedestrian and Priority Sign Detection
by Kornel Sarvajcz, Laszlo Ari and Jozsef Menyhart
Appl. Sci. 2024, 14(4), 1440; https://doi.org/10.3390/app14041440 - 9 Feb 2024
Cited by 13 | Viewed by 7639
Abstract
Advances in information and signal processing, driven by artificial intelligence techniques and recent breakthroughs in deep learning, have significantly impacted autonomous driving by enhancing safety and reducing the dependence on human intervention. Generally, prevailing ADASs (advanced driver assistance systems) incorporate costly components, making them financially unattainable for a substantial portion of the population. This paper proposes a solution: an embedded system designed for real-time pedestrian and priority sign detection, offering affordability and universal applicability across vehicles. The suggested system, which comprises two cameras, an NVIDIA Jetson Nano B01 low-power edge device, and an LCD (liquid crystal display), ensures seamless integration into a vehicle without occupying substantial space and provides a cost-effective alternative. The primary focus of this research is addressing accidents caused by the failure to yield priority to other drivers or pedestrians. Our study stands out from existing research by concurrently addressing traffic sign recognition and pedestrian detection, concentrating on five crucial object classes: pedestrians, pedestrian crossings (signs and road paintings treated separately), stop signs, and give way signs. Object detection was executed using a lightweight, custom-trained CNN (convolutional neural network), SSD (Single Shot Detector)-MobileNet, implemented on the Jetson Nano. To tailor the model for this specific application, the pre-trained network underwent training on our custom dataset of images captured on the road under diverse lighting and traffic conditions. The proposed system delivers promising results, positioning it as a viable candidate for real-time implementation, and its contributions are noteworthy in advancing the safety and accessibility of autonomous driving technologies.
(This article belongs to the Section Computing and Artificial Intelligence)
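On the Jetson Nano, SSD-MobileNet pipelines of this kind are often run through NVIDIA's jetson-inference Python bindings. The sketch below uses the stock pretrained ssd-mobilenet-v2 rather than the authors' custom-trained network, and the camera and display URIs are placeholders.

```python
import jetson.inference
import jetson.utils

# Stock pretrained detector as a stand-in for the custom-trained model.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # placeholder camera URI
display = jetson.utils.videoOutput("display://0")  # placeholder display URI

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                   # draws overlays in-place
    for d in detections:
        print(net.GetClassDesc(d.ClassID), f"{d.Confidence:.2f}")
    display.Render(img)
```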

23 pages, 27063 KiB  
Article
A Smart Cane Based on 2D LiDAR and RGB-D Camera Sensor-Realizing Navigation and Obstacle Recognition
by Chunming Mai, Huaze Chen, Lina Zeng, Zaijin Li, Guojun Liu, Zhongliang Qiao, Yi Qu, Lianhe Li and Lin Li
Sensors 2024, 24(3), 870; https://doi.org/10.3390/s24030870 - 29 Jan 2024
Cited by 9 | Viewed by 8435
Abstract
In this paper, an intelligent blind guide system based on 2D LiDAR and RGB-D camera sensing is proposed, and the system is mounted on a smart cane. The intelligent guide system relies on 2D LiDAR, an RGB-D camera, an IMU, GPS, a Jetson Nano B01, an STM32, and other hardware. Its main advantage is that the distance between the smart cane and obstacles can be measured by 2D LiDAR using the Cartographer algorithm, thus achieving simultaneous localization and mapping (SLAM). At the same time, through an improved YOLOv5 algorithm, pedestrians, vehicles, pedestrian crosswalks, traffic lights, warning posts, stone piers, tactile paving, and other objects in front of the visually impaired can be quickly and effectively identified. Laser SLAM and improved YOLOv5 obstacle identification tests were carried out inside a teaching building on the campus of Hainan Normal University and on a pedestrian crossing on Longkun South Road in Haikou City, Hainan Province. The results show that the system can drive the omnidirectional wheels at the bottom of the smart cane, giving the cane a self-leading guide function, like a "guide dog", which can effectively guide the visually impaired around obstacles to a predetermined destination and quickly identify obstacles along the way. The mapping and positioning accuracy of the system's laser SLAM is 1 m ± 7 cm, and its laser SLAM speed is 25~31 FPS, enabling short-distance obstacle avoidance and navigation both indoors and outdoors. The improved YOLOv5 identifies 86 types of objects. The recognition rates for pedestrian crosswalks and vehicles are 84.6% and 71.8%, respectively; the overall recognition rate across the 86 object types is 61.2%, and the obstacle recognition speed of the system is 25–26 FPS.
(This article belongs to the Section Remote Sensors)

20 pages, 3384 KiB  
Article
MCFP-YOLO Animal Species Detector for Embedded Systems
by Mai Ibraheam, Kin Fun Li and Fayez Gebali
Electronics 2023, 12(24), 5044; https://doi.org/10.3390/electronics12245044 - 18 Dec 2023
Cited by 2 | Viewed by 2065
Abstract
Advances in deep learning have led to the development of various animal species detection models suited for different environments. Building on this, our research introduces a detection model that efficiently handles both batch and real-time processing. It achieves this by integrating a motion-based frame selection algorithm and a two-stage pipelining–dataflow hybrid parallel processing approach. These modifications significantly reduced the processing delay and power consumption of the proposed MCFP-YOLO detector, particularly on embedded systems with limited resources, without trading off the accuracy of our animal species detection system. For field applications, the proposed MCFP-YOLO model was deployed and tested on two embedded devices: the Raspberry Pi 4B (RP4B) and the Jetson Nano. While the Jetson Nano provided faster processing, the RP4B was selected for its lower power consumption and balanced cost–performance ratio, making it particularly suitable for extended use in remote areas.
(This article belongs to the Special Issue Embedded Systems for Neural Network Applications)
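The motion-based frame selection idea can be illustrated with simple frame differencing in OpenCV: frames are forwarded to the detector only when enough pixels change. A hedged sketch follows; the thresholds are arbitrary, and this is not the paper's exact algorithm.

```python
import cv2

def select_moving_frames(video_path, pixel_thresh=25, min_changed=0.01):
    """Yield only frames whose difference from the previous frame is large."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)
        if (diff > pixel_thresh).mean() > min_changed:  # fraction of changed pixels
            yield frame                                 # forward to the detector
        prev_gray = gray
    cap.release()
```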

18 pages, 2479 KiB  
Article
Implementation of an Edge-Computing Vision System on Reduced-Board Computers Embedded in UAVs for Intelligent Traffic Management
by Sergio Bemposta Rosende, Sergio Ghisler, Javier Fernández-Andrés and Javier Sánchez-Soriano
Drones 2023, 7(11), 682; https://doi.org/10.3390/drones7110682 - 20 Nov 2023
Cited by 15 | Viewed by 4409
Abstract
Advancements in autonomous driving have seen unprecedented improvement in recent years. This work addresses the challenge of enhancing the navigation of autonomous vehicles in complex urban environments such as intersections and roundabouts through the integration of computer vision and unmanned aerial vehicles (UAVs). UAVs, owing to their aerial perspective, offer a more effective means of detecting vehicles involved in these maneuvers. The primary objective is to develop, evaluate, and compare different computer vision models and reduced-board (and low-power) hardware for optimizing traffic management in these scenarios. A dataset was constructed from two sources; several models (YOLOv5, YOLOv8, DETR, and EfficientDetLite) were selected and trained; four reduced-board computers were chosen (Raspberry Pi 3B+, Raspberry Pi 4, Jetson Nano, and Google Coral); and the models were tested on these boards for edge computing in UAVs. The experiments considered training times (with the dataset and its optimized version), model metrics, inference frames per second (FPS), and energy consumption. After the experiments, the combination that best suits our use case is the YOLOv8 model on the Jetson Nano, while a combination with much higher inference speed but lower accuracy pairs the EfficientDetLite models with the Google Coral board.
(This article belongs to the Special Issue Edge Computing and IoT Technologies for Drones)

20 pages, 4912 KiB  
Article
Power Requirements Evaluation of Embedded Devices for Real-Time Video Line Detection
by Jakub Suder, Kacper Podbucki and Tomasz Marciniak
Energies 2023, 16(18), 6677; https://doi.org/10.3390/en16186677 - 18 Sep 2023
Cited by 6 | Viewed by 3869
Abstract
This paper investigates the power requirements of embedded systems during real-time processing of video sequences. During the experimental tests, four modules were compared: Raspberry Pi 4B, NVIDIA Jetson Nano, NVIDIA Jetson Xavier AGX, and NVIDIA Jetson Orin AGX. Processing speed and energy consumption were measured as a function of input frame resolution and the selected power mode. Two vision algorithms for detecting lines located in airport areas were tested. The results show that the NVIDIA Jetson modules, in appropriate power modes, have sufficient computing resources to effectively detect lines based on the camera image: the Jetson Xavier and Jetson Orin in MAXN mode handle a resolution of 1920 × 1080 pixels at 24 FPS with a power consumption of about 19 W for both algorithms tested.
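On Jetson boards, power measurements of this kind are typically logged from the tegrastats utility. A rough sampling sketch is shown below; the output field format varies across JetPack releases, so parsing is left as raw lines rather than assumed.

```python
import subprocess

# Sample Jetson telemetry (power rails, temperatures, utilization) once per second.
proc = subprocess.Popen(["tegrastats", "--interval", "1000"],
                        stdout=subprocess.PIPE, text=True)
try:
    for _ in range(10):                      # ten one-second samples
        print(proc.stdout.readline().strip())
finally:
    proc.terminate()
```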

13 pages, 8157 KiB  
Article
Experimental Study on Remaining Useful Life Prediction of Lithium-Ion Batteries Based on Three Regression Models for Electric Vehicle Application
by Vo Thanh Ha and Pham Thi Giang
Appl. Sci. 2023, 13(13), 7660; https://doi.org/10.3390/app13137660 - 28 Jun 2023
Cited by 12 | Viewed by 2951
Abstract
This paper presents three regression models that predict lithium-ion battery life for electric cars based on supervised machine learning. Linear regression, bagging regressor, and random forest regressor models are compared for the capacity prediction of lithium-ion batteries based on voltage-dependent per-cell modeling. When sufficient test data are available, the three regression algorithms are trained to give promising battery capacity predictions, and their effectiveness is demonstrated experimentally. The experimental test bench is built with an NVIDIA Jetson Nano 4 GB Developer Kit B01, a battery, an Arduino, and a voltage sensor. Model accuracy is evaluated with the mean square error (MSE), the average squared difference between measured and predicted values over the data set, and the root mean squared error (RMSE). The random forest regressor achieved the smallest errors (MSE of 516.332762; RMSE of 22.722957), while the linear regression model had the largest (MSE of 22060.500669; RMSE of 148.527777). This result shows that the random forest regressor remains the most helpful for predicting the life of lithium-ion batteries. Moreover, it enables rapid assessment of battery manufacturing processes and lets users decide to replace defective batteries when deterioration in battery performance and lifespan is identified.
(This article belongs to the Section Applied Industrial Technologies)
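The comparison described in the abstract maps directly onto scikit-learn. The sketch below reproduces the methodology on synthetic data (the real study used measured per-cell voltages, which are assumed away here), reporting MSE and RMSE for each of the three regressors.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(3.0, 4.2, size=(500, 4))                   # stand-in per-cell voltages
y = 2000 - 400 * X.sum(axis=1) + rng.normal(0, 20, 500)    # stand-in capacity values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("bagging", BaggingRegressor(random_state=0)),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: MSE={mse:.3f}  RMSE={np.sqrt(mse):.3f}")
```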

17 pages, 3171 KiB  
Article
The Performance Analysis of PSO-ResNet for the Fault Diagnosis of Vibration Signals Based on the Pipeline Robot
by Zhaotao Yu, Liang Zhang and Jongwon Kim
Sensors 2023, 23(9), 4289; https://doi.org/10.3390/s23094289 - 26 Apr 2023
Cited by 16 | Viewed by 2862
Abstract
In the context of pipeline robots, the timely detection of faults is crucial in preventing safety incidents, and robots' fault diagnosis techniques play a vital role in ensuring the reliability and safety of the entire application process. However, traditional diagnostic methods for motor drive end-bearing faults in pipeline robots are often ineffective when operating conditions are variable. An efficient alternative is the application of deep learning algorithms. This paper proposes a rolling bearing fault diagnosis method (PSO-ResNet) that combines a Particle Swarm Optimization (PSO) algorithm with a residual network. A number of vibration sensors are placed at different locations in the pipeline robot to obtain vibration signals from different parts. The input to the PSO-ResNet algorithm is a two-dimensional image obtained by continuous wavelet transform of the vibration signal. The accuracy of this method is compared with that of different types of fault diagnosis algorithms, and the experimental analysis shows that PSO-ResNet achieves higher accuracy. The algorithm was also deployed on an Nvidia Jetson Nano and a Raspberry Pi 4B; based on comparative experiments, the Nvidia Jetson Nano was chosen as the core fault diagnosis control unit of the pipeline robot for practical scenarios. However, the PSO-ResNet model needs further improvement in terms of accuracy, which is the focus of future research work.
(This article belongs to the Section Fault Diagnosis & Sensors)
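The preprocessing step — turning a 1-D vibration signal into a 2-D scalogram via the continuous wavelet transform — can be sketched with PyWavelets. The sampling rate, wavelet, and scale range below are assumptions, and the synthetic signal stands in for real sensor data.

```python
import numpy as np
import pywt

fs = 12_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Stand-in vibration signal: a 60 Hz tone buried in noise.
signal = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(t.size)

scales = np.arange(1, 129)                    # 128 rows in the scalogram
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                    # 2-D image fed to the ResNet
print(scalogram.shape)                        # (128, 12000)
```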

13 pages, 4252 KiB  
Article
Intelligent Real-Time Face-Mask Detection System with Hardware Acceleration for COVID-19 Mitigation
by Peter Sertic, Ayman Alahmar, Thangarajah Akilan, Marko Javorac and Yash Gupta
Healthcare 2022, 10(5), 873; https://doi.org/10.3390/healthcare10050873 - 9 May 2022
Cited by 23 | Viewed by 4113
Abstract
This paper proposes and implements a dedicated, hardware-accelerated, real-time face-mask detection system using deep learning (DL). The proposed face-mask detection model (MaskDetect) was benchmarked on three embedded platforms: Raspberry Pi 4B with either a Google Coral USB TPU or an Intel Neural Compute Stick 2 VPU, and NVIDIA Jetson Nano. MaskDetect was independently quantised and optimised for each hardware-accelerated implementation. An ablation study compared the proposed model and its quantised implementations on the embedded hardware configurations above against popular transfer-learning models, such as VGG16, ResNet-50V2, and InceptionV3, which are compatible with these acceleration platforms. The study revealed that MaskDetect achieved excellent average face-mask detection accuracy, above 94% on all embedded platforms except the Coral, which averaged nearly 90%. Considering detection accuracy, inference speed (frames per second (FPS)), and product cost together, the study found the Jetson Nano implementation to be the best choice for real-time face-mask detection: it achieved 94.2% detection accuracy and twice the FPS of its desktop hardware counterpart.
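Per-platform quantization of the kind described here can be sketched with TensorFlow Lite's full-integer path, which the Coral TPU requires. The model and calibration data below are toy placeholders, not MaskDetect itself.

```python
import numpy as np
import tensorflow as tf

# Toy binary classifier standing in for MaskDetect (mask / no-mask).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    for _ in range(100):                      # calibration samples set the int8 ranges
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open("maskdetect_int8.tflite", "wb") as f:
    f.write(converter.convert())
```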
