Search Results (22)

Search Parameters:
Journal = AI
Section = AI in Autonomous Systems

22 pages, 7990 KiB  
Article
Detection of Cracks in Low-Power Wind Turbines Using Vibration Signal Analysis with Empirical Mode Decomposition and Convolutional Neural Networks
by Angel H. Rangel-Rodriguez, Jose M. Machorro-Lopez, David Granados-Lieberman, J. Jesus de Santiago-Perez, Juan P. Amezquita-Sanchez and Martin Valtierra-Rodriguez
AI 2025, 6(8), 179; https://doi.org/10.3390/ai6080179 - 6 Aug 2025
Abstract
Condition monitoring and fault detection in wind turbines are essential for reducing repair and maintenance costs. Early detection of faults enables timely interventions before the damage worsens. However, existing methods often rely on costly scheduled inspections or lack the ability to effectively detect early-stage damage, particularly under different operational speeds. This article presents a methodology based on convolutional neural networks (CNNs) and empirical mode decomposition (EMD) of vibration signals for the detection of blade crack damage. The proposed approach involves acquiring vibration signals under four conditions: healthy, light, intermediate, and severe damage. EMD is then applied to extract time–frequency representations of the signals, which are subsequently converted into images. These images are analyzed by a CNN to classify the condition of the wind turbine blades. To refine the final CNN architecture, various image sizes and configuration parameters are evaluated to balance computational load and classification accuracy. The results demonstrate that combining vibration signal images, generated using the EMD method, with CNN models enables accurate classification of blade conditions, achieving 99.5% accuracy while maintaining a favorable trade-off between performance and complexity. Full article
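A minimal sketch of the signal-to-image step described above, assuming the PyEMD package (installed as EMD-signal) and a synthetic vibration signal; the image construction and dimensions are illustrative choices, not the paper's exact configuration:

```python
# Sketch: vibration signal -> IMFs via EMD -> small grayscale image for a CNN.
# Assumes PyEMD (pip install EMD-signal); the signal below is synthetic.
import numpy as np
from PyEMD import EMD

fs = 2048                                       # assumed sampling rate (Hz)
t = np.arange(fs) / fs                          # one second of signal
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * np.random.randn(t.size)

imfs = EMD().emd(signal)                        # intrinsic mode functions, (n_imfs, n_samples)

# Stack the envelopes of the first few IMFs into a fixed-size image.
n_rows, n_cols = 4, 64
img = np.zeros((n_rows, n_cols))
for i in range(min(n_rows, imfs.shape[0])):
    seg = np.abs(imfs[i]).reshape(n_cols, -1).mean(axis=1)   # coarse envelope per column
    img[i] = (seg - seg.min()) / (np.ptp(seg) + 1e-9)        # normalize to [0, 1]

# 'img' would be resized and fed to a CNN over the four damage classes.
print(img.shape)  # (4, 64)
```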

21 pages, 3139 KiB  
Article
Resilient Anomaly Detection in Fiber-Optic Networks: A Machine Learning Framework for Multi-Threat Identification Using State-of-Polarization Monitoring
by Gulmina Malik, Imran Chowdhury Dipto, Muhammad Umar Masood, Mashboob Cheruvakkadu Mohamed, Stefano Straullu, Sai Kishore Bhyri, Gabriele Maria Galimberti, Antonio Napoli, João Pedro, Walid Wakim and Vittorio Curri
AI 2025, 6(7), 131; https://doi.org/10.3390/ai6070131 - 20 Jun 2025
Viewed by 948
Abstract
We present a thorough machine-learning framework based on real-time state-of-polarization (SOP) monitoring for robust anomaly identification in optical fiber networks. We exploit SOP data under three different threat scenarios: (i) malicious or critical vibration events, (ii) overlapping mechanical disturbances, and (iii) malicious fiber tapping (eavesdropping). We use various supervised machine-learning techniques, such as k-Nearest Neighbors (k-NN), random forest, extreme gradient boosting (XGBoost), and decision trees, to classify the different vibration events. We also assess the framework’s resilience to background interference by superimposing sinusoidal noise at different frequencies and examining its effect on the polarization signatures. This analysis provides insight into how subsurface installations, subject to ambient vibrations, affect detection fidelity, and highlights the degree to which external interference distorts polarization fingerprints. Crucially, it demonstrates the system’s capacity to discern and alert on malicious vibration events even in the presence of environmental noise. Finally, we emphasize the necessity of noise-mitigation techniques in real-world implementations while providing a potent, real-time mechanism for multi-threat recognition in fiber networks. Full article
(This article belongs to the Special Issue Artificial Intelligence in Optical Communication Networks)
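As a rough illustration of the classification stage, the sketch below trains a random-forest classifier on simple statistical features of simulated SOP windows and then re-evaluates it with a superimposed sinusoid, mirroring the noise-resilience test; the data, features, and interference term are all placeholders:

```python
# Sketch: classify SOP signal windows, then test robustness to sinusoidal noise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, win = 600, 256
X_raw = rng.normal(size=(n, win))               # stand-in for SOP signal windows
y = rng.integers(0, 3, size=n)                  # three event classes (placeholder labels)

def features(w):
    # Simple per-window statistics plus the dominant FFT magnitude.
    return np.stack([w.mean(1), w.std(1), np.abs(np.fft.rfft(w, axis=1)).max(1)], axis=1)

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features(X_raw[idx_tr]), y[idx_tr])

# Superimpose a sinusoid on the raw test windows and re-extract features.
t = np.arange(win)
interference = 0.5 * np.sin(2 * np.pi * 25 * t / win)   # arbitrary frequency/amplitude
acc_clean = accuracy_score(y[idx_te], clf.predict(features(X_raw[idx_te])))
acc_noisy = accuracy_score(y[idx_te], clf.predict(features(X_raw[idx_te] + interference)))
print(acc_clean, acc_noisy)
```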

29 pages, 1754 KiB  
Systematic Review
Agentic AI Frameworks in SMMEs: A Systematic Literature Review of Ecosystemic Interconnected Agents
by Peter Adebowale Olujimi, Pius Adewale Owolawi, Refilwe Constance Mogase and Etienne Van Wyk
AI 2025, 6(6), 123; https://doi.org/10.3390/ai6060123 - 11 Jun 2025
Viewed by 2408
Abstract
This study examines the application of agentic artificial intelligence (AI) frameworks within small, medium, and micro-enterprises (SMMEs), highlighting how interconnected autonomous agents improve operational efficiency and adaptability. Using the PRISMA 2020 framework, this study systematically identified, screened, and analyzed 66 studies, including peer-reviewed and credible gray literature, published between 2019 and 2024, to assess agentic AI frameworks in SMMEs. Recognizing the constraints faced by SMMEs, such as limited scalability, high operational demands, and restricted access to advanced technologies, the review synthesizes existing research to highlight the characteristics, implementations, and impacts of agentic AI in task automation, decision-making, and ecosystem-wide collaboration. The results demonstrate the potential of agentic AI to address technological, ethical, and infrastructure barriers while promoting innovation, scalability, and competitiveness. This review contributes to the understanding of agentic AI frameworks by offering practical insights and setting the groundwork for further research into their applications in SMMEs’ dynamic and resource-constrained economic environments. Full article
(This article belongs to the Section AI in Autonomous Systems)

24 pages, 4055 KiB  
Article
Privacy-Preserving Interpretability: An Explainable Federated Learning Model for Predictive Maintenance in Sustainable Manufacturing and Industry 4.0
by Hamad Mohamed Hamdan Alzari Alshkeili, Saif Jasim Almheiri and Muhammad Adnan Khan
AI 2025, 6(6), 117; https://doi.org/10.3390/ai6060117 - 6 Jun 2025
Viewed by 1269
Abstract
Background: The development of Industry 4.0 requires digitalized manufacturing supported by Predictive Maintenance (PdM), since such practices decrease equipment failures and operational disruptions. However, its effectiveness is hindered by three key challenges: (1) data confidentiality, as traditional methods rely on centralized data sharing, raising concerns about security and regulatory compliance; (2) a lack of interpretability, where opaque AI models provide limited transparency, making it difficult for operators to trust and act on failure predictions; and (3) adaptability issues, as many existing solutions struggle to maintain a consistent performance across diverse industrial environments. Addressing these challenges requires a privacy-preserving, interpretable, and adaptive Artificial Intelligence (AI) model that ensures secure, reliable, and transparent PdM while meeting industry standards and regulatory requirements. Methods: Explainable AI (XAI) plays a crucial role in enhancing transparency and trust in PdM models by providing interpretable insights into failure predictions. Meanwhile, Federated Learning (FL) ensures privacy-preserving, decentralized model training, allowing multiple industrial sites to collaborate without sharing sensitive operational data. This research develops a sustainable privacy-preserving Explainable FL (XFL) model that integrates XAI techniques such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) into an FL structure to improve PdM’s security and interpretability capabilities. Results: The proposed XFL model enables industrial operators to interpret, validate, and refine AI-driven maintenance strategies while ensuring data privacy, accuracy, and regulatory compliance. Conclusions: This model significantly improves failure prediction, reduces unplanned downtime, and strengthens trust in AI-driven decision-making. The simulation results confirm its high reliability, achieving 98.15% accuracy with a minimal 1.85% miss rate, demonstrating its effectiveness as a scalable, secure, and interpretable solution for PdM in Industry 4.0. Full article
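A minimal federated-averaging sketch of the privacy-preserving half of such a model: each site fits a local logistic-regression failure predictor and only model weights are aggregated, never raw data. The SHAP/LIME step is indicated by a comment rather than implemented, and all data are synthetic:

```python
# Sketch: FedAvg over per-site logistic-regression failure predictors.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=20):
    """Plain logistic-regression gradient descent on one site's private data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return w

d, n_sites = 8, 4
sites = [(rng.normal(size=(200, d)), rng.integers(0, 2, 200)) for _ in range(n_sites)]

w_global = np.zeros(d)
for round_ in range(10):                        # federated rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)        # FedAvg aggregation; only weights move

print(w_global.round(3))
# In the paper's setting, SHAP or LIME would be applied to the aggregated model
# here to explain individual failure predictions to operators.
```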

20 pages, 1031 KiB  
Article
Evaluating a Hybrid LLM Q-Learning/DQN Framework for Adaptive Obstacle Avoidance in Embedded Robotics
by Rihem Farkh, Ghislain Oudinet and Thibaut Deleruyelle
AI 2025, 6(6), 115; https://doi.org/10.3390/ai6060115 - 4 Jun 2025
Cited by 1 | Viewed by 1406
Abstract
This paper introduces a pioneering hybrid framework that integrates Q-learning/deep Q-network (DQN) with a locally deployed large language model (LLM) to enhance obstacle avoidance in embedded robotic systems. The STM32WB55RG microcontroller handles real-time decision-making using sensor data, while a Raspberry Pi 5 computer runs a quantized TinyLlama LLM to dynamically refine navigation strategies. The LLM addresses traditional Q-learning limitations, such as slow convergence and poor adaptability, by analyzing action histories and optimizing decision-making policies in complex, dynamic environments. A selective triggering mechanism ensures efficient LLM intervention, minimizing computational overhead. Experimental results demonstrate significant improvements over standalone Q-learning/DQN, including up to 41% higher deadlock recovery (81% for Q-learning + LLM vs. 40% standalone), up to 34% faster time to goal (38 s vs. 58 s), and up to 14% lower collision rates (11% vs. 25%). This novel approach presents a solution for scalable, adaptive navigation in resource-constrained embedded robotics, with potential applications in logistics and healthcare. Full article
(This article belongs to the Section AI in Autonomous Systems)
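The selective-triggering idea can be sketched as follows: a tabular Q-learning agent acts normally, and the LLM is consulted only when the recent history suggests a deadlock. The function ask_llm_for_hint and the deadlock heuristic are illustrative stubs, not the paper's implementation:

```python
# Sketch: Q-learning with an LLM consulted only on suspected deadlocks.
import random
from collections import defaultdict, deque

ACTIONS = ["forward", "left", "right", "backward"]
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
history = deque(maxlen=8)                        # recent (state, action) pairs

def ask_llm_for_hint(history):
    """Stub for the quantized TinyLlama query; returns a suggested action."""
    return random.choice(ACTIONS)                # placeholder policy refinement

def deadlocked(history):
    # Trigger heuristic: the agent keeps revisiting the same few states.
    states = [s for s, _ in history]
    return len(history) == history.maxlen and len(set(states)) <= 2

def choose_action(state, eps=0.1):
    if deadlocked(history):
        return ask_llm_for_hint(history)         # selective LLM intervention
    if random.random() < eps:
        return random.choice(ACTIONS)            # epsilon-greedy exploration
    return max(Q[state], key=Q[state].get)

def update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy usage: one decision/update step with grid-cell states.
s = (0, 0)
a = choose_action(s)
update(s, a, reward=-1.0, next_state=(0, 1))
history.append((s, a))
```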

26 pages, 9817 KiB  
Article
FASTSeg3D: A Fast, Efficient, and Adaptive Ground Filtering Algorithm for 3D Point Clouds in Mobile Sensing Applications
by Daniel Ayo Oladele, Elisha Didam Markus and Adnan M. Abu-Mahfouz
AI 2025, 6(5), 97; https://doi.org/10.3390/ai6050097 - 7 May 2025
Viewed by 925
Abstract
Background: Accurate ground segmentation in 3D point clouds is critical for robotic perception, enabling robust navigation, object detection, and environmental mapping. However, existing methods struggle with over-segmentation, under-segmentation, and computational inefficiency, particularly in dynamic or complex environments. Methods: This study proposes FASTSeg3D, a novel two-stage algorithm for real-time ground filtering. First, Range Elevation Estimation (REE) organizes point clouds efficiently while filtering outliers. Second, adaptive Window-Based Model Fitting (WBMF) addresses over-segmentation by dynamically adjusting to local geometric features. The method was rigorously evaluated in four challenging scenarios: large objects (vehicles), pedestrians, small debris/vegetation, and rainy conditions across day/night cycles. Results: FASTSeg3D achieved state-of-the-art performance, with a mean error of <7%, error sensitivity < 10%, and IoU scores > 90% in all scenarios except extreme cases (rainy/night small-object conditions). It maintained a processing speed 10× faster than comparable methods, enabling real-time operation. The algorithm also outperformed benchmarks in F1 score (avg. 94.2%) and kappa coefficient (avg. 0.91), demonstrating superior robustness. Conclusions: FASTSeg3D addresses critical limitations in ground segmentation by balancing speed and accuracy, making it ideal for real-time robotic applications in diverse environments. Its computational efficiency and adaptability to edge cases represent a significant advancement for autonomous systems. Full article
(This article belongs to the Section AI in Autonomous Systems)
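For intuition, here is a much-simplified elevation-based ground filter in the spirit of the REE stage: points are binned into a 2D grid and marked as ground when their height stays close to the cell's minimum. The grid size and threshold are arbitrary, and the adaptive WBMF stage is omitted entirely:

```python
# Sketch: grid-based, elevation-threshold ground filtering of a 3D point cloud.
import numpy as np

def ground_mask(points, cell=2.0, h_thresh=0.25):
    """points: (N, 3) array of x, y, z in the sensor frame."""
    ij = np.floor(points[:, :2] / cell).astype(int)          # 2D grid cell per point
    keys = ij[:, 0] * 100000 + ij[:, 1]                      # flatten cell index
    order = np.argsort(keys)
    mask = np.zeros(len(points), dtype=bool)
    # Split the sorted indices into one group per occupied cell.
    bounds = np.unique(keys[order], return_index=True)[1][1:]
    for k in np.split(order, bounds):
        z = points[k, 2]
        mask[k] = z < z.min() + h_thresh                     # near the cell's lowest point
    return mask

pts = np.random.default_rng(2).uniform([-20, -20, -2], [20, 20, 2], size=(5000, 3))
print(ground_mask(pts).mean())    # fraction of points labeled ground
```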

22 pages, 6988 KiB  
Article
A Hybrid and Modular Integration Concept for Anomaly Detection in Industrial Control Systems
by Christian Goetz and Bernhard G. Humm
AI 2025, 6(5), 91; https://doi.org/10.3390/ai6050091 - 27 Apr 2025
Viewed by 1161
Abstract
Effective anomaly detection is essential for realizing modern and secure industrial control systems. However, the direct integration of anomaly detection within such a system is complex due to the wide variety of hardware used, different communication protocols, and given industrial requirements. Many components of an industrial control system allow direct integration, while others are designed as closed systems or do not have the required performance. At the same time, the effective usage of available resources and the sustainable use of energy are more important than ever for modern industry. Therefore, in this paper, we present a modular and hybrid concept that enables the integration of efficient and effective anomaly detection while optimising the use of available resources under consideration of industrial requirements. Because of the modular and hybrid properties, many functionalities can be outsourced to the respective devices, and at the same time, additional hardware can be integrated where required. The resulting flexibility allows the seamless integration of complete anomaly detection into existing and legacy systems without the need for expensive centralised or cloud-based solutions. Through a detailed evaluation within an industrial unit, we demonstrate the performance and versatility of our concept. Full article
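One way to picture the modular/hybrid idea is a common scoring interface, where cheap checks run on the device itself and heavier models hosted on added hardware are consulted only on escalation; the class and method names below are assumptions for illustration, not the paper's API:

```python
# Sketch: pluggable anomaly-detection modules behind one interface.
from abc import ABC, abstractmethod

class AnomalyModule(ABC):
    @abstractmethod
    def score(self, sample: dict) -> float:
        """Return an anomaly score for one sensor sample (higher = more anomalous)."""

class ThresholdModule(AnomalyModule):            # cheap check, runs on the device itself
    def __init__(self, key, lo, hi):
        self.key, self.lo, self.hi = key, lo, hi
    def score(self, sample):
        v = sample[self.key]
        return 0.0 if self.lo <= v <= self.hi else 1.0

class Pipeline:
    """Composes modules; heavy modules are consulted only when cheap ones fire."""
    def __init__(self, cheap, heavy, escalate_at=0.5):
        self.cheap, self.heavy, self.escalate_at = cheap, heavy, escalate_at
    def score(self, sample):
        s = max(m.score(sample) for m in self.cheap)
        if s >= self.escalate_at and self.heavy:
            s = max(s, max(m.score(sample) for m in self.heavy))
        return s

pipe = Pipeline([ThresholdModule("temp_c", 10, 70)], heavy=[])
print(pipe.score({"temp_c": 85.0}))              # -> 1.0, would trigger escalation
```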

21 pages, 31401 KiB  
Article
BEV-CAM3D: A Unified Bird’s-Eye View Architecture for Autonomous Driving with Monocular Cameras and 3D Point Clouds
by Daniel Ayo Oladele, Elisha Didam Markus and Adnan M. Abu-Mahfouz
AI 2025, 6(4), 82; https://doi.org/10.3390/ai6040082 - 18 Apr 2025
Viewed by 2373
Abstract
Three-dimensional (3D) visual perception is pivotal for understanding surrounding environments in applications such as autonomous driving and mobile robotics. While LiDAR-based models dominate due to accurate depth sensing, their cost and sparse outputs have driven interest in camera-based systems. However, challenges like cross-domain degradation and depth estimation inaccuracies persist. This paper introduces BEVCAM3D, a unified bird’s-eye view (BEV) architecture that fuses monocular cameras and LiDAR point clouds to overcome single-sensor limitations. BEVCAM3D integrates a deformable cross-modality attention module for feature alignment and a fast ground segmentation algorithm to reduce computational overhead by 40%. Evaluated on the nuScenes dataset, BEVCAM3D achieves state-of-the-art performance, with a 73.9% mAP and a 76.2% NDS, outperforming existing LiDAR-camera fusion methods like SparseFusion (72.0% mAP) and IS-Fusion (73.0% mAP). Notably, it excels in detecting pedestrians (91.0% AP) and traffic cones (89.9% AP), addressing the class imbalance in autonomous driving scenarios. The framework supports real-time inference at 11.2 FPS with an EfficientDet-B3 backbone and demonstrates robustness under low-light conditions (62.3% nighttime mAP). Full article
(This article belongs to the Section AI in Autonomous Systems)
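A toy sketch of cross-modality attention over BEV grids, with standard multi-head attention standing in for the paper's deformable module; shapes and dimensions are illustrative:

```python
# Sketch: camera BEV tokens attend to LiDAR BEV tokens, then residual fusion.
import torch
import torch.nn as nn

B, H, W, C = 2, 32, 32, 64                      # batch, BEV height/width, channels
cam_bev = torch.randn(B, H * W, C)              # camera-branch BEV tokens
lidar_bev = torch.randn(B, H * W, C)            # LiDAR-branch BEV tokens

attn = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)
fused, _ = attn(query=cam_bev, key=lidar_bev, value=lidar_bev)
fused = fused + cam_bev                         # residual fusion of the two streams
print(fused.shape)                              # torch.Size([2, 1024, 64])
```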

17 pages, 2758 KiB  
Article
History-Aware Multimodal Instruction-Oriented Policies for Navigation Tasks
by Renas Mukhametzianov and Hidetaka Nambo
AI 2025, 6(4), 75; https://doi.org/10.3390/ai6040075 - 11 Apr 2025
Viewed by 763
Abstract
The rise of large-scale language models and multimodal transformers has enabled instruction-based policies, such as vision-and-language navigation. To leverage their general world knowledge, we propose multimodal annotations for action options and support selection from a dynamic, describable action space. Our framework employs a multimodal transformer that processes front-facing camera images, light detection and ranging (LiDAR) point clouds, and tasks given as textual instructions to produce a history-aware decision policy for mobile robot navigation. Our approach leverages a pretrained vision–language encoder and integrates it with a custom causal generative pretrained transformer (GPT) decoder to predict action sequences within a state–action history. We propose a trainable attention score mechanism to efficiently select the most suitable action from a variable set of possible options. Action options are text–image pairs and encoded using the same multimodal encoder employed for environment states. This approach of annotating and dynamically selecting actions is applicable to broader multidomain decision-making tasks. We compared two baseline models, ViLT (vision-and-language transformer) and FLAVA (foundational language and vision alignment), and found that FLAVA achieves superior performance within the constraints of 8 GB video memory usage in the training phase. Experiments were conducted in both simulated and real-world environments using our custom datasets for instructed task completion episodes, demonstrating strong prediction accuracy. These results highlight the potential of multimodal, dynamic action spaces for instruction-based robot navigation and beyond. Full article
(This article belongs to the Section AI in Autonomous Systems)
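The attention-based action selection can be sketched as a learned compatibility score between a state embedding and a variable-length set of option embeddings; both encoders are stubbed with random tensors here, whereas the paper derives both sides from the same pretrained multimodal encoder:

```python
# Sketch: score a variable set of action-option embeddings against a state embedding.
import torch
import torch.nn as nn

d = 256
state_emb = torch.randn(1, d)                   # history-aware state embedding (stub)
action_embs = torch.randn(5, d)                 # one embedding per text-image option (stub)

class ActionScorer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.proj_s = nn.Linear(d, d)           # trainable projections on both sides
        self.proj_a = nn.Linear(d, d)
    def forward(self, s, a):
        # Scaled dot-product score between projected state and each option.
        return (self.proj_s(s) @ self.proj_a(a).T) / d ** 0.5

scores = ActionScorer(d)(state_emb, action_embs)    # shape (1, 5)
print(scores.argmax(dim=-1))                        # index of the chosen action
```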

23 pages, 3001 KiB  
Review
A Bibliometric Analysis on Artificial Intelligence in the Production Process of Small and Medium Enterprises
by Federico Briatore, Marco Tullio Mosca, Roberto Nicola Mosca and Mattia Braggio
AI 2025, 6(3), 54; https://doi.org/10.3390/ai6030054 - 12 Mar 2025
Cited by 1 | Viewed by 1397
Abstract
Industry 4.0 represents the main paradigm currently bringing great innovation in the field of automation and data exchange among production technologies, according to the principles of interoperability, virtualization, decentralization and production flexibility. The Fourth Industrial Revolution is driven by structural changes in the manufacturing sector, such as the demand for customized products, market volatility and sustainability goals, and by the integration of artificial intelligence and Big Data. This work analyzes, from a bibliometric point of view and with no time limitation, journal papers indexed in Scopus on the application of AI in SMEs, which are crucial elements in the industrial and economic fabric of many countries. Despite the positive effects obtained in large organizations, the adoption of modern technologies, particularly AI, can be challenging for SMEs due to the intrinsic structure of this type of enterprise. Full article

28 pages, 4142 KiB  
Article
IntelliGrid AI: A Blockchain and Deep-Learning Framework for Optimized Home Energy Management with V2H and H2V Integration
by Sami Binyamin and Sami Ben Slama
AI 2025, 6(2), 34; https://doi.org/10.3390/ai6020034 - 12 Feb 2025
Cited by 2 | Viewed by 1443
Abstract
The integration of renewable energy sources and electric vehicles has become a focal point for industries and academia due to its profound economic, environmental, and technological implications. These developments call for a robust intelligent home energy management system (IHEMS) to optimize energy utilization, enhance transaction security, and ensure grid stability. To this end, this paper develops IntelliGrid AI, an advanced system that integrates blockchain technology, deep learning (DL), and dual-energy transmission capabilities: vehicle to home (V2H) and home to vehicle (H2V). The proposed approach dynamically optimizes household energy flows, deploying real-time data and adaptive algorithms to balance energy demand and supply. Blockchain technology ensures the security and integrity of energy transactions while facilitating decentralized peer-to-peer (P2P) energy trading. The core of IntelliGrid AI is an advanced Q-learning algorithm that intelligently allocates energy resources. V2H enables electric vehicles to power households during peak periods, reducing the strain on the grid. Conversely, H2V technology facilitates the efficient charging of electric cars during peak hours, contributing to grid stability and efficient energy utilization. Case studies conducted in Tunisia validate the system’s performance, showing a 20% reduction in energy costs and significant improvements in transaction efficiency. These results highlight the practical benefits of integrating V2H and H2V technologies into innovative energy management frameworks. Full article
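A toy tabular Q-learning sketch of the dispatch decision (grid supply vs. V2H discharge vs. H2V charging) under a simplified peak/off-peak tariff; the states, prices, and rewards are placeholders rather than the paper's model:

```python
# Sketch: Q-learning over (hour, battery state) for household energy dispatch.
import numpy as np

rng = np.random.default_rng(3)
ACTIONS = ["grid", "v2h", "h2v"]                # supply source / EV role per hour
n_hours, n_soc = 24, 5                          # time-of-day x battery state-of-charge bins
Q = np.zeros((n_hours, n_soc, len(ACTIONS)))

def step(hour, soc, a):
    price = 1.0 if 17 <= hour <= 21 else 0.4    # toy peak vs. off-peak tariff
    if ACTIONS[a] == "v2h":                     # discharge EV into the home
        return max(soc - 1, 0), -0.1 * price
    if ACTIONS[a] == "h2v":                     # charge EV from the home/grid
        return min(soc + 1, n_soc - 1), -1.2 * price
    return soc, -price                          # buy everything from the grid

alpha, gamma, eps = 0.3, 0.95, 0.1
for episode in range(2000):
    soc = rng.integers(n_soc)
    for h in range(n_hours):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else Q[h, soc].argmax()
        soc2, r = step(h, soc, a)
        nxt = Q[(h + 1) % n_hours, soc2].max()
        Q[h, soc, a] += alpha * (r + gamma * nxt - Q[h, soc, a])
        soc = soc2

print(ACTIONS[Q[19, 4].argmax()])               # likely "v2h" at peak with a full battery
```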

20 pages, 892 KiB  
Article
TRust Your GENerator (TRYGEN): Enhancing Out-of-Model Scope Detection
by Václav Diviš, Bastian Spatz and Marek Hrúz
AI 2024, 5(4), 2127-2146; https://doi.org/10.3390/ai5040104 - 30 Oct 2024
Cited by 2 | Viewed by 1058
Abstract
Recent research has drawn attention to the ambiguity surrounding the definition and learnability of Out-of-Distribution recognition. Although the original problem remains unsolved, the term “Out-of-Model Scope” detection offers a clearer perspective. The ability to detect Out-of-Model Scope inputs is particularly beneficial in safety-critical applications such as autonomous driving or medicine. By detecting Out-of-Model Scope situations, the system’s robustness is enhanced and it is prevented from operating in unknown and unsafe scenarios. In this paper, we propose a novel approach for Out-of-Model Scope detection that integrates three sources of information: (1) the original input, (2) its latent feature representation extracted by an encoder, and (3) a synthesized version of the input generated from its latent representation. We demonstrate the effectiveness of combining original and synthetically generated inputs to defend against adversarial attacks in the computer vision domain. Our method, TRust Your GENerator (TRYGEN), achieves results comparable to those of other state-of-the-art methods and allows any encoder to be integrated into our pipeline in a plug-and-train fashion. Through our experiments, we evaluate which combinations of the encoder’s features are most effective for discovering Out-of-Model Scope samples and highlight the importance of a compact feature space for training the generator. Full article
(This article belongs to the Section AI in Autonomous Systems)
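Conceptually, the three information sources can be combined into a single score, as in this sketch where a stub autoencoder stands in for the encoder and generator, and reconstruction error plus a latent-magnitude term flags Out-of-Model Scope inputs; the weighting is arbitrary:

```python
# Sketch: OoMS score from (1) the input, (2) its latent code, (3) its reconstruction.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))                   # encoder (stub)
gen = nn.Sequential(nn.Linear(32, 28 * 28), nn.Unflatten(1, (1, 28, 28)))  # generator (stub)

def ooms_score(x):
    z = enc(x)                                  # (2) latent feature representation
    x_hat = gen(z)                              # (3) synthesized version of the input
    recon_err = (x - x_hat).pow(2).mean(dim=(1, 2, 3))   # disagreement with (1)
    z_norm = z.norm(dim=1)                      # latent magnitude as an auxiliary cue
    return recon_err + 0.01 * z_norm            # higher score = more likely OoMS

x = torch.rand(4, 1, 28, 28)                    # batch of images (placeholder data)
print(ooms_score(x))
```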

12 pages, 999 KiB  
Perspective
Collaborative Robots with Cognitive Capabilities for Industry 4.0 and Beyond
by Giulio Sandini, Alessandra Sciutti and Pietro Morasso
AI 2024, 5(4), 1858-1869; https://doi.org/10.3390/ai5040092 - 9 Oct 2024
Cited by 2 | Viewed by 2079
Abstract
The robots that entered the manufacturing sector in the second and third Industrial Revolutions (IR2 and IR3) were designed for carrying out predefined routines without physical interaction with humans. In contrast, IR4* robots (i.e., robots since IR4 and beyond) are supposed to interact with humans in a cooperative way for enhancing flexibility, autonomy, and adaptability, thus dramatically improving productivity. However, human–robot cooperation implies cognitive capabilities that the cooperative robots (CoBots) in the market do not have. The common wisdom is that such a cognitive lack can be filled in a straightforward way by integrating well-established ICT technologies with new AI technologies. This short paper expresses the view that this approach is not promising and suggests a different one based on artificial cognition rather than artificial intelligence, founded on concepts of embodied cognition, developmental robotics, and social robotics. We suggest giving these IR4* robots designed according to such principles the name CoCoBots. The paper also addresses the ethical problems that can be raised in cases of critical emergencies. In normal operating conditions, CoCoBots and human partners, starting from individual evaluations, will routinely develop joint decisions on the course of action to be taken through mutual understanding and explanation. In case a joint decision cannot be reached and/or in the limited case that an emergency is detected and declared by top security levels, we suggest that the ultimate decision-making power, with the associated responsibility, should rest on the human side, at the different levels of the organized structure. Full article
(This article belongs to the Special Issue Intelligent Systems for Industry 4.0)

20 pages, 14487 KiB  
Article
Fault Classification of 3D-Printing Operations Using Different Types of Machine and Deep Learning Techniques
by Satish Kumar, Sameer Sayyad and Arunkumar Bongale
AI 2024, 5(4), 1759-1778; https://doi.org/10.3390/ai5040087 - 27 Sep 2024
Cited by 1 | Viewed by 2195
Abstract
Fused deposition modeling (FDM), a method of additive manufacturing (AM), comprises the extrusion of materials via a nozzle and the subsequent combining of the layers to create 3D-printed objects. FDM is a widely used method for 3D-printing objects since it is affordable, effective, and easy to use. Some defects such as poor infill, elephant foot, layer shift, and poor surface finish arise in FDM components at the printing stage due to variations in printing parameters such as printing speed or changes in nozzle or bed temperature. Proper fault classification is required to identify the cause of faulty products. In this work, multi-sensory data are gathered using vibration, current, temperature, and sound sensors. Data acquisition is performed with a National Instruments (NI) Data Acquisition System (DAQ), which provides synchronous multi-sensory data for model training. To induce faults, the data are captured under different conditions, such as variations in printing speed, temperature, and jerk during printing. The collected data are used to train machine learning (ML) and deep learning (DL) classification models to classify the variations in printing parameters. The ML models k-nearest neighbor (KNN), decision tree (DT), extra trees (ET), and random forest (RF), with a convolutional neural network (CNN) as the DL model, are used to classify the variable printing parameters. Among the ML models, the RF classifier shows a classification accuracy of around 91%, whereas the CNN model shows good classification performance, with accuracy ranging from 92% to 94% under variable operating conditions. Full article
(This article belongs to the Special Issue Intelligent Systems for Industry 4.0)
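A minimal sketch of the ML branch: per-sensor statistical features from simulated multi-sensory windows feed a random-forest classifier over printing-parameter classes; the data, labels, and features are placeholders:

```python
# Sketch: statistical features from multi-sensor windows -> random-forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_windows, n_sensors, win = 400, 4, 512
raw = rng.normal(size=(n_windows, n_sensors, win))   # stand-in vibration/current/temp/sound
y = rng.integers(0, 3, n_windows)                    # e.g. speed / temperature / jerk class

# Per-sensor mean, std, and RMS as a compact feature vector (12 features total).
feats = np.concatenate(
    [raw.mean(-1), raw.std(-1), np.sqrt((raw ** 2).mean(-1))], axis=1
)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(rf, feats, y, cv=5).mean())    # chance-level on random data
```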

31 pages, 1582 KiB  
Article
Recent Advances in 3D Object Detection for Self-Driving Vehicles: A Survey
by Oluwajuwon A. Fawole and Danda B. Rawat
AI 2024, 5(3), 1255-1285; https://doi.org/10.3390/ai5030061 - 25 Jul 2024
Cited by 8 | Viewed by 7466
Abstract
The development of self-driving or autonomous vehicles has led to significant advancements in 3D object detection technologies, which are critical for the safety and efficiency of autonomous driving. Despite recent advances, several challenges remain in sensor integration, handling sparse and noisy data, and ensuring reliable performance across diverse environmental conditions. This paper comprehensively surveys state-of-the-art 3D object detection techniques for autonomous vehicles, emphasizing the importance of multi-sensor fusion techniques and advanced deep learning models. Furthermore, we present key areas for future research, including enhancing sensor fusion algorithms, improving computational efficiency, and addressing ethical, security, and privacy concerns. The integration of these technologies into real-world applications for autonomous driving is presented by highlighting potential benefits and limitations. We also present a side-by-side comparison of different techniques in a tabular form. Through a comprehensive review, this paper aims to provide insights into the future directions of 3D object detection and its impact on the evolution of autonomous driving. Full article
(This article belongs to the Section AI in Autonomous Systems)
