Search Results (10,214)

Search Parameters:
Keywords = intelligent machines

36 pages, 1840 KiB  
Review
Enabling Intelligent Industrial Automation: A Review of Machine Learning Applications with Digital Twin and Edge AI Integration
by Mohammad Abidur Rahman, Md Farhan Shahrior, Kamran Iqbal and Ali A. Abushaiba
Automation 2025, 6(3), 37; https://doi.org/10.3390/automation6030037 - 5 Aug 2025
Abstract
The integration of machine learning (ML) into industrial automation is fundamentally reshaping how manufacturing systems are monitored, inspected, and optimized. By applying machine learning to real-time sensor data and operational histories, advanced models enable proactive fault prediction, intelligent inspection, and dynamic process control—directly enhancing system reliability, product quality, and efficiency. This review explores the transformative role of ML across three key domains: Predictive Maintenance (PdM), Quality Control (QC), and Process Optimization (PO). It also analyzes how Digital Twin (DT) and Edge AI technologies are expanding the practical impact of ML in these areas. Our analysis reveals a marked rise in deep learning, especially convolutional and recurrent architectures, with a growing shift toward real-time, edge-based deployment. The paper also catalogs the datasets used, the tools and sensors employed for data collection, and the industrial software platforms supporting ML deployment in practice. This review not only maps the current research terrain but also highlights emerging opportunities in self-learning systems, federated architectures, explainable AI, and themes such as self-adaptive control, collaborative intelligence, and autonomous defect diagnosis—indicating that ML is poised to become deeply embedded across the full spectrum of industrial operations in the coming years. Full article
(This article belongs to the Section Industrial Automation and Process Control)
42 pages, 7526 KiB  
Review
Novel Nanomaterials for Developing Bone Scaffolds and Tissue Regeneration
by Nazim Uddin Emon, Lu Zhang, Shelby Dawn Osborne, Mark Allen Lanoue, Yan Huang and Z. Ryan Tian
Nanomaterials 2025, 15(15), 1198; https://doi.org/10.3390/nano15151198 - 5 Aug 2025
Abstract
Nanotechnologies are driving a rapid paradigm shift in hard and soft bone tissue regeneration (BTR) by affording unprecedented control over the nanoscale structure and chemistry of biocompatible materials used to regenerate the intricate architecture and functional adaptability of bone. This review focuses on the transformative analyses and prospects of current and next-generation nanomaterials in designing bioactive bone scaffolds, emphasizing hierarchical architecture, mechanical resilience, and regenerative precision. In particular, it elucidates the innovative findings, new capabilities, unmet challenges, and possible future opportunities associated with biocompatible inorganic ceramics (e.g., phosphates, metallic oxides) and United States Food and Drug Administration (USFDA)-approved synthetic polymers, including their nanoscale structures. Furthermore, this review surveys newly available approaches for achieving customized porosity, mechanical strength, and accelerated bioactivity in constructing an optimized nanomaterial-based scaffold. Numerous strategies, including three-dimensional bioprinting, electrospinning, and precise nanomaterial (NM) fabrication, are now well established for achieving scientific precision in BTR engineering. Contemporary research continues to decode pathways for the spatial and temporal release of osteoinductive agents to enhance targeted therapy and accelerate healing. Additionally, material designs that successfully integrate osteoinductive and osteoconductive agents with contemporary technologies will bring substantial advances to this field. Furthermore, machine learning (ML) and artificial intelligence (AI) can help decode the current complexities of material design for BTR, although these methods call for an in-depth understanding of bone composition, its relationships to and impacts on biochemical processes, the distribution of stem cells on the matrix, and NM functionalization strategies for better scaffold development. Overall, this review integrates important technological progress with ethical considerations, aiming for a future in which nanotechnology-facilitated bone regeneration is supported by enhanced functionality, safety, inclusivity, and long-term environmental responsibility. Adopting specialized research designs while upholding ethical standards will help resolve the challenges and questions the field presently faces. Full article
(This article belongs to the Special Issue Applications of Functional Nanomaterials in Biomedical Science)

15 pages, 4422 KiB  
Article
Advanced Deep Learning Methods to Generate and Discriminate Fake Images of Egyptian Monuments
by Daniyah Alaswad and Mohamed A. Zohdy
Appl. Sci. 2025, 15(15), 8670; https://doi.org/10.3390/app15158670 - 5 Aug 2025
Abstract
Artificial intelligence technologies, particularly machine learning and computer vision, are increasingly being utilized to preserve, restore, and create immersive virtual experiences with cultural artifacts and sites, thus aiding in conserving cultural heritage and making it accessible to a global audience. This paper examines the performance of Generative Adversarial Networks (GANs), especially the Style-Based Generator Architecture (StyleGAN), as a deep learning approach for producing realistic images of Egyptian monuments. We used Sigmoid loss for Language–Image Pre-training (SigLIP) as an image–text alignment system to guide monument generation through semantic elements. We also studied truncation methods to control noise in the generated images and to identify the most effective parameter settings for balancing architectural fidelity against output diversity. An improved discriminator design that combined noise addition with squeeze-and-excitation blocks and a modified MinibatchStdLayer improved Fréchet Inception Distance performance by 27.5% over the original discriminator models. Moreover, differential evolution for latent-space optimization reduced alignment errors in monument-specific generation tasks by about 15%. We evaluated truncation values from 0.1 to 1.0 and found that the 0.4–0.7 range offered the best trade-off, preserving architectural accuracy while retaining diverse architectural elements. Our findings indicate that specific model optimization strategies produce superior outcomes by creating higher-quality and historically accurate representations of diverse Egyptian monuments. Thus, the developed technology may be instrumental in generating educational and archaeological visualization assets while adding virtual tourism capabilities. Full article
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)
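
The truncation behavior examined in this paper follows the standard StyleGAN truncation trick: latent codes are pulled toward the average latent, trading diversity for fidelity. A minimal NumPy sketch of that mechanism, with hypothetical dimensions and not the authors' code:

```python
import numpy as np

def truncate_latents(w, w_avg, psi=0.55):
    """StyleGAN-style truncation trick: interpolate each latent toward the
    average latent w_avg. psi=1.0 keeps full diversity; psi -> 0 collapses
    every sample onto the distribution average. The paper reports psi in
    [0.4, 0.7] as the best trade-off."""
    return w_avg + psi * (w - w_avg)

# Hypothetical 512-dimensional W-space latents for a batch of 8 samples.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 512))
w_avg = w.mean(axis=0)          # stand-in for the generator's tracked average
w_trunc = truncate_latents(w, w_avg, psi=0.55)
print(w_trunc.shape)            # (8, 512)
```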

22 pages, 2669 KiB  
Article
Data-Driven Fault Diagnosis for Rotating Industrial Paper-Cutting Machinery
by Luca Viale, Alessandro Paolo Daga, Ilaria Ronchi and Salvatore Caronia
Machines 2025, 13(8), 688; https://doi.org/10.3390/machines13080688 - 5 Aug 2025
Abstract
Machine learning and artificial intelligence have transformed fault detection and maintenance strategies for industrial machinery. This study applies well-established data-driven techniques to a rarely explored industrial application—the condition monitoring of high-precision paper cutting machines—enhancing condition-based maintenance to improve operational efficiency, safety, and cost-effectiveness. A key element of the proposed approach is the integration of an infrared pyrometer into vibration monitoring, complementing accelerometer data used to evaluate the state of health of the machinery. Unlike traditional fault detection studies that focus on extreme degradation states, this work successfully identifies subtle deviations from optimal operating conditions that even expert technicians struggle to detect. Building on a feasibility study conducted with Tecnau SRL, a comprehensive diagnostic system suitable for industrial deployment is developed. Endurance tests pave the way for continuous monitoring under various operating conditions, enabling real-time industrial diagnostic applications. Multi-scale signal analysis highlights the significance of transient and steady-state phase detection, improving the effectiveness of real-time monitoring strategies. Despite the physical similarity of the classified states, simple time-series statistics combined with machine learning algorithms demonstrate high sensitivity to early-stage deviations, confirming the reliability of the approach. Additionally, a systematic analysis of downgraded acquisition-system specifications identifies cost-effective sensor configurations, ensuring the feasibility of industrial implementation. Full article
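
As a rough illustration of the "simple time-series statistics combined with machine learning algorithms" pipeline the abstract describes, the sketch below computes window statistics from synthetic accelerometer signals and trains a classifier; the window length, feature set, and variance shift are assumptions for illustration only:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=1024):
    """Split a vibration signal into windows and compute simple
    time-series statistics (mean, std, RMS, kurtosis) per window."""
    n = len(signal) // win
    wins = signal[:n * win].reshape(n, win)
    return np.column_stack([
        wins.mean(axis=1),
        wins.std(axis=1),
        np.sqrt((wins ** 2).mean(axis=1)),   # RMS
        kurtosis(wins, axis=1),
    ])

# Synthetic stand-ins for "healthy" vs. "slightly degraded" accelerometer data.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.00, 200 * 1024)
degraded = rng.normal(0.0, 1.08, 200 * 1024)   # subtle variance shift
X = np.vstack([window_features(healthy), window_features(degraded)])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```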

42 pages, 14160 KiB  
Article
Automated Vehicle Classification and Counting in Toll Plazas Using LiDAR-Based Point Cloud Processing and Machine Learning Techniques
by Alexander Campo-Ramírez, Eduardo F. Caicedo-Bravo and Bladimir Bacca-Cortes
Future Transp. 2025, 5(3), 105; https://doi.org/10.3390/futuretransp5030105 - 5 Aug 2025
Abstract
This paper presents the design and implementation of a high-precision vehicle detection and classification system for toll stations on national highways in Colombia, leveraging LiDAR-based 3D point cloud processing and supervised machine learning. The system integrates a multi-sensor architecture, including a LiDAR scanner, high-resolution cameras, and Doppler radars, with an embedded computing platform for real-time processing and on-site inference. The methodology covers data preprocessing, feature extraction, descriptor encoding, and classification using Support Vector Machines. The system supports eight vehicular categories established by national regulations, which present significant challenges due to the need to differentiate categories by axle count, the presence of lifted axles, and vehicle usage. These distinctions affect toll fees and require a classification strategy beyond geometric profiling. The system achieves 89.9% overall classification accuracy, including 96.2% for light vehicles and 99.0% for vehicles with three or more axles. It also incorporates license plate recognition for complete vehicle traceability. The system was deployed at an operational toll station and has run continuously under real traffic and environmental conditions for over eighteen months. This framework represents a robust, scalable, and strategic technological component within Intelligent Transportation Systems and contributes to data-driven decision-making for road management and toll operations. Full article
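
A schematic of the descriptor-plus-SVM stage the abstract describes, using entirely hypothetical geometric features (length, height, axle count, lifted-axle flag) in place of the paper's point-cloud descriptors:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-vehicle descriptors: [length_m, height_m, axle_count, lifted_axle]
rng = np.random.default_rng(2)
light = np.column_stack([rng.normal(4.5, 0.4, 300), rng.normal(1.6, 0.2, 300),
                         np.full(300, 2), np.zeros(300)])
heavy = np.column_stack([rng.normal(12.0, 1.5, 300), rng.normal(3.8, 0.3, 300),
                         rng.integers(3, 6, 300), rng.integers(0, 2, 300)])
X = np.vstack([light, heavy])
y = np.array([0] * 300 + [1] * 300)   # 0 = light vehicle, 1 = three or more axles

# Standardize the descriptors, then fit an RBF Support Vector Machine.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, y)
print(svm.score(X, y))
```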

17 pages, 1306 KiB  
Article
Rapid Salmonella Serovar Classification Using AI-Enabled Hyperspectral Microscopy with Enhanced Data Preprocessing and Multimodal Fusion
by MeiLi Papa, Siddhartha Bhattacharya, Bosoon Park and Jiyoon Yi
Foods 2025, 14(15), 2737; https://doi.org/10.3390/foods14152737 - 5 Aug 2025
Abstract
Salmonella serovar identification typically requires multiple enrichment steps using selective media, consuming considerable time and resources. This study presents a rapid, culture-independent method leveraging artificial intelligence (AI) to classify Salmonella serovars from rich hyperspectral microscopy data. Five serovars (Enteritidis, Infantis, Kentucky, Johannesburg, 4,[5],12:i:-) were analyzed from samples prepared using only sterilized de-ionized water. Hyperspectral data cubes were collected to generate single-cell spectra and RGB composite images representing the full microscopy field. Data analysis involved two parallel branches followed by multimodal fusion. The spectral branch compared manual feature selection with data-driven feature extraction via principal component analysis (PCA), followed by classification using conventional machine learning models (i.e., k-nearest neighbors, support vector machine, random forest, and multilayer perceptron). The image branch employed a convolutional neural network (CNN) to extract spatial features directly from images without predefined morphological descriptors. Using PCA-derived spectral features, the highest-performing machine learning model achieved 81.1% accuracy, outperforming manual feature selection. CNN-based classification using image features alone yielded lower accuracy (57.3%) in this serovar-level discrimination. In contrast, a multimodal fusion model combining spectral and image features improved accuracy to 82.4% on the unseen test set while reducing overfitting on the training set. This study demonstrates that AI-enabled hyperspectral microscopy with multimodal fusion can streamline Salmonella serovar identification workflows. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)
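
A toy sketch of the paper's spectral-plus-image fusion idea: PCA features from (synthetic) single-cell spectra are concatenated with stand-in CNN embeddings before classification. Dimensions, data, and labels are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 500
spectra = rng.normal(size=(n, 89))     # hypothetical single-cell spectra (89 bands)
cnn_feats = rng.normal(size=(n, 32))   # stand-in for CNN image-branch embeddings
y = rng.integers(0, 5, n)              # five serovar labels

# Spectral branch: data-driven feature extraction via PCA.
spec_feats = PCA(n_components=10).fit_transform(spectra)

# Multimodal fusion: concatenate spectral and image features, then classify.
X = np.hstack([spec_feats, cnn_feats])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```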

62 pages, 2440 KiB  
Article
Macroeconomic and Labor Market Drivers of AI Adoption in Europe: A Machine Learning and Panel Data Approach
by Carlo Drago, Alberto Costantiello, Marco Savorgnan and Angelo Leogrande
Economies 2025, 13(8), 226; https://doi.org/10.3390/economies13080226 - 5 Aug 2025
Abstract
This article investigates the macroeconomic and labor market conditions that shape the adoption of artificial intelligence (AI) technologies among large firms in Europe. Based on panel data econometrics and supervised machine learning techniques, we estimate how public health spending, access to credit, export activity, gross capital formation, inflation, openness to trade, and labor market structure influence the share of firms that adopt at least one AI technology. The research covers all 28 EU members between 2018 and 2023. We employ a set of robustness checks using a combination of fixed-effects, random-effects, and dynamic panel data specifications supported by clustering and supervised learning techniques. We find that AI adoption is linked to higher GDP per capita, healthcare spending, inflation, and openness to trade but lower levels of credit, exports, and capital formation. Labor markets with higher proportions of salaried work, service occupations, and self-employment are linked to AI diffusion, while unemployment and vulnerable work are detractors. Cluster analysis identifies groups of EU members with similar adoption patterns that are usually underpinned by stronger economic and institutional fundamentals. The results collectively suggest that AI diffusion is shaped not only by technological preparedness and capabilities to invest but also by inclusive macroeconomic conditions and equitable labor institutions. Targeted policy measures can accelerate the equitable adoption of AI technologies within the European industrial economy. Full article
(This article belongs to the Special Issue Digital Transformation in Europe: Economic and Policy Implications)
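
For readers unfamiliar with the econometric setup, a fixed-effects panel specification of the kind described can be sketched via least-squares dummy variables; the variables and data below are synthetic stand-ins, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic country-year panel standing in for the EU adoption data.
rng = np.random.default_rng(4)
countries = [f"C{i:02d}" for i in range(28)]
years = range(2018, 2024)
df = pd.DataFrame([(c, t) for c in countries for t in years],
                  columns=["country", "year"])
df["gdp_pc"] = rng.normal(30, 8, len(df))        # hypothetical regressors
df["health_exp"] = rng.normal(7, 2, len(df))
df["ai_adoption"] = (0.3 * df["gdp_pc"] + 0.5 * df["health_exp"]
                     + rng.normal(0, 5, len(df)))

# Fixed effects via least-squares dummy variables (entity and time dummies).
fe = smf.ols("ai_adoption ~ gdp_pc + health_exp + C(country) + C(year)",
             data=df).fit()
print(fe.params[["gdp_pc", "health_exp"]])
```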

30 pages, 825 KiB  
Review
Predictive Analytics in Human Resources Management: Evaluating AIHR’s Role in Talent Retention
by Ana Maria Căvescu and Nirvana Popescu
AppliedMath 2025, 5(3), 99; https://doi.org/10.3390/appliedmath5030099 - 5 Aug 2025
Abstract
This study explores the role of artificial intelligence (AI) in human resource management (HRM), with a focus on recruitment, employee retention, and performance optimization. Through a PRISMA-based systematic literature review, the paper examines machine learning algorithms, including XGBoost, SVM, random forest, and linear regression, applied to decision-making in employee-attrition prediction and talent management. The findings suggest that these technologies can automate HR processes, reduce bias, and personalize employee experiences. However, the implementation of AI in HRM also presents challenges, including data privacy concerns, algorithmic bias, and organizational resistance. To address these obstacles, the study highlights the importance of adopting ethical AI frameworks, ensuring transparency in decision-making, and developing effective integration strategies. Future research should focus on improving explainability, minimizing algorithmic bias, and promoting fairness in AI-driven HR practices. Full article

31 pages, 1583 KiB  
Article
Ensuring Zero Trust in GDPR-Compliant Deep Federated Learning Architecture
by Zahra Abbas, Sunila Fatima Ahmad, Adeel Anjum, Madiha Haider Syed, Saif Ur Rehman Malik and Semeen Rehman
Computers 2025, 14(8), 317; https://doi.org/10.3390/computers14080317 - 4 Aug 2025
Abstract
Deep Federated Learning (DFL) revolutionizes machine learning (ML) by enabling collaborative model training across diverse, decentralized data sources without direct data sharing, emphasizing user privacy and data sovereignty. Despite its potential, DFL's application in sensitive sectors is hindered by challenges in meeting rigorous standards like the GDPR, with traditional setups struggling to ensure compliance and maintain trust. Addressing these issues, our research introduces an innovative Zero Trust-based DFL architecture designed for GDPR-compliant systems, integrating advanced security and privacy mechanisms to ensure safe and transparent cross-node data processing. Our earlier work proposed the basic GDPR-compliant DFL architecture; here, we validate that architecture by formally verifying it with High-Level Petri Nets (HLPNs). This Zero Trust-based framework facilitates secure, decentralized model training without direct data sharing. Furthermore, we implemented a case study using the MNIST and CIFAR-10 datasets to compare the existing approach against the proposed Zero Trust-based DFL methodology. Our experiments confirmed its effectiveness in enhancing trust, complying with GDPR, and promoting DFL adoption in privacy-sensitive areas, achieving secure, ethical Artificial Intelligence (AI) with transparent and efficient data processing. Full article
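
The core DFL idea, collaborative training without direct data sharing, reduces to federated averaging of locally trained weights. A toy NumPy sketch of that baseline (omitting the paper's Zero Trust and GDPR machinery):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain linear-regression SGD on
    private data that never leaves the client."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(5)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):                      # four decentralized data owners
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ w_true + rng.normal(0, 0.1, 100)))

w_global = np.zeros(2)
for _ in range(20):                     # federated rounds
    local = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)   # FedAvg: server averages weights only
print(w_global)                         # approaches w_true
```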

14 pages, 1077 KiB  
Article
Research on Data-Driven Drilling Safety Grade Evaluation System
by Shuan Meng, Changhao Wang, Yingcao Zhou and Lidong Hou
Processes 2025, 13(8), 2469; https://doi.org/10.3390/pr13082469 - 4 Aug 2025
Abstract
With the in-depth application of digital transformation in the oil industry, data-driven methods provide a new technical path for drilling engineering safety evaluation. In this paper, a data-driven drilling safety level evaluation system is proposed. By integrating three-dimensional wellbore-trajectory visualization with a friction-torque prediction model, a dynamic and intelligent drilling risk evaluation framework is constructed. The Python platform is used to integrate geomechanical parameters, real-time drilling data, and historical working condition records, and machine learning algorithms are used to train the friction-torque prediction model, improving prediction accuracy. Based on the K-means clustering evaluation method, a three-tier drilling safety classification standard is established: Grade I (low risk) for friction (0–100 kN) and torque (0–10 kN·m), Grade II (medium risk) for friction (100–200 kN) and torque (10–20 kN·m), and Grade III (high risk) for friction (>200 kN) and torque (>20 kN·m). This enables intelligent quantitative evaluation of drilling difficulty. The system not only dynamically optimizes bottom-hole assembly (BHA) and drilling parameters but also continuously refines the evaluation model's accuracy through a data backtracking mechanism. This provides a reliable theoretical foundation and technical support for risk early warning, parameter optimization, and intelligent decision-making in drilling engineering. Full article
(This article belongs to the Section AI-Enabled Process Engineering)
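
The quoted three-tier standard maps directly onto a threshold rule. A minimal sketch, assuming friction in kN, torque in kN·m, and worst-indicator resolution for mixed cases (the abstract does not state how such cases are resolved):

```python
def drilling_safety_grade(friction_kn: float, torque_knm: float) -> str:
    """Classify drilling risk using the paper's three-tier standard.
    Grade I  (low risk):    friction 0-100 kN   and torque 0-10 kN·m
    Grade II (medium risk): friction 100-200 kN and torque 10-20 kN·m
    Grade III (high risk):  friction >200 kN    or  torque >20 kN·m
    Mixed cases are resolved by the worse of the two indicators
    (an assumption, not stated in the abstract)."""
    def band(value, low, high):
        if value <= low:
            return 1
        return 2 if value <= high else 3
    grade = max(band(friction_kn, 100, 200), band(torque_knm, 10, 20))
    return {1: "Grade I (low)", 2: "Grade II (medium)", 3: "Grade III (high)"}[grade]

print(drilling_safety_grade(80, 8))     # Grade I (low)
print(drilling_safety_grade(150, 12))   # Grade II (medium)
print(drilling_safety_grade(250, 15))   # Grade III (high)
```
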
25 pages, 1751 KiB  
Review
Large Language Models for Adverse Drug Events: A Clinical Perspective
by Md Muntasir Zitu, Dwight Owen, Ashish Manne, Ping Wei and Lang Li
J. Clin. Med. 2025, 14(15), 5490; https://doi.org/10.3390/jcm14155490 - 4 Aug 2025
Abstract
Adverse drug events (ADEs) significantly impact patient safety and health outcomes. Manual ADE detection from clinical narratives is time-consuming, labor-intensive, and costly. Recent advancements in large language models (LLMs), including transformer-based architectures such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pretrained Transformer (GPT) series, offer promising methods for automating ADE extraction from clinical data. These models have been applied to various aspects of pharmacovigilance and clinical decision support, demonstrating potential in extracting ADE-related information from real-world clinical data. Additionally, chatbot-assisted systems have been explored as tools in clinical management, aiding in medication adherence, patient engagement, and symptom monitoring. This narrative review synthesizes the current state of LLMs in ADE detection from a clinical perspective, organizing studies into categories such as human-facing decision support tools, immune-related ADE detection, cancer-related and non-cancer-related ADE surveillance, and personalized decision support systems. In total, 39 articles were included in this review. Across domains, LLM-driven methods have demonstrated promising performances, often outperforming traditional approaches. However, critical limitations persist, such as domain-specific variability in model performance, interpretability challenges, data quality and privacy concerns, and infrastructure requirements. By addressing these challenges, LLM-based ADE detection could enhance pharmacovigilance practices, improve patient safety outcomes, and optimize clinical workflows. Full article
(This article belongs to the Section Pharmacology)

17 pages, 1256 KiB  
Systematic Review
Integrating Artificial Intelligence into Orthodontic Education: A Systematic Review and Meta-Analysis of Clinical Teaching Application
by Carlos M. Ardila, Eliana Pineda-Vélez and Anny Marcela Vivares Builes
J. Clin. Med. 2025, 14(15), 5487; https://doi.org/10.3390/jcm14155487 - 4 Aug 2025
Abstract
Background/Objectives: Artificial intelligence (AI) is rapidly emerging as a transformative force in healthcare education, including orthodontics. This systematic review and meta-analysis aimed to evaluate the integration of AI into orthodontic training programs, focusing on its effectiveness in improving diagnostic accuracy, learner engagement, and the perceived quality of AI-generated educational content. Materials and Methods: A comprehensive literature search was conducted across PubMed, Scopus, Web of Science, and Embase through May 2025. Eligible studies involved AI-assisted educational interventions in orthodontics. A mixed-methods approach was applied, combining meta-analysis and narrative synthesis based on data availability and consistency. Results: Seven studies involving 1101 participants—including orthodontic students, clinicians, faculty, and program directors—were included. AI tools ranged from cephalometric landmarking platforms to ChatGPT-based learning modules. A fixed-effects meta-analysis using two studies yielded a pooled Global Quality Scale (GQS) score of 3.69 (95% CI: 3.58–3.80), indicating moderate perceived quality of AI-generated content (I² = 64.5%). Due to methodological heterogeneity and limited statistical reporting in most studies, a narrative synthesis was used to summarize additional outcomes. AI tools enhanced diagnostic skills, learner autonomy, and perceived satisfaction, particularly among students and junior faculty. However, barriers such as limited curricular integration, lack of training, and faculty skepticism were recurrent. Conclusions: AI technologies, especially ChatGPT and digital cephalometry tools, show promise in orthodontic education. While learners demonstrate high acceptance, full integration is hindered by institutional and perceptual challenges. Strategic curricular reforms and targeted faculty development are needed to optimize AI adoption in clinical training. Full article
(This article belongs to the Special Issue Orthodontics: State of the Art and Perspectives)
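
As context for the pooled GQS figure, fixed-effects (inverse-variance) pooling works as below; the per-study means and standard errors are invented, chosen only so the arithmetic lands near the reported 3.69 (95% CI: 3.58–3.80):

```python
import numpy as np

# Hypothetical per-study Global Quality Scale means and standard errors.
means = np.array([3.60, 3.75])
ses = np.array([0.08, 0.07])

# Fixed-effects (inverse-variance) pooling.
w = 1.0 / ses**2                      # study weights
pooled = np.sum(w * means) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled GQS = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```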

32 pages, 1986 KiB  
Article
Machine Learning-Based Blockchain Technology for Secure V2X Communication: Open Challenges and Solutions
by Yonas Teweldemedhin Gebrezgiher, Sekione Reward Jeremiah, Xianjun Deng and Jong Hyuk Park
Sensors 2025, 25(15), 4793; https://doi.org/10.3390/s25154793 - 4 Aug 2025
Abstract
Vehicle-to-everything (V2X) communication is a fundamental technology in the development of intelligent transportation systems, encompassing vehicle-to-vehicle (V2V), infrastructure (V2I), and pedestrian (V2P) communications. This technology enables connected and autonomous vehicles (CAVs) to interact with their surroundings, significantly enhancing road safety, traffic efficiency, and driving comfort. However, as V2X communication becomes more widespread, it becomes a prime target for adversarial and persistent cyberattacks, posing significant threats to the security and privacy of CAVs. These challenges are compounded by the dynamic nature of vehicular networks and the stringent requirements for real-time data processing and decision-making. Much research focuses on novel technologies such as machine learning, blockchain, and cryptography for securing V2X communications. Our survey highlights the security challenges faced by V2X communications and assesses current ML and blockchain-based solutions, revealing significant gaps and opportunities for improvement. Specifically, our survey focuses on studies integrating ML, blockchain, and multi-access edge computing (MEC) for low-latency, robust, and dynamic security in V2X networks. Based on our findings, we outline a conceptual framework that synergizes ML, blockchain, and MEC to address some of the identified security challenges. This integrated framework demonstrates the potential for real-time anomaly detection, decentralized data sharing, and enhanced system scalability. The survey concludes by identifying future research directions and outlining the remaining challenges for securing V2X communications in the face of evolving threats. Full article
(This article belongs to the Section Vehicular Sensing)

10 pages, 426 KiB  
Proceeding Paper
Guiding or Misleading: Challenges of Artificial Intelligence-Generated Content in Heuristic Teaching: ChatGPT
by Ping-Kuo A. Chen
Eng. Proc. 2025, 103(1), 1; https://doi.org/10.3390/engproc2025103001 - 4 Aug 2025
Abstract
Artificial intelligence (AI)-generated content (AIGC) is an innovative technology that utilizes machine learning, AI models, reward modeling, and natural language processing (NLP) to create diverse digital content such as videos, images, and text. It has the potential to support various human activities, with significant implications for teaching and learning, and can facilitate heuristic teaching for educators. By using AIGC, teachers can create extensive knowledge content and effectively design instructional strategies to guide students, in line with heuristic teaching. However, incorporating AIGC into heuristic teaching raises controversies and concerns, as it can potentially mislead learning outcomes. Nevertheless, leveraging AIGC can greatly benefit teachers in enhancing heuristic teaching. When integrating AIGC to support heuristic teaching, its challenges and risks must be acknowledged and addressed. These include ensuring that users possess sufficient knowledge to identify incorrect information and content generated by AIGC, avoiding excessive reliance on AIGC, keeping users in control of their actions rather than being driven by AIGC, and scrutinizing and verifying the accuracy of AIGC-generated information and knowledge to preserve its effectiveness. Full article

25 pages, 394 KiB  
Article
SMART DShot: Secure Machine-Learning-Based Adaptive Real-Time Timing Correction
by Hyunmin Kim, Zahid Basha Shaik Kadu and Kyusuk Han
Appl. Sci. 2025, 15(15), 8619; https://doi.org/10.3390/app15158619 - 4 Aug 2025
Abstract
The exponential growth of autonomous systems demands robust security mechanisms that can operate within the extreme constraints of real-time embedded environments. This paper introduces SMART DShot, a groundbreaking machine learning-enhanced framework that transforms the security landscape of unmanned aerial vehicle motor control systems through seamless integration of adaptive timing correction and real-time anomaly detection within Digital Shot (DShot) communication protocols. Our approach addresses critical vulnerabilities in Electronic Speed Controller (ESC) interfaces by deploying four synergistic algorithms—Kalman Filter Timing Correction (KFTC), Recursive Least Squares Timing Correction (RLSTC), Fuzzy Logic Timing Correction (FLTC), and Hybrid Adaptive Timing Correction (HATC)—each optimized for specific error characteristics and attack scenarios. Through comprehensive evaluation encompassing 32,000 Monte Carlo test iterations (500 per scenario × 16 scenarios × 4 algorithms) across 16 distinct operational scenarios and PolarFire SoC Field-Programmable Gate Array (FPGA) implementation, we demonstrate exceptional performance with 88.3% attack detection rate, only 2.3% false positive incidence, and substantial vulnerability mitigation reducing Common Vulnerability Scoring System (CVSS) severity from High (7.3) to Low (3.1). Hardware validation on PolarFire SoC confirms practical viability with minimal resource overhead (2.16% Look-Up Table utilization, 16.57 mW per channel) and deterministic sub-10 microsecond execution latency. The Hybrid Adaptive Timing Correction algorithm achieves 31.01% success rate (95% CI: [30.2%, 31.8%]), representing a 26.5% improvement over baseline approaches through intelligent meta-learning-based algorithm selection. Statistical validation using Analysis of Variance confirms significant performance differences (F(3,1996) = 30.30, p < 0.001) with large effect sizes (Cohen’s d up to 4.57), where 64.6% of algorithm comparisons showed large practical significance. SMART DShot establishes a paradigmatic shift from reactive to proactive embedded security, demonstrating that sophisticated artificial intelligence can operate effectively within microsecond-scale real-time constraints while providing comprehensive protection against timing manipulation, de-synchronization, burst interference, replay attacks, coordinated multi-channel attacks, and firmware-level compromises. This work provides essential foundations for trustworthy autonomous systems across critical domains including aerospace, automotive, industrial automation, and cyber–physical infrastructure. These results conclusively demonstrate that ML-enhanced motor control systems can achieve both superior security (88.3% attack detection rate with 2.3% false positives) and operational performance (31.01% timing correction success rate, 26.5% improvement over baseline) simultaneously, establishing SMART DShot as a practical, deployable solution for next-generation autonomous systems. Full article
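
Of the four correction algorithms, KFTC is the simplest to sketch: a scalar Kalman filter tracking the timing offset of DShot pulse edges. The noise constants and drift model below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def kftc(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter timing correction: track a slowly drifting
    timing offset of DShot pulse edges from noisy per-frame measurements
    (units in microseconds; q and r are illustrative noise constants)."""
    x, p = 0.0, 1.0                    # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                         # predict: offset modeled as a random walk
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update with the measured offset
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

# Synthetic drifting offset plus measurement noise.
rng = np.random.default_rng(6)
true_drift = np.cumsum(rng.normal(0, 0.01, 500))
measured = true_drift + rng.normal(0, 0.1, 500)
corrected = kftc(measured)
print(np.abs(corrected - true_drift).mean())   # typically below the raw noise level
```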
