Search Results (1,542)

Search Parameters:
Keywords = industrial vision

27 pages, 19553 KiB  
Article
Fast Anomaly Detection for Vision-Based Industrial Inspection Using Cascades of Null Subspace PCA Detectors
by Muhammad Bilal and Muhammad Shehzad Hanif
Sensors 2025, 25(15), 4853; https://doi.org/10.3390/s25154853 - 7 Aug 2025
Abstract
Anomaly detection in industrial imaging is critical for ensuring quality and reliability in automated manufacturing processes. While several recently reported methods have demonstrated impressive detection performance on standard benchmarks, they rely on computationally intensive CNN architectures and post-processing techniques, necessitating access to high-end GPU hardware and limiting practical deployment in resource-constrained settings. In this study, we introduce a novel anomaly detection framework that leverages feature maps from a lightweight convolutional neural network (CNN) backbone, MobileNetV2, and cascaded detection to achieve notable accuracy as well as computational efficiency. The core of our method consists of two main components. First is a PCA-based anomaly detection module that specifically exploits near-zero-variance features. Contrary to traditional PCA methods, which tend to focus on the high-variance directions that encapsulate the dominant patterns in normal data, our approach demonstrates that the lower-variance directions (which are typically ignored) form an approximate null space where normal samples project near zero. Anomalous samples, by contrast, deviate inherently from the norm and therefore produce projections with significantly higher magnitudes in this space. This insight not only enhances sensitivity to true anomalies but also reduces computational complexity by eliminating operations such as matrix inversion or the calculation of Mahalanobis distances for correlated features, which are otherwise needed when normal behavior is modeled as a Gaussian distribution. Second, our framework employs a cascaded multi-stage decision process. Instead of combining features across layers, we treat the local features extracted from each layer as independent stages within a cascade. This cascading mechanism not only simplifies the computations at each stage by quickly eliminating clear cases but also progressively refines the anomaly decision, leading to enhanced overall accuracy. Experimental evaluations on the MVTec and VisA benchmark datasets demonstrate that our proposed approach achieves superior anomaly detection performance (99.4% and 91.7% AUROC, respectively) while maintaining a lower computational overhead than other methods. This framework provides a compelling solution for practical anomaly detection challenges in diverse application domains where competitive accuracy is needed with only minimal hardware resources. Full article
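The null-space idea described in the abstract can be sketched in a few lines: estimate the covariance of normal feature vectors, keep the smallest-eigenvalue directions instead of the largest (the opposite of classical PCA), and score a sample by the norm of its projection onto that subspace. The synthetic data, dimensions, and subspace size below are illustrative placeholders, and the cascade stage is omitted; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for backbone feature vectors: 500 "normal" training samples, 64-D.
X_train = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64)) * 0.1
X_train[:, :8] += rng.normal(size=(500, 8)) * 5.0   # a few high-variance directions

# Eigendecomposition of the covariance of normal features.
mu = X_train.mean(axis=0)
cov = np.cov(X_train - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order

# Keep the NEAR-ZERO-variance directions (the approximate null space),
# i.e. the opposite of classical PCA, which keeps the largest ones.
k = 32
null_basis = eigvecs[:, :k]                         # smallest-k eigenvectors

def anomaly_score(x):
    """Norm of the projection onto the approximate null space.
    Normal samples project near zero; anomalies project with larger magnitude."""
    return np.linalg.norm((x - mu) @ null_basis, axis=-1)

normal_scores = anomaly_score(X_train)
anomalous = X_train[:20] + rng.normal(size=(20, 64)) * 3.0   # synthetic deviations
print(normal_scores.mean(), anomaly_score(anomalous).mean())
```

Because no matrix inversion or Mahalanobis distance is involved, the per-sample cost is a single matrix-vector product, which is what makes the cascade cheap at each stage.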

32 pages, 1435 KiB  
Review
Smart Safety Helmets with Integrated Vision Systems for Industrial Infrastructure Inspection: A Comprehensive Review of VSLAM-Enabled Technologies
by Emmanuel A. Merchán-Cruz, Samuel Moveh, Oleksandr Pasha, Reinis Tocelovskis, Alexander Grakovski, Alexander Krainyukov, Nikita Ostrovenecs, Ivans Gercevs and Vladimirs Petrovs
Sensors 2025, 25(15), 4834; https://doi.org/10.3390/s25154834 - 6 Aug 2025
Abstract
Smart safety helmets equipped with vision systems are emerging as powerful tools for industrial infrastructure inspection. This paper presents a comprehensive state-of-the-art review of such VSLAM-enabled (Visual Simultaneous Localization and Mapping) helmets. We surveyed the evolution from basic helmet cameras to intelligent, sensor-fused inspection platforms, highlighting how modern helmets leverage real-time visual SLAM algorithms to map environments and assist inspectors. A systematic literature search was conducted targeting high-impact journals, patents, and industry reports. We classify helmet-integrated camera systems into monocular, stereo, and omnidirectional types and compare their capabilities for infrastructure inspection. We examine core VSLAM algorithms (feature-based, direct, hybrid, and deep-learning-enhanced) and discuss their adaptation to wearable platforms. Multi-sensor fusion approaches integrating inertial, LiDAR, and GNSS data are reviewed, along with edge/cloud processing architectures enabling real-time performance. This paper compiles numerous industrial use cases, from bridges and tunnels to plants and power facilities, demonstrating significant improvements in inspection efficiency, data quality, and worker safety. Key challenges are analyzed, including technical hurdles (battery life, processing limits, and harsh environments), human factors (ergonomics, training, and cognitive load), and regulatory issues (safety certification and data privacy). We also identify emerging trends, such as semantic SLAM, AI-driven defect recognition, hardware miniaturization, and collaborative multi-helmet systems. This review finds that VSLAM-equipped smart helmets offer a transformative approach to infrastructure inspection, enabling real-time mapping, augmented awareness, and safer workflows. 
We conclude by highlighting current research gaps, notably in standardizing systems and integrating with asset management, and provide recommendations for industry adoption and future research directions. Full article

30 pages, 3842 KiB  
Article
SABE-YOLO: Structure-Aware and Boundary-Enhanced YOLO for Weld Seam Instance Segmentation
by Rui Wen, Wu Xie, Yong Fan and Lanlan Shen
J. Imaging 2025, 11(8), 262; https://doi.org/10.3390/jimaging11080262 - 6 Aug 2025
Abstract
Accurate weld seam recognition is essential in automated welding systems, as it directly affects path planning and welding quality. With the rapid advancement of industrial vision, weld seam instance segmentation has emerged as a prominent research focus in both academia and industry. However, existing approaches still face significant challenges in boundary perception and structural representation. Due to the inherently elongated shapes, complex geometries, and blurred edges of weld seams, current segmentation models often struggle to maintain high accuracy in practical applications. To address this issue, a novel structure-aware and boundary-enhanced YOLO (SABE-YOLO) is proposed for weld seam instance segmentation. First, a Structure-Aware Fusion Module (SAFM) is designed to enhance structural feature representation through strip pooling attention and element-wise multiplicative fusion, targeting the difficulty in extracting elongated and complex features. Second, a C2f-based Boundary-Enhanced Aggregation Module (C2f-BEAM) is constructed to improve edge feature sensitivity by integrating multi-scale boundary detail extraction, feature aggregation, and attention mechanisms. Finally, the inner minimum point distance-based intersection over union (Inner-MPDIoU) is introduced to improve localization accuracy for weld seam regions. Experimental results on the self-built weld seam image dataset show that SABE-YOLO outperforms YOLOv8n-Seg by 3 percentage points in the AP(50–95) metric, reaching 46.3%. Meanwhile, it maintains a low computational cost (18.3 GFLOPs) and a small number of parameters (6.6M), while achieving an inference speed of 127 FPS, demonstrating a favorable trade-off between segmentation accuracy and computational efficiency. The proposed method provides an effective solution for high-precision visual perception of complex weld seam structures and demonstrates strong potential for industrial application. Full article
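The strip pooling attention named in the abstract's SAFM module can be illustrated generically: average a feature map along entire rows and entire columns, broadcast the two strips back, and use them to re-weight the map. The long, thin pooling windows suit elongated structures such as weld seams. This is a sketch of the generic operator, not the paper's exact module; shapes and the sigmoid gating are assumptions.

```python
import numpy as np

def strip_pool(feat):
    """Strip pooling on a (C, H, W) feature map: pool across the full width and
    the full height, broadcast back, and gate the input element-wise."""
    h_strip = feat.mean(axis=2, keepdims=True)   # (C, H, 1): pool across width
    w_strip = feat.mean(axis=1, keepdims=True)   # (C, 1, W): pool across height
    combined = h_strip + w_strip                 # broadcasts to (C, H, W)
    gate = 1.0 / (1.0 + np.exp(-combined))       # sigmoid attention weights
    return feat * gate                           # element-wise re-weighting

feat = np.random.default_rng(1).normal(size=(8, 32, 32))
out = strip_pool(feat)
print(out.shape)   # (8, 32, 32)
```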
(This article belongs to the Section Image and Video Processing)

31 pages, 34013 KiB  
Article
Vision-Based 6D Pose Analytics Solution for High-Precision Industrial Robot Pick-and-Place Applications
by Balamurugan Balasubramanian and Kamil Cetin
Sensors 2025, 25(15), 4824; https://doi.org/10.3390/s25154824 - 6 Aug 2025
Abstract
High-precision 6D pose estimation for pick-and-place operations remains a critical problem for industrial robot arms in manufacturing. This study introduces an analytics-based solution for 6D pose estimation designed for a real-world industrial application: it enables the Staubli TX2-60L (manufactured by Stäubli International AG, Horgen, Switzerland) robot arm to pick up metal plates from various locations and place them into a precisely defined slot on a brake pad production line. The system uses a fixed eye-to-hand Intel RealSense D435 RGB-D camera (manufactured by Intel Corporation, Santa Clara, California, USA) to capture color and depth data. A robust software infrastructure developed in LabVIEW (ver.2019) integrated with the NI Vision (ver.2019) library processes the images through a series of steps, including particle filtering, equalization, and pattern matching, to determine the X-Y positions and Z-axis rotation of the object. The Z-position of the object is calculated from the camera’s intensity data, while the remaining X-Y rotation angles are determined using the angle-of-inclination analytics method. It is experimentally verified that the proposed analytical solution outperforms the hybrid-based method (YOLO-v8 combined with PnP/RANSAC algorithms). Experimental results across four distinct picking scenarios demonstrate the proposed solution’s superior accuracy, with position errors under 2 mm, orientation errors below 1°, and a perfect success rate in pick-and-place tasks. Full article
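Recovering a part's X-Y position from a pixel match plus a depth value, as an eye-to-hand RGB-D setup like the one above does, rests on standard pinhole back-projection. The intrinsics below are made-up placeholder values, not the paper's calibration, and this sketch covers only the position step, not the rotation analytics.

```python
import numpy as np

# Hypothetical pinhole intrinsics (NOT the calibration used in the paper).
fx, fy = 615.0, 615.0        # focal lengths in pixels
cx, cy = 320.0, 240.0        # principal point

def backproject(u, v, z):
    """Pixel (u, v) with depth z (meters) -> camera-frame (X, Y, Z)."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# A pattern-match hit at pixel (400, 300) with 0.5 m measured depth:
p = backproject(400.0, 300.0, 0.5)
print(p)   # metric offsets from the optical axis, plus the depth itself
```

A hand-eye calibration transform would then map this camera-frame point into the robot's base frame before commanding the pick.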
(This article belongs to the Section Sensors and Robotics)

25 pages, 482 KiB  
Article
The Influence of Managers’ Safety Perceptions and Practices on Construction Workers’ Safety Behaviors in Saudi Arabian Projects: The Mediating Roles of Workers’ Safety Awareness, Competency, and Safety Actions
by Talal Mousa Alshammari, Musab Rabi, Mazen J. Al-Kheetan and Abdulrazzaq Jawish Alkherret
Safety 2025, 11(3), 77; https://doi.org/10.3390/safety11030077 - 5 Aug 2025
Abstract
Improving construction site safety remains a critical challenge in Saudi Arabia’s rapidly growing construction sector, where high accident rates and diverse labor forces demand evidence-based managerial interventions. This study investigated the influence of Managers’ Safety Perceptions and Practices (MSP) on Workers’ Safety Behaviors (WSB) in the Saudi construction industry, emphasizing the mediating roles of Workers’ Safety Awareness (WSA), Safety Competency (WSC), and Safety Actions (SA). The conceptual framework integrates these three mediators to explain how managerial attitudes and practices translate into frontline safety outcomes. A quantitative, cross-sectional design was adopted using a structured questionnaire distributed among construction workers, supervisors, and project managers. Of 384 distributed questionnaires, 352 valid responses were collected, and the data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) via SmartPLS 4. The findings revealed that MSP does not directly influence WSB but has significant indirect effects through WSA, WSC, and SA. Among these, WSC emerged as the most powerful mediator, followed by WSA and SA, indicating that competency is the most critical driver of safe worker behavior. These results provide robust empirical support for a multidimensional mediation model, highlighting the need for managers to enhance safety behaviors not merely through supervision but through fostering awareness and competency, providing technical training, and implementing proactive safety measures. Theoretically, this study contributes a novel and integrative framework to the occupational safety literature, particularly within underexplored Middle Eastern construction contexts. Practically, it offers actionable insights for safety managers, industry practitioners, and policymakers seeking to improve construction safety performance in alignment with Saudi Vision 2030. Full article
(This article belongs to the Special Issue Safety Performance Assessment and Management in Construction)

27 pages, 5228 KiB  
Article
Detection of Surface Defects in Steel Based on Dual-Backbone Network: MBDNet-Attention-YOLO
by Xinyu Wang, Shuhui Ma, Shiting Wu, Zhaoye Li, Jinrong Cao and Peiquan Xu
Sensors 2025, 25(15), 4817; https://doi.org/10.3390/s25154817 - 5 Aug 2025
Abstract
Automated surface defect detection in steel manufacturing is pivotal for ensuring product quality, yet it remains an open challenge owing to the extreme heterogeneity of defect morphologies—ranging from hairline cracks and microscopic pores to elongated scratches and shallow dents. Existing approaches, whether classical vision pipelines or recent deep-learning paradigms, struggle to simultaneously satisfy the stringent demands of industrial scenarios: high accuracy on sub-millimeter flaws, insensitivity to texture-rich backgrounds, and real-time throughput on resource-constrained hardware. Although contemporary detectors have narrowed the gap, they still exhibit pronounced sensitivity–robustness trade-offs, particularly in the presence of scale-varying defects and cluttered surfaces. To address these limitations, we introduce MBY (MBDNet-Attention-YOLO), a lightweight yet powerful framework that synergistically couples the MBDNet backbone with the YOLO detection head. Specifically, the backbone embeds three novel components: (1) HGStem, a hierarchical stem block that enriches low-level representations while suppressing redundant activations; (2) Dynamic Align Fusion (DAF), an adaptive cross-scale fusion mechanism that dynamically re-weights feature contributions according to defect saliency; and (3) C2f-DWR, a depth-wise residual variant that progressively expands receptive fields without incurring prohibitive computational costs. Building upon this enriched feature hierarchy, the neck employs our proposed MultiSEAM module—a cascaded squeeze-and-excitation attention mechanism operating at multiple granularities—to harmonize fine-grained and semantic cues, thereby amplifying weak defect signals against complex textures. 
Finally, we integrate the Inner-SIoU loss, which refines the geometric alignment between predicted and ground-truth boxes by jointly optimizing center distance, aspect ratio consistency, and IoU overlap, leading to faster convergence and tighter localization. Extensive experiments on two publicly available steel-defect benchmarks—NEU-DET and PVEL-AD—demonstrate the superiority of MBY. Without bells and whistles, our model achieves 85.8% mAP@0.5 on NEU-DET and 75.9% mAP@0.5 on PVEL-AD, surpassing the best-reported results by significant margins while maintaining real-time inference on an NVIDIA Jetson Xavier. Ablation studies corroborate the complementary roles of each component, underscoring MBY’s robustness across defect scales and surface conditions. These results suggest that MBY strikes an appealing balance between accuracy, efficiency, and deployability, offering a pragmatic solution for next-generation industrial quality-control systems. Full article
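The squeeze-and-excitation attention that the MultiSEAM neck builds on follows a standard recipe: global-average-pool the channels ("squeeze"), pass the result through a small two-layer bottleneck ("excitation"), and rescale each channel by a sigmoid gate. The sketch below shows that generic mechanism with random placeholder weights; it is not the paper's MultiSEAM module.

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """SE channel attention on a (C, H, W) map: squeeze to (C,), excite through
    a ReLU bottleneck, then rescale each channel by its sigmoid gate."""
    s = feat.mean(axis=(1, 2))                 # (C,) global average pool
    e = np.maximum(s @ w1, 0.0)                # (C/r,) ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(e @ w2)))     # (C,) per-channel gates in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(2)
C, r = 16, 4                                   # channels and reduction ratio (assumed)
feat = rng.normal(size=(C, 24, 24))
out = squeeze_excite(feat,
                     rng.normal(size=(C, C // r)),
                     rng.normal(size=(C // r, C)))
print(out.shape)   # (16, 24, 24)
```

Applying such gates at multiple granularities, as the abstract describes, amplifies channels that respond to weak defect signals while damping texture-dominated ones.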
(This article belongs to the Section Sensing and Imaging)

31 pages, 4141 KiB  
Article
Automated Quality Control of Candle Jars via Anomaly Detection Using OCSVM and CNN-Based Feature Extraction
by Azeddine Mjahad and Alfredo Rosado-Muñoz
Mathematics 2025, 13(15), 2507; https://doi.org/10.3390/math13152507 - 4 Aug 2025
Abstract
Automated quality control plays a critical role in modern industries, particularly in environments that handle large volumes of packaged products requiring fast, accurate, and consistent inspections. This work presents an anomaly detection system for candle jars commonly used in industrial and commercial applications, where obtaining labeled defective samples is challenging. Two anomaly detection strategies are explored: (1) a baseline model using convolutional neural networks (CNNs) as an end-to-end classifier and (2) a hybrid approach where features extracted by CNNs are fed into One-Class classification (OCC) algorithms, including One-Class SVM (OCSVM), One-Class Isolation Forest (OCIF), One-Class Local Outlier Factor (OCLOF), One-Class Elliptic Envelope (OCEE), One-Class Autoencoder (OCAutoencoder), and Support Vector Data Description (SVDD). Both strategies are trained primarily on non-defective samples, with only a limited number of anomalous examples used for evaluation. Experimental results show that both the pure CNN model and the hybrid methods achieve excellent classification performance. The end-to-end CNN reached 100% accuracy, precision, recall, F1-score, and AUC. The best-performing hybrid model, CNN-based feature extraction followed by OCIF, also achieved 100% across all evaluation metrics, confirming the effectiveness and robustness of the proposed approach. Other OCC algorithms consistently delivered strong results, with all metrics above 95%, indicating solid generalization from predominantly normal data. This approach demonstrates strong potential for quality inspection tasks in scenarios with scarce defective data. Its ability to generalize effectively from mostly normal samples makes it a practical and valuable solution for real-world industrial inspection systems. Future work will focus on optimizing real-time inference and exploring advanced feature extraction techniques to further enhance detection performance. Full article

17 pages, 2222 KiB  
Article
A Comprehensive User Acceptance Evaluation Framework of Intelligent Driving Based on Subjective and Objective Integration—From the Perspective of Value Engineering
by Wang Zhang, Fuquan Zhao, Zongwei Liu, Haokun Song and Guangyu Zhu
Systems 2025, 13(8), 653; https://doi.org/10.3390/systems13080653 - 2 Aug 2025
Abstract
Intelligent driving technology is expected to reshape urban transportation, but its promotion is hindered by user acceptance challenges and diverse technical routes. This study proposes a comprehensive user acceptance evaluation framework for intelligent driving from the perspective of value engineering (VE). The novelty of this framework lies in three aspects: (1) It unifies behavioral theory and utility theory under the value engineering framework, and it extracts key indicators such as safety, travel efficiency, trust, comfort, and cost, thus addressing the issue of the lack of integration between subjective and objective factors in previous studies. (2) It establishes a systematic mapping mechanism from technical solutions to evaluation indicators, filling the gap of insufficient targeting at different technical routes in the existing literature. (3) It quantifies acceptance differences via VE’s core formula of V = F/C, overcoming the ambiguity of non-technical evaluation in prior research. A case study comparing single-vehicle intelligence vs. collaborative intelligence and different sensor combinations (vision-only, map fusion, and lidar fusion) shows that collaborative intelligence and vision-based solutions offer higher comprehensive acceptance due to balanced functionality and cost. This framework guides enterprises in technical strategy planning and assists governments in formulating industrial policies by quantifying acceptance differences across technical routes. Full article
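Value engineering's core formula V = F/C, which the framework uses to quantify acceptance differences across technical routes, is a simple ratio of an aggregated functionality score to cost. The numbers below are invented placeholders purely to illustrate the computation, not results from the study.

```python
# V = F / C: functionality score over cost, per technical route.
# All F and C values are hypothetical, for illustration only.
routes = {
    "vision-only":   {"F": 0.78, "C": 0.60},
    "map fusion":    {"F": 0.82, "C": 0.75},
    "lidar fusion":  {"F": 0.90, "C": 1.00},
    "collaborative": {"F": 0.88, "C": 0.70},
}

value = {name: s["F"] / s["C"] for name, s in routes.items()}
best = max(value, key=value.get)
print(best, round(value[best], 2))   # vision-only 1.3
```

In the actual framework, F would itself be a weighted aggregate of the subjective and objective indicators (safety, travel efficiency, trust, comfort) rather than a single number.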
(This article belongs to the Special Issue Modeling, Planning and Management of Sustainable Transport Systems)

20 pages, 4569 KiB  
Article
Lightweight Vision Transformer for Frame-Level Ergonomic Posture Classification in Industrial Workflows
by Luca Cruciata, Salvatore Contino, Marianna Ciccarelli, Roberto Pirrone, Leonardo Mostarda, Alessandra Papetti and Marco Piangerelli
Sensors 2025, 25(15), 4750; https://doi.org/10.3390/s25154750 - 1 Aug 2025
Abstract
Work-related musculoskeletal disorders (WMSDs) are a leading concern in industrial ergonomics, often stemming from sustained non-neutral postures and repetitive tasks. This paper presents a vision-based framework for real-time, frame-level ergonomic risk classification using a lightweight Vision Transformer (ViT). The proposed system operates directly on raw RGB images without requiring skeleton reconstruction, joint angle estimation, or image segmentation. A single ViT model simultaneously classifies eight anatomical regions, enabling efficient multi-label posture assessment. Training is supervised using a multimodal dataset acquired from synchronized RGB video and full-body inertial motion capture, with ergonomic risk labels derived from RULA scores computed on joint kinematics. The system is validated on realistic, simulated industrial tasks that include common challenges such as occlusion and posture variability. Experimental results show that the ViT model achieves state-of-the-art performance, with F1-scores exceeding 0.99 and AUC values above 0.996 across all regions. Compared to a previous CNN-based system, the proposed model improves classification accuracy and generalizability while reducing complexity and enabling real-time inference on edge devices. These findings demonstrate the model’s potential for unobtrusive, scalable ergonomic risk monitoring in real-world manufacturing environments. Full article
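The multi-label output described above, one model flagging eight anatomical regions per frame, implies independent per-region sigmoids rather than a softmax over regions. The sketch below shows only that final thresholding step; the region names and 0.5 threshold are assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hypothetical region names; the paper specifies eight anatomical regions
# but this listing does not enumerate them.
REGIONS = ["neck", "trunk", "l_shoulder", "r_shoulder",
           "l_elbow", "r_elbow", "l_wrist", "r_wrist"]

def risk_labels(logits, threshold=0.5):
    """Turn one frame's per-region logits (8,) into binary at-risk flags:
    independent sigmoids per region, no softmax across regions."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return {r: bool(p >= threshold) for r, p in zip(REGIONS, probs)}

flags = risk_labels([2.0, -1.5, 0.1, -0.2, 3.0, -3.0, 0.7, -0.7])
print(flags["neck"], flags["trunk"])   # True False
```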
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)

19 pages, 4612 KiB  
Article
User-Centered Design of a Computer Vision System for Monitoring PPE Compliance in Manufacturing
by Luis Alberto Trujillo-Lopez, Rodrigo Alejandro Raymundo-Guevara and Juan Carlos Morales-Arevalo
Computers 2025, 14(8), 312; https://doi.org/10.3390/computers14080312 - 1 Aug 2025
Abstract
In manufacturing environments, the proper use of Personal Protective Equipment (PPE) is essential to prevent workplace accidents. Despite this need, existing PPE monitoring methods remain largely manual and suffer from limited coverage, significant errors, and inefficiencies. This article focuses on addressing this deficiency by designing a computer vision desktop application for automated monitoring of PPE use. This system uses lightweight YOLOv8 models, developed to run on the local system and operate even in industrial locations with limited network connectivity. Using a Lean UX approach, the development of the system involved creating empathy maps, assumptions, product backlog, followed by high-fidelity prototype interface components. C4 and physical diagrams helped define the system architecture to facilitate modifiability, scalability, and maintainability. Usability was verified using the System Usability Scale (SUS), with a score of 87.6/100 indicating “excellent” usability. The findings demonstrate that a user-centered design approach, considering user experience and technical flexibility, can significantly advance the utility and adoption of AI-based safety tools, especially in small- and medium-sized manufacturing operations. This article delivers a validated and user-centered design solution for implementing machine vision systems into manufacturing safety processes, simplifying the complexities of utilizing advanced AI technologies and their practical application in resource-limited environments. Full article

14 pages, 1974 KiB  
Article
The Identification of the Competency Components Necessary for the Tasks of Workers’ Representatives in the Field of OSH to Support Their Selection and Development, as Well as to Assess Their Effectiveness
by Peter Leisztner, Ferenc Farago and Gyula Szabo
Safety 2025, 11(3), 73; https://doi.org/10.3390/safety11030073 - 1 Aug 2025
Abstract
The European Union Council’s zero vision aims to eliminate workplace fatalities, while Industry 4.0 presents new challenges for occupational safety. Despite HR professionals assessing managers’ and employees’ competencies, no system currently exists to evaluate the competencies of workers’ representatives in occupational safety and health (OSH). It is crucial to establish the necessary competencies for these representatives to avoid their selection based on personal bias, ambition, or coercion. The main objective of the study is to identify the competencies and their components required for workers’ representatives in the field of occupational safety and health by following the steps of the DACUM method with the assistance of OSH professionals. First, tasks were identified through semi-structured interviews conducted with eight occupational safety experts. In the second step, a focus group consisting of 34 OSH professionals (2 invited guests and 32 volunteers) determined the competencies and their components necessary to perform those tasks. Finally, the results were validated through an online questionnaire sent to the 32 volunteer participants of the focus group, from which 11 responses (34%) were received. The research categorized the competencies into three groups: two groups of core competencies (occupational safety knowledge and professional knowledge) and one group of distinguishing competencies (personal attributes). Within occupational safety knowledge, 10 components were defined; for professional expertise, 7 components; and for personal attributes, 16 components. Based on the results, it was confirmed that all participants of the tripartite system have an important role in the training and development of workers’ representatives in the field of occupational safety and health. The results indicate that although OSH representation is not yet a priority in Hungary, there is a willingness to collaborate with competent, well-prepared representatives.
The study emphasizes the importance of clearly defining and assessing the required competencies. Full article

32 pages, 5560 KiB  
Article
Design of Reconfigurable Handling Systems for Visual Inspection
by Alessio Pacini, Francesco Lupi and Michele Lanzetta
J. Manuf. Mater. Process. 2025, 9(8), 257; https://doi.org/10.3390/jmmp9080257 - 31 Jul 2025
Abstract
Industrial Vision Inspection Systems (VISs) often struggle to adapt to the increasing variability of modern manufacturing due to the inherent rigidity of their hardware architectures. Although the Reconfigurable Manufacturing System (RMS) paradigm was introduced in the early 2000s to overcome these limitations, designing such reconfigurable machines remains a complex, expert-dependent, and time-consuming task. This is primarily due to the lack of structured methodologies and the reliance on trial-and-error processes. In this context, this study proposes a novel theoretical framework to facilitate the design of fully reconfigurable handling systems for VISs, with a particular focus on fixture design. The framework is grounded in Model-Based Definition (MBD), embedding semantic information directly into the 3D CAD models of the inspected product. As an additional contribution, a general hardware architecture for the inspection of axisymmetric components is presented. This architecture integrates an anthropomorphic robotic arm, Numerically Controlled (NC) modules, and adaptable software and hardware components to enable automated, software-driven reconfiguration. The proposed framework and architecture were applied in an industrial case study conducted in collaboration with a leading automotive half-shaft manufacturer. The resulting system, implemented across seven automated cells, successfully inspected over 200 part types from 12 part families and detected more than 60 defect types, with a cycle time below 30 s per part. Full article
24 pages, 1223 KiB  
Article
Breaking Barriers: Financial and Operational Strategies for Direct Operations in Saudi Arabia
by Samar S. Alharbi
Sustainability 2025, 17(15), 6949; https://doi.org/10.3390/su17156949 - 31 Jul 2025
Abstract
This study investigates the key factors enabling the transition from distributor-based models to direct operations among companies in Saudi Arabia, in alignment with Vision 2030’s goals of economic diversification and operational efficiency. The study is based on quantitative data collected from 528 questionnaire responses representing diverse industries and professional roles. The results highlight that technological integration and regulatory negotiation are essential for a smooth transition to direct operations. Furthermore, environmental sustainability practices and stakeholder involvement significantly affect the adoption of this transition, often acting as moderators and mediators. The findings emphasize the importance of aligning operational strategies with national development goals to enhance efficiency and resilience. This study also examines how transitioning to direct operations impacts financial efficiency and contributes to improved financial performance and sustainability. This study provides practical recommendations for policymakers and business leaders to address operational challenges and improve their financial and operational performance.

20 pages, 3729 KiB  
Article
Can AIGC Aid Intelligent Robot Design? A Tentative Research of Apple-Harvesting Robot
by Qichun Jin, Jiayu Zhao, Wei Bao, Ji Zhao, Yujuan Zhang and Fuwen Hu
Processes 2025, 13(8), 2422; https://doi.org/10.3390/pr13082422 - 30 Jul 2025
Abstract
Artificial intelligence (AI)-generated content (AIGC) is fundamentally transforming multiple sectors, including materials discovery, healthcare, education, scientific research, and industrial manufacturing. Given the complexities and challenges of intelligent robot design, AIGC has the potential to offer a new paradigm, assisting in conceptual and technical design, functional module design, and the training of perception abilities to accelerate prototyping. Taking the design of an apple-harvesting robot as an example, we demonstrate a basic framework for an AIGC-assisted robot design methodology that leverages the generation capabilities of available multimodal large language models, together with human intervention to mitigate AI hallucination and hidden risks. We then study the enhancement effect on the robot perception system of expanding the actual apple image dataset with apple images generated by large vision-language models. Further, an apple-harvesting robot prototype based on the AIGC-aided design is demonstrated; a pick-up experiment in a simulated scene indicates that it achieves a harvesting success rate of 92.2% and good terrain traversability, with a maximum climbing angle of 32°. This tentative research suggests that, although not an autonomous design agent, an AIGC-driven design workflow can alleviate the significant complexities and challenges of intelligent robot design, especially for beginners and young engineers.
(This article belongs to the Special Issue Design and Control of Complex and Intelligent Systems)

28 pages, 2789 KiB  
Review
A Review of Computer Vision and Deep Learning Applications in Crop Growth Management
by Zhijie Cao, Shantong Sun and Xu Bao
Appl. Sci. 2025, 15(15), 8438; https://doi.org/10.3390/app15158438 - 30 Jul 2025
Abstract
Agriculture is the foundational industry for human survival, profoundly impacting economic, ecological, and social dimensions. In the face of global challenges such as rapid population growth, resource scarcity, and climate change, achieving technological innovation in agriculture and advancing smart farming have become increasingly critical. In recent years, deep learning and computer vision have developed rapidly. Key areas in computer vision—such as deep learning-based image processing, object detection, and multimodal fusion—are rapidly transforming traditional agricultural practices. Processes in agriculture, including planting planning, growth management, harvesting, and post-harvest handling, are shifting from experience-driven methods to digital and intelligent approaches. This paper systematically reviews applications of deep learning and computer vision in agricultural growth management over the past decade, categorizing them into four key areas: crop identification, grading and classification, disease monitoring, and weed detection. Additionally, we introduce classic methods and models in computer vision and deep learning, discussing approaches that utilize different types of visual information. Finally, we summarize current challenges and limitations of existing methods, providing insights for future research and promoting technological innovation in agriculture.
(This article belongs to the Section Agricultural Science and Technology)
