Advances in Computer Vision and Image Processing for Industrial Processes

A special issue of Processes (ISSN 2227-9717). This special issue belongs to the section "Manufacturing Processes and Systems".

Deadline for manuscript submissions: 31 December 2025

Special Issue Editor

Dr. Changsoo Je
Guest Editor
Grand Power Solution Co., Ltd., Seoul, Republic of Korea
Interests: computer vision; computer graphics; image processing; photonics

Special Issue Information

Dear Colleagues,

This Special Issue aims to bring together pioneering research and advanced applications of computer vision and image processing for enhancing industrial processes. With the rapid development of artificial intelligence and machine learning technologies, computer vision and image processing are becoming integral to modern industrial systems, improving efficiency, quality, and automation.

We invite researchers, practitioners, and industry experts to submit original research articles, reviews, and case studies that showcase these technologies' latest advancements and applications in industrial contexts. Topics of interest include, but are not limited to:

  • Industrial Automation and Robotics: Exploration of how computer vision and image processing are revolutionizing automation and robotic systems, including real-time monitoring, autonomous navigation, and defect detection.
  • Quality Control and Inspection: Innovative approaches for using image processing techniques to enhance quality control measures, such as surface inspection, product sorting, and flaw detection.
  • Predictive Maintenance and Monitoring: Contributions highlighting computer vision's role in predictive maintenance, anomaly detection, and monitoring of industrial equipment and processes.
  • Smart Manufacturing and Industry 4.0: Research integrating computer vision with IoT and smart manufacturing technologies to foster Industry 4.0 initiatives, including smart factories and intelligent supply chains.
  • Augmented and Virtual Reality: Applications of AR and VR in industrial training, simulation, and maintenance, leveraging advanced image processing techniques.
  • Data Fusion and Sensor Integration: Studies combining computer vision with other sensor data (e.g., thermal, lidar) to enhance situational awareness and decision-making in industrial environments.
  • AI and Deep Learning Techniques: Exploration of novel AI and deep learning algorithms for image recognition, object detection, and scene understanding tailored to industrial applications.
  • Human–Machine Interaction: Research on improving human–machine interfaces using computer vision for gesture recognition, user behavior analysis, and augmented user interfaces.
  • Environmental Monitoring and Safety: Image processing applications in monitoring industrial environments for safety, emissions control, and environmental compliance.

We encourage submissions that demonstrate practical implementations, case studies from industry, and theoretical advancements that push the boundaries of current technology. All submissions will undergo a rigorous peer-review process to ensure high-quality contributions to this evolving field.

Dr. Changsoo Je
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Processes is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • image processing
  • industrial automation
  • quality control
  • predictive maintenance
  • smart manufacturing
  • machine learning
  • deep learning
  • robotics
  • Industry 4.0

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

Article
Research on Image Stitching Based on an Improved LightGlue Algorithm
by Yuening Feng, Fei Zhang, Xiaozhan Li, Xiong Xiao, Lijun Wang and Xiaofei Xiang
Processes 2025, 13(6), 1687; https://doi.org/10.3390/pr13061687 - 28 May 2025
Abstract
In traditional centralized steel plant production monitoring systems, there are two major problems. On the one hand, the limited shooting angles of cameras make it impossible to capture comprehensive information. On the other hand, using multiple cameras to display monitoring screens separately on a large screen leads to clutter and easy omission of key information. To address the above-mentioned issues, this paper proposes an image stitching technique based on an improved LightGlue algorithm. First of all, this paper employs the SuperPoint (Self-Supervised Interest Point Detection and Description) algorithm as the feature extraction algorithm. The experimental results show that this algorithm outperforms traditional algorithms both in terms of feature extraction speed and extraction accuracy. Then, the LightGlue (Local Feature Matching at Light Speed) algorithm is selected as the feature matching algorithm, and it is optimized and improved by combining it with the Agglomerative Clustering (AGG) algorithm. The experimental results indicate that this improvement effectively increases the speed of feature matching. Compared with the original LightGlue algorithm, the matching efficiency is improved by 26.2%. Finally, aiming at the problems of parallax and ghosting existing in the image fusion process, this paper proposes a pixel dynamic adaptive fusion strategy. A local homography matrix strategy is proposed in the geometric alignment stage, and a pixel difference fusion strategy is proposed in the pixel fusion stage. The experimental results show that this improvement successfully solves the problems of parallax and ghosting and achieves a good image stitching effect.
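
For orientation, the sketch below illustrates the general two-image stitching pipeline the abstract describes: feature extraction, feature matching, robust homography estimation, and warping with fusion. It uses OpenCV's classical SIFT detector and ratio-test matching as stand-ins for the learned SuperPoint and LightGlue components, which require their own pretrained models and are not reproduced here; the input file names are hypothetical, and the plain overwrite of the overlap region is only a placeholder for the paper's adaptive pixel fusion strategy.

```python
# Minimal two-image stitching sketch. SIFT + ratio-test matching stand in for
# the SuperPoint/LightGlue components described in the paper; the stages
# (extract, match, estimate homography, warp and fuse) are the same.
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    # 1. Feature extraction (stand-in for SuperPoint keypoints/descriptors).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # 2. Feature matching with Lowe's ratio test (stand-in for LightGlue).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # 3. Robust homography estimation with RANSAC (maps right-image points
    #    into the left image's coordinate frame).
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Warp the right image and paste the left image on top. A simple
    #    overwrite is used here; the paper's adaptive pixel fusion would
    #    instead blend the overlap to suppress parallax and ghosting.
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[0:h, 0:w] = img_left
    return canvas

if __name__ == "__main__":
    left = cv2.imread("cam_left.jpg")    # hypothetical input file names
    right = cv2.imread("cam_right.jpg")
    cv2.imwrite("stitched.jpg", stitch_pair(left, right))
```

Swapping the SIFT/ratio-test stage for SuperPoint and LightGlue would change only steps 1–2; the homography estimation and warping stages are unaffected.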

Article
Cost-Efficient Active Transfer Learning Framework for Object Detection from Engineering Documents
by Yu-Ri Han, Donghyun Park, Young-Suk Han and Jae-Yoon Jung
Processes 2025, 13(6), 1657; https://doi.org/10.3390/pr13061657 - 25 May 2025
Abstract
Recently, engineering companies have started to digitise documents in image form to analyse their meaning and extract important content. However, many engineering and contract documents contain different types of components such as texts, tables, and forms, which often hinder accurate interpretation by simple optical character recognition. Therefore, document object detection (DOD) has been studied as a preprocessing step for optical character recognition. Given the ease of acquiring image data, reducing annotation time and effort through transfer learning and active learning has emerged as a key research challenge. In this study, a cost-efficient active transfer learning (ATL) framework for DOD is presented to minimise the effort and cost of time-consuming image annotation for transfer learning. Specifically, three new sample evaluation measures are proposed to enhance the sampling performance of ATL. The proposed framework performed well in ATL experiments of DOD for invitation-to-bid documents. In the experiments, the DOD model was trained on only half of the labelled images, but, in terms of the F1-score, it achieved a similar performance as a DOD model trained on all labelled images. In particular, one of the proposed sampling measures, ambiguity, showed the best sampling performance compared to existing measures, such as entropy and uncertainty. The efficient sample evaluation measures proposed in this study are expected to reduce the time and effort required for ATL.
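
As a rough illustration of the selection step at the heart of such an active learning loop, the sketch below ranks unlabelled images by a generic entropy-based uncertainty score and returns the top candidates for annotation. The entropy criterion corresponds to one of the baseline measures mentioned in the abstract; the paper's proposed ambiguity measure and its detector interface are not reproduced here, so the `predict` callable and the per-box probability format are assumptions for illustration only.

```python
# Sketch of one round of uncertainty-based sample selection for active
# learning, assuming a detector wrapper that returns per-box class
# probabilities for an image. The entropy score is a generic baseline;
# the paper's "ambiguity" measure is not reproduced here.
import math
from typing import Callable, Dict, List, Sequence

def entropy(probs: Sequence[float]) -> float:
    # Shannon entropy of one predicted box's class distribution.
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def score_image(box_probs: List[Sequence[float]]) -> float:
    # Score an image by the mean entropy of its predicted boxes; images
    # with no detections get an infinite score so they are always reviewed.
    if not box_probs:
        return float("inf")
    return sum(entropy(p) for p in box_probs) / len(box_probs)

def select_for_annotation(
    unlabeled_ids: List[str],
    predict: Callable[[str], List[Sequence[float]]],  # hypothetical detector call
    budget: int,
) -> List[str]:
    # Rank unlabelled images by informativeness and return the `budget`
    # most uncertain ones to send to human annotators.
    scores: Dict[str, float] = {i: score_image(predict(i)) for i in unlabeled_ids}
    return sorted(scores, key=scores.get, reverse=True)[:budget]
```

In a full ATL loop, this selection step would alternate with fine-tuning the pretrained detector on the newly annotated samples until the annotation budget is exhausted.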
