Applications of Machine Learning and Computer Vision in Industry 4.0

Faculty of Electrical Engineering and Information Technology, Slovak University of Technology in Bratislava, 841 04 Bratislava, Slovakia
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2431
Submission received: 29 February 2024 / Accepted: 5 March 2024 / Published: 13 March 2024
(This article belongs to the Special Issue Applications of Machine Learning and Computer Vision in Industry 4.0)

1. Introduction

Industry is among the most important economic activities of humankind. In general terms, it comprises activities in which raw materials and inputs are transformed into new, value-added products. These products are destined for final consumption or are used in further production processes, ensuring the continuous development of both human society and industrial production. Modern recognition methods and intelligent diagnostics are closely linked to the current fourth industrial revolution [1].
Industrial development is closely linked to technological development and vice versa. The mass introduction of machine production by factories resulted in the emergence of a powerful impetus for social development as well—the Industrial Revolution. Such impulses brought about fundamental industrial and social changes, which can be referred to by the adjectives first, second and third industrial revolution [1,2].
In the late 18th century, the transition from manual to machine production was set in motion, giving rise to the changes known as the First Industrial Revolution. An important catalyst for these changes was the invention of the steam engine, patented by James Watt. Production and travel costs fell while the speed of production and transportation increased, driving the growth of cities and the migration of people in search of work [3].
The turn of the 19th and 20th centuries can be considered the beginning of the Second Industrial Revolution. Electricity and the electric motor became important sources of power, the chemical and petrochemical industries developed, and motoring and the manufacture of automobiles and aeroplanes took off. Automobile production introduced a revolutionary manufacturing method: the belt-driven assembly line. These principles of mass production were transferred to other manufacturing industries, increasing labor productivity and reducing production costs. However, mass production also carried negatives. Products were produced in huge quantities, their variability was low, and any change in product mix or production meant high costs. Large corporate firms with thousands of employees emerged, and the disparity between developed and underdeveloped countries grew [1,2].
The Third Industrial Revolution began in the late 1960s and early 1970s, driven mainly by progressive advances in communication technology. Automation, computer control, and electronic and information technologies were introduced into manufacturing. Traditional heavy industries were no longer the backbone of the economy, and a transformation took place: investment increased in banking, finance, insurance, trade, telecommunications and tourism. Information technology, science, research and innovation began to play an important role in the manufacturing sector. Product life cycles shortened, product variability increased, and the process of digitizing industry began [4].

2. Industry 4.0

Industry 4.0, also referred to as the fourth industrial revolution, is a complex concept representing the highest degree of current automation, interoperability, data exchange and decentralization in manufacturing systems and technologies. It is defined as an integrating concept for technologies, systems and value chain organization. It combines Cyber-Physical Systems (CPS), the Internet of Services (IoS) and the Internet of Things (IoT). The term Industry 4.0 was used for the first time at the Hannover industrial exhibition in 2011 [5].
Industry 4.0 facilitates the vision and implementation of Smart Factories. In the modular structure of the Industry 4.0 “Smart Factory,” cyber-physical systems control physical processes, form a virtual copy of the physical world, and carry out decentralized decision-making. Through the Internet of Things, cyber-physical systems also communicate and interact with people in real time [6].

2.1. Roles of Intelligent Diagnostics in Industry 4.0

The basis of the Industry 4.0 concept is the full use of interconnected cyber-physical systems equipped with artificial intelligence elements, which will be able to autonomously provide selected activities in production processes that were previously provided by humans. The principles of this concept can be applied to both industrial production and service provision [7].
The development and implementation of automation and process control conceived in this way requires the development of new methods of automatic control. However, new tasks will also bring new challenges that can only be addressed by applying and developing the latest knowledge from a number of engineering disciplines: machine perception systems, intelligent sensing, intelligent human–machine and machine-to-machine communication, computer vision, machine learning, etc. New insights from these fields will enable modelling and subsequent replacement of human decision-making activities by machine activities [7].
Artificial intelligence will play a key role in new tasks and challenges. This discipline offers solutions that are as efficient as possible, provide flexibility, are able to take full advantage of shared data storage (cloud solutions) and fully exploit the concept of the Internet of Things [8].
The basic characteristics of control and decision-making systems in Industry 4.0 are their automated behavior with elements of artificial intelligence, openness, the ability to integrate into large-scale units, the ability to communicate intelligently with people and with each other, high security (protection of data and communications from cyber-attacks), and also the fact that they will be able to be remotely monitored, diagnosed and their activities managed [7].
The goals of Industry 4.0 require the involvement and development of cybernetic practices and methods that will be applied across a wide range of industrial and societal settings. New practices in the automation of both industrial and societal processes, namely monitoring, control, decision-making, diagnostics, planning, etc., require research and development in, for example, the following areas [7]:
  • Research and development of intelligent sensors providing information on the progress of production processes and monitoring essential parameters of the production process and products;
  • Development of methods and means of intelligent perception of the environment and tools for intelligent man-machine communication (computer vision, speech and language processing) and industrial machine-to-machine communication (e.g., wireless communication for mechatronic applications);
  • Research and development of methods for the analysis of collected data to enable efficient management of production as part of the entire value chain (from input components and raw materials to maximum satisfaction of customer needs), including on-line and off-line processing of large-scale data (“big data”), e.g., for fast information retrieval and knowledge extraction, and for technical diagnostics (identification of atypical behavior, fault prediction, etc.);
  • Research and development of artificial intelligence methods and their application in advanced methods for automated decision-making, control, diagnosis and monitoring in areas of technical and social practice.
Applied research in AI is therefore crucial to the application of Industry 4.0 ideas—the results from these areas are literally the foundation of all Industry 4.0 solutions. Therefore, this part of research can be considered a priority, indispensable and indisputable [7].
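One of the data-analysis directions listed above, the identification of atypical behavior in process data, can be illustrated with a minimal, hypothetical sketch: a rolling z-score detector that flags sensor readings deviating strongly from recent history. The window size and threshold below are illustrative assumptions, not values from the text, and real deployments would use richer models; only the shape of the computation (learn normal behavior, flag deviations) is the point.

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_anomalies(readings, window=20, threshold=3.0):
    """Flag readings whose z-score vs. the trailing window exceeds threshold.

    A toy stand-in for the 'identification of atypical behavior' task
    mentioned in the text; window and threshold are illustrative.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(readings):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(x)
    return anomalies

# Example: a steady process signal with one spike at index 30.
signal = [1.0 + 0.01 * (i % 5) for i in range(60)]
signal[30] = 5.0
print(rolling_zscore_anomalies(signal))  # the spike at index 30 is flagged
```

The same pattern scales up to fault prediction when the simple statistics are replaced by a learned model of normal operation.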

2.2. Machine and Deep Learning in Industrial Processes

In recent years, the real possibilities and capabilities of artificial intelligence (AI) have become more and more visible. By combining advanced intelligent technologies, AI enables devices to perform tasks that previously required human intelligence. It is advancing rapidly, and its range of application across industries and functions continues to expand [9].
As a result of digitalization and the advent of the Industry 4.0 concept, industry is undergoing a significant transformation, and AI is one of the tools bringing about this change. AI has evolved in its capabilities over the years and has found application in various areas of industrial manufacturing and automation. The integration of AI with other advanced technologies in the industrial ecosystem will enable manufacturers to gain a strong foothold in the Industry 4.0 concept. Automation in manufacturing is implemented through systems such as the programmable logic controller (PLC), distributed control system (DCS) and SCADA. Smart manufacturing, however, interconnects advanced AI, Industrial Internet of Things (IIoT) and analytics technologies embedded in these traditional systems to increase manufacturing automation, improve process quality and optimization, and achieve higher cost savings [9].
The introduction of AI into industrial manufacturing will result in machines no longer requiring special programming to perform their tasks. Instead, these machines will have the potential to learn from their own experiences. This concept has been gaining a lot of momentum in recent years due to its ability to leverage the vast amount of data that can be generated as a result of new industrial concepts such as the Industrial Internet of Things—IIoT. In order to be competitive, it is essential that businesses develop the skills that can help them exploit the opportunities of digitalization. These applications of AI vary for different manufacturing industries [10].

2.3. Machine and Deep Learning in Machine Vision Applications

The first optoelectronic CCD sensor was developed by Kodak in the 1970s. In parallel, CMOS sensors were developed, initially as a cheaper option with noticeably poorer image quality than CCDs. CMOS technology is used to produce most integrated circuits, such as semiconductor memories in computers. Through continued development, the disadvantages of CMOS sensors have been gradually eliminated or suppressed, and CCD sensors are slowly being phased out [11].
However, development is constantly moving forward, so silicon-based semiconductor sensors as we know them today may be a thing of the past in a few years. Research is testing elements and materials such as germanium and graphene, which show much better properties than silicon, for example, several orders of magnitude higher sensitivity. Cost estimates for mass production also speak in favor of the new sensor types. Photosensitive sensors have applications in both consumer electronics and industrial digital cameras. For machine vision itself, the impact of sensor development is evident in the ability to better record the imaged scene in the form of higher-quality digital images. Other hardware innovations include higher transmission speeds, such as the recently introduced 25 GigE cameras, although even devices supporting 10 GigE have so far been slow to appear. Another example is the Dual USB3 camera, which uses two USB 3.0 ports to double the transfer rate [11].
Machine vision is thus evolving thanks to advances in computer vision and new technologies and products. What used to be difficult or impossible is now often standard. How does this fit into the Industry 4.0 concept? Machine vision can replace humans because of its ability both to see and to understand what is seen. In the sensory perception of objects, visual perception dominates: roughly 90–95% of information is acquired through this channel [12]. In today’s information age, replacing humans with machine vision systems, or integrating such systems into the production process, is therefore logical and very common. Modern systems can adapt to rapid changes in production and allow a high degree of versatility. They are applied mainly in quality control, where the human factor very often fails or cannot meet the productivity or accuracy requirements of the inspections performed [13].
Machine vision is an essential element of an automation system. No other aspect of the production line captures more information or is more valuable in assessing products and detecting defects, as well as gathering data to guide operations and optimize the productivity of robots and other equipment. Unlike simple sensors, vision sensors generate large amounts of image data, increasing their usefulness in Industry 4.0 environments [14].
As data analytics capabilities advance, the large volumes of data accessed through vision equipment will be used to identify and flag defective products, understand their shortcomings, and enable fast and effective intervention in an Industry 4.0 manufacturing plant [14].
A current trend in image processing is the use of GPUs (Graphics Processing Units), processing units consisting of thousands of relatively simple computing cores on a single chip. This massively parallel architecture suits the computations of biologically inspired, multi-layered “deep” neural networks resembling the human brain [15].
Deep learning has emerged as the core speech, text, and face recognition technology we use in our mobile and wearable devices and is now beginning to be used in many other applications—from medical diagnostics to internet security—to predict patterns and make critical business decisions. The same technology is now making its way into advanced manufacturing processes, quality control, and other decision-based uses [15].
Deep learning essentially teaches machines to do what comes naturally to humans: learn from examples. New inexpensive hardware has enabled the deployment of biologically inspired multi-layered “deep” neural networks that mimic the neural networks in the human brain. This gives manufacturing technology amazing new capabilities to recognize images, discern trends, and make intelligent predictions and decisions. Starting with the basic logic developed during initial training, deep neural networks can continuously improve their performance as they are exposed to new data, such as images or text [16].
Deep learning-based image analysis combines the specificity and flexibility of human visual inspection with the reliability, consistency, and speed of a computer system. Deep learning models can repeatedly and iteratively solve challenging computer vision applications that would be difficult to develop programmatically and that are often impossible to solve using traditional machine vision approaches. Deep learning models can distinguish unacceptable errors while tolerating natural variations in complex patterns. Moreover, they can be easily adapted to new examples without reprogramming their underlying algorithms [17].
Deep learning-based software can perform part localization, part inspection, part classification, and pattern recognition according to more effective judgments than humans or traditional machine vision solutions. The days when humans directly controlled production lines are long gone. Today, machines automate production, assembly and handling tasks. Machine vision systems equipped with precise setup and identification algorithms and guidance capabilities have made it possible to produce compact modern components that could not be assembled manually. On the production line, machine vision systems can reliably and repeatedly inspect hundreds or thousands of parts per minute, far exceeding the inspection capabilities of humans [18].
For decades, machine vision systems have taught computers to perform inspections that detect defects, contaminants, functional deficiencies, and other irregularities in manufactured products. Machine vision excels at quantitatively measuring a structured scene because of its speed, accuracy, and repeatability. A machine vision system built on the right resolution and camera optics can easily inspect the details of objects too small to be seen by the human eye and inspect them with greater reliability and less error [19].
Compared with human visual inspection, deep learning-based vision systems are [18]:
  • More consistent—they work 24 × 7 and maintain the same level of quality on every line, every shift, and every factory;
  • More reliable—they identify every defect outside a set tolerance;
  • Faster—they identify defects in milliseconds, supporting high-speed applications and increasing production throughput.
Compared with conventional machine vision systems, deep learning-based vision systems are [18]:
  • Designed for hard-to-solve applications—they handle complex inspection, classification, and localization tasks that are difficult to solve with classical rule-based algorithms;
  • Easier to configure—applications can be set up quickly, speeding up proof of concept and development;
  • Tolerant of variation—they manage error tolerances for applications that require evaluating acceptable deviations from a reference.
Traditional machine vision systems work reliably with consistent, well-made parts. They work through sequential filtering and rule-based algorithms that are more cost-effective than human inspection. But algorithms become impractical as exceptions and defect databases grow. Certain traditional machine vision inspections, such as final assembly verification, are notoriously difficult to program due to multiple variables that can be difficult for a machine to isolate, such as illumination or color changes [16].
Although machine vision systems tolerate some variability in appearance due to scale, rotation, and distortion, complex surface textures and image quality issues present serious inspection challenges. Machine vision systems struggle to assess the variability and variance between very visually similar parts. Inherent differences or anomalies may or may not be grounds for rejection, depending on how the user understands and classifies them. “Functional” anomalies that affect the usefulness of a part are almost always grounds for rejection, while cosmetic anomalies may not be, depending on the needs and preferences of the manufacturer [20].

2.4. Industry 5.0

Industry 4.0 concepts have become globally accepted in the last decade, with many countries introducing their own strategies under their own names. Research and education have thus influenced the development and implementation of digital and smart technology solutions in various industries and services. This raises the question of why, after around 10 years of Industry 4.0, the fourth industrial revolution, another concept is beginning to emerge: Industry 5.0 [21].
Industry 5.0 can therefore also be seen as an evolution of Industry 4.0, in which, it could be argued, people were forgotten and only the modernization of production, higher efficiency and economic profit were considered. Industry 5.0 turns this view towards people, their comfort and health, and the environment in which they work and live. It is no coincidence that the concepts of Industry 4.0 and 5.0 currently coexist and are likely to do so for some time to come, as they complement and balance each other very well. There are debates about whether Industry 5.0 is a revolution or merely an evolution; the answer is still unclear and depends on the point of view [22].
The concept of Industry 4.0 is a typical representative of the linear model, as it is based on a paradigm oriented towards economic growth regardless of its energy intensity and use of resources. Industry 4.0 is mainly a technological paradigm based on the emergence of cyber-physical systems and the convergence of information and operational technologies. However, this paradigm addresses neither social tensions nor the impact of the climate crisis.

3. An Overview of Published Articles

The article by Jonas Conrad et al. (contribution 1) is devoted to the realm of additive manufacturing (AM), also known as 3D printing, where sorting a large number of parts swiftly and precisely is crucial. However, traditional recognition methods frequently fall short, creating a bottleneck in the process. This article introduces a novel workflow that leverages the power of neural networks to address this challenge. The proposed workflow hinges on the generation of synthetic training data. This data is meticulously crafted from computer-aided design (CAD) models, which act as digital blueprints for the AM parts. By feeding this synthetic data into a neural network, the researchers essentially train the network to recognize the distinctive characteristics of various AM parts. Once trained, the network can then be employed to classify real AM parts with remarkable accuracy, eliminating the need for any modifications to the parts themselves. To validate the efficacy of their proposed workflow, the researchers conducted an industrial case study. The results were promising, demonstrating that the neural network achieved a high degree of accuracy in classifying AM parts. This paves the way for the potential integration of this workflow into real-world AM production lines, streamlining the sorting process and enhancing overall efficiency. In essence, this article presents a groundbreaking approach to AM part recognition using neural networks. By harnessing the power of synthetic training data, the proposed workflow offers a solution to a longstanding challenge in the AM industry, potentially leading to significant improvements in sorting efficiency and production throughput.
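The workflow summarized above (train on synthetic data derived from CAD models, then classify real parts) can be sketched in miniature. In this hypothetical sketch, a nearest-centroid classifier over hand-made feature vectors stands in for the neural network and the rendered images; the part names and feature values are invented for illustration and do not come from the study.

```python
def train_centroids(synthetic_samples):
    """synthetic_samples: dict mapping part label -> list of feature vectors
    (stand-ins for images rendered from CAD models)."""
    centroids = {}
    for label, vectors in synthetic_samples.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def classify(centroids, vector):
    """Assign a 'real' part's feature vector to the nearest learned centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], vector))

# Toy 'synthetic training data': two hypothetical part classes, 2-D features.
synthetic = {
    "bracket": [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]],
    "gear":    [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]],
}
model = train_centroids(synthetic)
print(classify(model, [0.95, 0.15]))  # a 'real' part resembling a bracket
```

The essential property the article exploits is the same: a model fitted purely on synthetic examples can generalize to real, unmodified parts.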
Shiqi Yue and Yuanwu Shi (contribution 2) present an article about a new method for controlling manipulators, which are robotic arms used in various industrial applications. The article discusses the challenges associated with maintaining manipulator stability during operation, as instability can lead to safety hazards and decreased performance. To address this challenge, the authors propose a system that leverages the combined strengths of two machine learning techniques: Long Short-Term Memory (LSTM) networks and XGBoost. LSTM networks are adept at identifying patterns in sequential data, making them well-suited for analyzing the continuous stream of vibration signals collected from the manipulator. XGBoost, on the other hand, is a powerful tool for making accurate predictions based on various data inputs. By combining these techniques, the researchers create a model that can effectively learn the complex relationships between the vibration signal features and the manipulator’s stability state. The model is trained on a dataset of vibration signals collected under various operating conditions, allowing it to generalize to unseen scenarios. The article also introduces a scoring system that translates the model’s predictions into a user-friendly stability score. This score provides a clear indication of the manipulator’s current stability level, enabling human operators to make informed decisions about its control. The effectiveness of the proposed method has been validated through extensive experiments. The results demonstrate that the LSTM-XGBoost model achieves superior performance compared to baseline methods in predicting manipulator stability. This paves the way for the potential integration of this approach into real-world manipulator control systems, enhancing safety, efficiency, and reliability in industrial operations. 
The article presents a significant contribution to the field of manipulator control by introducing a novel and effective method for predicting stability using machine learning. The combination of LSTM networks and XGBoost, along with the proposed scoring system, offers a promising solution for addressing this critical challenge in industrial robotics.
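The shape of the pipeline described above can be sketched as follows. Windowed statistics stand in for the LSTM feature extractor, and a simple linear penalty stands in for the trained XGBoost regressor; the limits and the 0–100 scoring scale are illustrative assumptions, not values from the study.

```python
from statistics import mean, stdev

def vibration_features(signal, window=50):
    """Summarize the most recent window of vibration samples.
    A hand-rolled stand-in for the LSTM feature extractor in the article."""
    w = signal[-window:]
    return {"mean": mean(w), "std": stdev(w), "peak": max(abs(x) for x in w)}

def stability_score(feats, std_limit=0.5, peak_limit=2.0):
    """Map features to a 0-100 stability score (higher = more stable).
    The linear penalty stands in for the trained XGBoost regressor; the
    limits are illustrative assumptions."""
    penalty = 50 * min(feats["std"] / std_limit, 1.0) \
            + 50 * min(feats["peak"] / peak_limit, 1.0)
    return round(100 - penalty, 1)

calm  = [0.05 * ((i % 3) - 1) for i in range(200)]    # low vibration
shaky = [1.5 * ((i % 2) * 2 - 1) for i in range(200)]  # strong oscillation
print(stability_score(vibration_features(calm)))   # high score
print(stability_score(vibration_features(shaky)))  # low score
```

The user-friendly score is the key interface idea: operators see one number per manipulator rather than raw vibration traces.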
Research by Chunru Cheng et al. (contribution 3) delves into predicting rutting development in pavements overlaid with flexible materials, employing artificial neural networks (ANNs) as the tool of choice. Pavement maintenance and repair are crucial to safe and efficient transportation infrastructure, and accurate prediction of rutting development plays a pivotal role in optimizing these processes. The authors leverage data compiled from the Canadian Long-Term Pavement Performance (LTPP) program to establish a robust model. Their analysis reveals that many factors significantly influence rutting development, including traffic volume and composition, prevailing climatic conditions, the inherent characteristics of the pavement materials themselves, and any maintenance interventions that have been undertaken. By incorporating these diverse factors into the ANN model, the researchers enable it to discern the complex, often nonlinear, relationships between these variables and the observed rutting development. This empowers transportation authorities and maintenance crews to make more informed decisions regarding pavement maintenance strategies, ultimately contributing to the longevity and overall performance of transportation infrastructure. In general, this article presents a valuable contribution to the field of pavement maintenance by proposing a novel and effective method for predicting rutting development using artificial neural networks.
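The kind of mapping such an ANN learns (traffic, climate, material and maintenance inputs to a rut-depth estimate) can be sketched as a forward pass through a tiny fixed-weight network. The architecture, weights, input scaling and baseline value below are invented purely for illustration; the study's actual model is trained on LTPP data.

```python
import math

def forward(x, W1, b1, W2, b2):
    """One hidden tanh layer, linear output: the generic shape of a small
    regression ANN such as the one used for rutting prediction."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Hypothetical normalized inputs: [traffic, climate severity, material
# quality, maintenance recency]; weights are illustrative, not fitted.
W1 = [[0.8, 0.5, -0.6, -0.4], [0.3, 0.7, -0.2, -0.5]]
b1 = [0.0, 0.1]
W2 = [1.2, 0.9]
b2 = 2.0  # hypothetical baseline rut depth, mm

light_traffic = forward([0.1, 0.3, 0.8, 0.9], W1, b1, W2, b2)
heavy_traffic = forward([0.9, 0.3, 0.8, 0.9], W1, b1, W2, b2)
print(light_traffic, heavy_traffic)  # heavier traffic -> deeper predicted rut
```

Training replaces these hand-set weights with values fitted to observed rutting, which is where the nonlinear interactions between factors are captured.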
A study by Hyeeun Ku and Minhyeok Lee (contribution 4) presents TextControlGAN, a novel text-to-image synthesis model based on generative adversarial networks (GANs). TextControlGAN addresses the limitations of existing models that utilize the conditional GAN (cGAN) framework. It leverages the ControlGAN framework, incorporating an independent regressor trained with data augmentation (DA) techniques to enhance its text-conditioning capabilities. Evaluations were conducted using a bird image dataset containing roughly 30,000 image-text pairs. Compared to the cGAN-based GAN-INT-CLS model, TextControlGAN achieved a significant improvement of 17.6% in Inception Score (IS) and a substantial reduction of 36.6% in Fréchet Inception Distance (FID). Qualitative comparisons revealed that images generated by TextControlGAN more faithfully adhered to the provided textual descriptions, unlike alternative models that sometimes failed to accurately capture the context. The key to TextControlGAN’s success lies in its unique approach. By employing an independent regressor trained with DA techniques, the model effectively learns the intricate relationship between text and image features. This learning method, applicable to various model structures, paves the way for further exploration and development in the field of text-to-image synthesis.
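The FID metric cited above compares feature statistics of real and generated images: lower is better, as in TextControlGAN's 36.6% reduction. Under a simplifying diagonal-covariance assumption (adopted here only so the example stays dependency-free; real FID uses full covariance matrices of Inception-v3 features), the distance can be sketched as:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
    A simplification for illustration; real FID uses full covariances
    of Inception-v3 features."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical statistics give a distance of 0; any mismatch increases it.
print(fid_diagonal([0.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0]))  # 0.0
print(fid_diagonal([0.0, 1.0], [1.0, 2.0], [0.5, 1.0], [1.0, 1.0]))
```

Inception Score, the other metric reported, instead measures the sharpness and diversity of the generated images alone, without reference statistics.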
Bo Liang et al. (contribution 5) present an article about a study on the relationship between light environment parameters and driver pupil diameter. It discusses using a convolutional neural network (CNN) to analyze the data. The study found that the way the lights are laid out in the tunnel has the biggest effect on pupil diameter. Other factors that affect pupil diameter include the color temperature of the light source, the height of the reflective coating on the sidewalls, and the color of the reflective coating on the sidewalls. The researchers also looked at how these factors interact with each other. For example, they found that the effect of color temperature was more pronounced when the lights were laid out in a certain way. This suggests that the design of lighting systems in highway tunnels needs to take into account not only the individual parameters of the lighting system but also how these parameters interact with each other. The findings of this study could have important implications for the design of lighting systems in highway tunnels. By taking into account the factors that affect pupil diameter, lighting systems can be designed to improve driver visibility and comfort, which could help reduce the risk of accidents. Overall, this study provides valuable insights into the relationship between light environment parameters and driver pupil diameter. The findings of this study could be used to improve the design of lighting systems in highway tunnels in order to improve driver visibility and comfort.
The article by Rafaela Carvalho et al. (contribution 6) presents a deep learning-powered system for real-time digital meter reading on edge devices. It discusses the challenges of manually reading meters, which can be time-consuming, error-prone, and labor-intensive. The article also highlights the benefits of an automated system, such as improved accuracy, efficiency, and cost savings. The system uses a deep learning model to extract the meter reading from a captured image. Deep learning is a type of artificial intelligence that enables computers to learn from data without being explicitly programmed. In this case, the deep learning model is trained on a large dataset of images of meters with different readings. The model is then able to identify the meter reading in a new image and transmit it to a central system for further processing and billing. The system can be deployed on a mobile device, such as a smartphone or tablet. This makes it easy to use and portable, and it can be used in a variety of settings, such as residential homes, commercial buildings, and industrial facilities. The system is also accurate and efficient, with a high success rate in recognizing meter readings. This deep-learning-powered system has the potential to revolutionize the way meter readings are collected. It offers a number of advantages over traditional manual meter reading methods, including improved accuracy, efficiency, and cost savings.
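The back end of such a meter-reading system can be sketched with a hypothetical post-processing step: per-digit classifier outputs are assembled into one reading and sanity-checked before billing. The confidence threshold, the (digit, confidence) interface and the monotonicity check are illustrative assumptions, not details from the article.

```python
def assemble_reading(digit_predictions, min_confidence=0.8):
    """Combine per-digit classifier outputs into one meter reading.

    digit_predictions: list of (digit, confidence) pairs, one per dial
    position, as a hypothetical recognition model might emit them.
    Returns the integer reading, or None if any digit is too uncertain
    and the frame should be re-captured.
    """
    if any(conf < min_confidence for _, conf in digit_predictions):
        return None
    return int("".join(str(d) for d, _ in digit_predictions))

def plausible(new_reading, previous_reading):
    """Cheap sanity check before billing: consumption never decreases."""
    return new_reading is not None and new_reading >= previous_reading

preds = [(0, 0.99), (4, 0.97), (7, 0.91), (3, 0.95), (8, 0.88)]
reading = assemble_reading(preds)
print(reading, plausible(reading, previous_reading=4721))  # 4738 True
```

Checks like these are what let an edge device reject bad frames locally instead of sending erroneous readings to the central billing system.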

4. Conclusions

This collection of articles spans a broad spectrum of innovative approaches and methodologies in the realms of additive manufacturing, robotics, pavement maintenance, text-to-image synthesis, and digital meter reading, showcasing the dynamic interplay between cutting-edge technology and practical applications. Each study underscores the transformative potential of integrating machine learning, deep learning, and artificial neural networks into various industries, highlighting a shared commitment to overcoming traditional challenges and advancing efficiency, accuracy, and safety.
The groundbreaking work of Jonas Conrad et al. on additive manufacturing part recognition through neural networks represents a significant leap forward in production line efficiency, addressing the bottleneck of part sorting with a novel, synthetic data-driven approach. Similarly, Shiqi Yue and Yuanwu Shi’s exploration of manipulator stability leverages LSTM networks and XGBoost to enhance robotic arm reliability and safety, marking a pivotal advancement in industrial robotics. Chunru Cheng and colleagues’ application of ANNs for predicting pavement rutting development offers a strategic tool for optimizing maintenance and ensuring the longevity of transportation infrastructure. The innovation continues with Hyeeun Ku and Minhyeok Lee’s TextControlGAN, which sets a new benchmark for text-to-image synthesis, pushing the boundaries of creative and accurate visual representation from textual descriptions. Bo Liang et al.’s study on the impact of light environment parameters on driver safety brings to light the critical influence of tunnel lighting design on road safety, advocating for smarter, data-driven lighting solutions. Lastly, Rafaela Carvalho and her team’s development of a deep learning-powered system for digital meter reading on edge devices exemplifies the practical application of AI in streamlining and improving the accuracy of utility management.
Across these diverse studies, a common theme emerges: the potential of machine learning and neural network technologies to revolutionize traditional practices across various sectors. From enhancing manufacturing processes and robotic control to improving public infrastructure maintenance, safety, and utility management, these contributions reflect a broader trend towards the digitization and smart automation of industry. Furthermore, the studies not only offer solutions to existing challenges but also pave the way for future research and development, inviting exploration into new applications and advancements.
In closing, this collection emphasizes the importance of interdisciplinary research and the adoption of new technologies to address complex challenges. The articles, through their innovative approaches and findings, not only contribute significantly to their respective fields but also inspire ongoing exploration and adaptation in the ever-evolving landscape of technology and industry.

Author Contributions

Conceptualization, E.K., O.H. and D.R.; methodology, E.K.; investigation, O.H.; resources, E.K., O.H. and D.R.; writing—original draft preparation, E.K. and O.H.; writing—review and editing, E.K. and O.H.; supervision, D.R.; funding acquisition, E.K. and D.R. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by the Scientific Grant Agency of the Ministry of Education, Research and Sport of the Slovak Republic No. 1/0107/22 and No. 1/0637/23.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

  • Conrad, J.; Rodriguez, S.; Omidvarkarjan, D.; Ferchow, J.; Meboldt, M. Recognition of Additive Manufacturing Parts Based on Neural Networks and Synthetic Training Data: A Generalized End-to-End Workflow. Appl. Sci. 2023, 13, 12316.
  • Yue, S.; Shi, Y. Manipulator Smooth Control Method Based on LSTM-XGboost and Its Optimization Model Construction. Appl. Sci. 2023, 13, 8994.
  • Cheng, C.; Ye, C.; Yang, H.; Wang, L. Predicting Rutting Development of Pavement with Flexible Overlay Using Artificial Neural Network. Appl. Sci. 2023, 13, 7064.
  • Ku, H.; Lee, M. TextControlGAN: Text-to-Image Synthesis with Controllable Generative Adversarial Networks. Appl. Sci. 2023, 13, 5098.
  • Liang, B.; Xu, M.; Li, Z.; Niu, J. Sensitivity Study of Highway Tunnel Light Environment Parameters Based on Pupil Change Experiments and CNN Judging Method. Appl. Sci. 2023, 13, 3160.
  • Carvalho, R.; Melo, J.; Graça, R.; Santos, G.; Vasconcelos, M.J.M. Deep Learning-Powered System for Real-Time Digital Meter Reading on Edge Devices. Appl. Sci. 2023, 13, 2315.

References


  1. Popjaková, D.; Mintálová, T. Industry 4.0, What Preceded It and What Characterises It—Geographical Context. Acta Geogr. Uni. Com. 2019, 63, 173–192. (In Slovak) [Google Scholar]
  2. Ministry of Economy of the Slovak Republic. Intelligent Industry Concept for Slovakia; Government of the Slovak Republic: Bratislava, Slovakia, 2016. (In Slovak)
  3. Liserre, M.; Sauter, T.; Hung, J.Y. Future Energy Systems: Integrating Renewable Energy Sources into the Smart Power Grid through Industrial Electronics. IEEE Ind. Electron. Mag. 2010, 4, 18–37. [Google Scholar] [CrossRef]
  4. Naboni, R.; Paoletti, I. The Third Industrial Revolution. In Advanced Customization in Architectural Design and Construction; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar] [CrossRef]
  5. Kagermann, H.; Wahlster, W.; Helbig, J. Final Report of the Industrie 4.0 Working Group; Forschungsunion Wirtschaft und Wissenschaft, Acatech: Munich, Germany, 2013; pp. 1–84. [Google Scholar]
  6. Jensen, M.C. The Modern Industrial Revolution, Exit, and the Failure of Internal Control Systems. J. Financ. 1993, 48, 831–880. [Google Scholar] [CrossRef]
  7. Ministry of Industry and Trade. Industry 4.0 Initiative; Ministry of Industry and Trade of Czech Republic: Prague, Czech Republic, 2015. (In Czech)
  8. Ionescu, L. Big Data, Blockchain, and Artificial Intelligence in Cloud-Based Accounting Information Systems. Anal. Metaphys. 2019, 18, 44–49. [Google Scholar] [CrossRef]
  9. Sundaram, K.; Nandini, N. Artificial Intelligence in the Shop Floor, Envisioning the Future of Intelligent Automation and Its Impact on Manufacturing; White paper; Frost & Sullivan: San Antonio, TX, USA, 2018. [Google Scholar]
  10. Rusakova, E.P.; Inshakova, A.O. Industrial and Manufacturing Engineering in Digital Legal Proceedings in the Asia-Pacific Region: A New Level of Quality Based on Data, Blockchain and Ai. Int. J. Qual. Res. 2021, 15, 273–290. [Google Scholar] [CrossRef]
  11. H&D International Group. Strojové Vidění a Průmysl 4.0 Jako Cesta Budoucnosti [Machine Vision and Industry 4.0 as the Way of the Future]. Available online: (accessed on 15 May 2020). (In Czech)
  12. Démuth, A. Teórie Percepcie [Theories of Perception]; Filozofická fakulta Trnavskej Univerzity v Trnave: Trnava, Slovakia, 2013; ISBN 9788080825799. (In Slovak) [Google Scholar]
  13. Batchelor, B.G. Machine Vision Handbook; Springer: London, UK, 2012. [Google Scholar]
  14. Cognex Corporation. White Paper: Industry 4.0 and Machine Vision. Available online: (accessed on 22 August 2020).
  15. Vše o průmyslu. Hluboké Učení + Strojové Vidění = Kontrola Kvality Nové Generace [Deep Learning + Machine Vision = Next-Generation Quality Control]. Available online: (accessed on 10 January 2024). (In Czech)
  16. Coffey, V.C. Machine Vision: The Eyes of Industry 4.0. Opt. Photonics News 2018, 29, 42. [Google Scholar] [CrossRef]
  17. Kovilpillai, J.J.A.; Jayanthy, S. An Optimized Deep Learning Approach to Detect and Classify Defective Tiles in Production Line for Efficient Industrial Quality Control. Neural Comput. Appl. 2023, 35, 11089–11108. [Google Scholar] [CrossRef]
  18. Cognex. Deep Learning for Factory Automation. Available online: (accessed on 22 August 2020).
  19. Javaid, M.; Haleem, A.; Singh, R.P.; Rab, S.; Suman, R. Exploring Impact and Features of Machine Vision for Progressive Industry 4.0 Culture. Sens. Int. 2022, 3, 100132. [Google Scholar] [CrossRef]
  20. Müller, J.M. Contributions of Industry 4.0 to Quality Management—A SCOR Perspective. IFAC-Pap. 2019, 52, 1236–1241. [Google Scholar] [CrossRef]
  21. Singh, T.; Singh, D.; Singh, C.D.; Singh, K. Industry 5.0—Towards a Sustainable, Human-Centric and Resilient European Industry; European Union: Luxembourg, 2023. [Google Scholar]
  22. Müller, J. Enabling Technologies for Industry 5.0: Results of a Workshop with Europe’s Technology Leaders; European Commission: Luxembourg, 2020. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
