Article

Designing a System Architecture for Dynamic Data Collection as a Foundation for Knowledge Modeling in Industry

Josef Ressel Centre for Knowledge-Assisted Visual Analytics for Industrial Manufacturing Data, St. Pölten University of Applied Sciences, 3100 St. Pölten, Austria
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(9), 5081; https://doi.org/10.3390/app15095081
Submission received: 13 March 2025 / Revised: 17 April 2025 / Accepted: 29 April 2025 / Published: 2 May 2025

Abstract

This study develops and implements a scalable system architecture for dynamic data acquisition and knowledge modeling in industrial contexts. The objective is to efficiently process large datasets to support decision-making and process optimization within Industry 4.0. The architecture integrates modern technologies, such as the ibaPDA system for data acquisition, and employs communication standards like Modbus TCP and OPC UA to ensure broad compatibility with diverse equipment. In addition, it leverages native protocols offered by certain controllers, enabling direct data exchange without the need for conversion layers. A developed prototype demonstrates the practical applicability of the architecture, tested in a real industrial environment with a focus on processing speed, data integrity, and system reliability. The results indicate that the architecture not only meets the requirements for dynamic data acquisition but also enhances knowledge modeling. This leads to more efficient process control and opens new perspectives for managing and analyzing big data in production environments. The study emphasizes the importance of an integrated development approach and highlights the need for interdisciplinary collaboration to address operational challenges. Future extensions may include the implementation of Python interfaces and machine learning algorithms for data simulation, enabling more accurate predictive models. These findings provide valuable insights for industry, software development, data science, and academia, helping to tackle the challenges of Industry 4.0 and drive innovation forward.

1. Introduction

In the data-driven economy of Industry 4.0, the ability to efficiently collect, analyze, and utilize data is crucial for competitiveness and innovation [1,2]. Modern knowledge management approaches incorporate human–machine interaction and data protection, playing a key role in industrial applications [3]. The transition to a Knowledge Society 4.0 underscores the importance of data-driven value creation, with seven core knowledge management tasks: data collection, processing, knowledge generation, distribution, utilization, fostering learning organizations, and adapting to technological advancements [4]. To better understand this evolution, Figure 1 illustrates the four stages of industrial transformation—from mechanization to digitalization. While not central to the technical contributions of this work, it serves as a contextual reference point to position the architectural challenges of Industry 4.0 within the broader historical landscape.
Efficient industrial data analysis requires integrated systems, such as the reference framework developed in the AKKORD research project [5]. However, the growing heterogeneity of data sources and the demand for real-time processing pose significant challenges. Addressing these issues requires a well-defined system architecture, enabling interdisciplinary collaboration, scalable integration, and adaptive knowledge modeling [6].
The fourth industrial revolution (Industry 4.0) has transformed traditional manufacturing by introducing cyber-physical systems, connectivity, and autonomous decision-making [7,8,9]. As industrial systems increasingly rely on data-driven control and predictive analytics, the ability to continuously acquire, process, and contextualize data from heterogeneous sources becomes a core challenge [10]. This includes integrating legacy devices [11], new smart sensors, and distributed control systems—each with different protocols, semantics, and update cycles.
However, existing architectural frameworks often lack the flexibility, semantic interoperability, and real-time processing capabilities required to unify such diverse industrial environments [12]. Most reference models focus either on high-level integration (e.g., RAMI 4.0) or low-level communication standards (e.g., OPC UA), but fail to provide a scalable, modular approach that supports continuous knowledge modeling alongside dynamic data acquisition.
This study addresses this critical gap by proposing a modular architecture that combines native protocol integration, semantic abstraction, and edge-cloud processing. It is evaluated in a real-world industrial environment, focusing on the system’s ability to collect, transform, and utilize live data for knowledge-based applications.
Industry 4.0 transformation is revolutionizing production processes through IoT and cyber-physical systems (CPS), facilitating automation, data exchange, and decentralized control structures [7,8,9]. This shift improves efficiency, process transparency, and value chain integration, creating new opportunities for data-driven decision-making [13]. However, Industry 4.0 is not only a technological evolution—it requires fundamental changes in management structures and a shift toward intelligent, self-optimizing production systems [14]. As shown in Figure 1, the evolution from mechanical power to interconnected digital systems highlights the complexity of modern industrial requirements.
To address these emerging challenges, this research develops a system architecture for dynamic data acquisition and knowledge modeling in industrial environments. The aim is to create a scalable and flexible framework that enables the seamless integration of heterogeneous data sources while ensuring real-time analytics and interoperability. By introducing new methods for real-time data integration and analysis, this study aims to provide valuable information for industry, software development, and data science, supporting the advancement of Industry 4.0.
This study aims to contribute to system architecture, dynamic data acquisition, and knowledge modeling by addressing the following key research questions:
  • How can a system architecture be designed to efficiently support dynamic data collection for knowledge modeling in industrial environments? This question focuses on developing a scalable, flexible, and industry-compatible framework for real-time data integration.
  • What methods and tools enable the creation and continuous updating of the knowledge model for multiple users within the proposed architecture? The focus here is on designing user-centric and adaptive knowledge modeling techniques.
  • How can semantic conflicts between heterogeneous data sources be resolved within the architecture? Establishing data harmonization techniques ensures data integrity and usability.
This research is based on a comprehensive review of the literature that analyzes current system architectures. It then explores innovative approaches to data integration and knowledge modeling, leading to the development of a prototype based on a KAVA model. Knowledge-Assisted Visual Analytics (KAVA) [15,16] combines visual analytics with human expertise, improving data interpretation and real-time decision-making [17,18]. An example of a KAVA system is gaitXplorer, which classifies gait patterns in patients with cerebral palsy using machine learning [19].
By answering these research questions, this study makes a significant contribution to both research and industrial practice, offering innovative solutions for real-time data processing, the integration of heterogeneous data sources, and the improvement of knowledge modeling.
This work is motivated by a central question in smart manufacturing: how to design a system that supports real-time, high-frequency data acquisition across legacy and modern sources, while enabling adaptive knowledge modeling in production environments. These challenges are formally characterized in Section 4, which defines the technical and operational boundaries for system design.

2. Related Work & Background

The term Industry 4.0 extends beyond the introduction of new technologies and marks the beginning of a digitally connected production landscape [20]. This fourth industrial revolution is based on the integration of Cyber-Physical Systems (CPS), the Internet of Things (IoT), big data, and Cloud Computing into manufacturing processes, enabling intelligent and flexible production [20].
The implementation of Industry 4.0 concepts facilitates deep connectivity between machines, systems, products, and people. Embedded systems and smart objects can independently interpret and process information, allowing them to react to data in real time [20]. This human-to-machine and machine-to-machine communication is enabled by technologies such as RFID and NFC, which ensure precise identification and interaction within the production system [20].
A key aspect is horizontal and vertical integration, which allows seamless information exchange along the entire value chain. As shown in Figure 2, horizontal integration connects IT and production systems across multiple locations, while vertical integration enables direct access to data from various production levels [20].
Despite its advantages, Industry 4.0 presents challenges, particularly in resolving semantic conflicts between heterogeneous data sources and ensuring efficient resource utilization. A flexible system architecture can provide tailored solutions for various industrial applications by ensuring precise data acquisition and processing [22].

2.1. System Architectures for Industry 4.0

The fourth industrial revolution (Industry 4.0) signifies the digital integration of traditional manufacturing processes, leading to horizontal and vertical value chain integration [21]. This transformation shifts control from centralized to decentralized structures, impacting various sectors along the value chain. Industry 4.0 architectures leverage key technologies such as Augmented Reality (AR), the Industrial Internet of Things (IIoT), digital twins, autonomous robots, simulations, cybersecurity, and predictive maintenance [23]. These advancements drive efficiency, flexibility, and automation in industrial environments.

2.1.1. Reference Architectures for Industry 4.0

Effective and intelligent data processing is fundamental for Industry 4.0. Reference architectures like RAMI 4.0 and IIRA provide structured frameworks for data integration, automation, and decision-making [24].
  • RAMI 4.0 (Reference Architecture Model for Industry 4.0) offers a three-dimensional framework that structures the product lifecycle, together with enterprise hierarchy levels and integration layers [25]. It facilitates Industry 4.0 project planning and implementation, focusing on interoperability, standardization, and security.
  • IIRA (Industrial Internet Reference Architecture) complements RAMI 4.0 by emphasizing IT and OT system integration within IIoT environments. It provides guidelines for architecture decisions, security strategies, and scalability [26].

2.1.2. Core Technologies for Industry 4.0

The Internet of Things (IoT) connects machines, systems, and products, enabling real-time data analysis. Technologies like digital twins support process optimization, predictive maintenance, and virtual prototyping [27,28]. Industrial IoT (IIoT) enhances production workflows and value chain management, where machine learning and AI play a pivotal role [29].
Edge computing shifts data processing closer to its source, reducing latency and offloading central computing resources [30]. Key benefits include the following:
  • Real-time data processing in industrial settings;
  • Integration with programmable logic controllers (PLCs);
  • Optimization of industrial networks through blockchain and security measures [31].
Cloud technologies provide scalable computing and storage resources for smart manufacturing systems, enabling data analysis and remote maintenance [32]. Applications include the following:
  • Cloud-based simulations for production processes;
  • E-learning systems for Industry 4.0 workforce training;
  • Digital design platforms for collaborative industrial environments [33].

2.1.3. Communication Protocols and Interoperability

Message Queuing Telemetry Transport (MQTT), illustrated in Figure 3, is a lightweight publish–subscribe protocol for IoT communication, facilitating real-time data transmission between industrial devices. It is used for robot control, remote monitoring, and energy-efficient systems [34]. Blockchain and smart contract integrations enhance MQTT security [35].
OPC UA (Open Platform Communications Unified Architecture) is a standardized communication protocol for industrial automation, ensuring interoperability between machines, controllers, and IT systems [37]. It supports real-time communication (OPC UA TSN), security, and integration across diverse production environments [38].
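To illustrate how such a standardized interface is consumed in practice, the following minimal Python sketch reads a single value from an OPC UA server using the python-opcua package. The endpoint URL and node identifier are hypothetical placeholders, not values from the systems discussed in this paper.

    # Minimal sketch: read one value from an OPC UA server (python-opcua).
    from opcua import Client

    client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical endpoint
    client.connect()
    try:
        # Node ids depend entirely on the server's address space.
        node = client.get_node("ns=2;s=Line1.CoatingThickness")
        print("current value:", node.get_value())
    finally:
        client.disconnect()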

2.1.4. Data Analytics and Simulation of Production Processes

Big Data Analytics and Artificial Intelligence: The integration of big data and IIoT facilitates intelligent decision-making in industrial applications [39]. Deep learning models enhance predictive maintenance and production efficiency [40]. However, challenges remain in data extraction, visualization, and processing complexity [41].
Simulation and Digital Twins: The simulation of industrial processes using digital twins enhances production planning, quality control, and process optimization [42]. Applications include the following:
  • Virtual 100% inspection for waste reduction [43].
  • Backward simulation for production scheduling in semiconductor manufacturing [44].

2.1.5. Maturity Models for Industry 4.0

Maturity models help companies assess their Industry 4.0 readiness and develop digital transformation strategies [45].
  • The RAMI 4.0 Maturity Index evaluates digital transformation progress across organization, technology, and process maturity [46].
  • The Impuls Maturity Model is widely used to assess Industry 4.0 implementation levels [47].
  • The comparative analysis of maturity models reveals a lack of universal standards [48].

2.1.6. Modular Approaches and Edge Computing

Integrating edge computing enables the dynamic orchestration of information flows and the flexible configuration of computing functions across various hardware platforms, both of which are essential for Industry 4.0 [49]. A data-centric, distributed architecture supports the vertical and horizontal digitization of the value chain, also taking into account existing technologies [50].

2.1.7. AI-Driven Methods

Ontology-based systems facilitate the contextualization of data with domain knowledge, which is crucial for advanced inference and knowledge generation [51]. AI-driven architectures help consolidate empirical knowledge gleaned from real-world planning challenges and transform it into intelligent structures [52].

2.1.8. Integration of Heterogeneous Data Sources

The interoperability and flexibility of Cyber-Physical Production Systems (CPPS) and digital twins are critical for meeting Industry 4.0 requirements, as they enable multi-agent collaboration [53]. A generic system architecture promotes cross-company data sharing and real-time capabilities, making it broadly applicable to a wide range of Industry 4.0 use cases [54].

2.1.9. Summary

System architectures for Industry 4.0 are critical for integrating intelligent manufacturing processes, with reference frameworks like RAMI 4.0 and IIRA guiding the path toward digitalization and automation. The synergy of IoT, Edge Computing, Cloud Technologies, and AI not only streamlines real-time data acquisition but also enables more efficient and flexible production workflows. Communication protocols such as MQTT and OPC UA ensure interoperability across diverse industrial settings, while big data analytics and digital twins further optimize production by enabling advanced process simulations and predictive maintenance. Recent advances in modular architectures, cyber-physical production systems (CPPS), and ontology-based knowledge systems have also underlined the potential for adaptive, domain-specific knowledge modeling. Choosing the appropriate maturity model (e.g., RAMI 4.0 Maturity Index, Impuls Model) helps organizations systematically progress along their digital transformation journey. At the same time, cross-company data sharing and real-time capabilities play a pivotal role in supporting increasingly interconnected manufacturing environments.
Building on these established frameworks and technologies, the following sections detail our novel approach to seamlessly integrating heterogeneous data sources with knowledge modeling, ensuring real-time capabilities for practical Industry 4.0 deployments. This comprehensive approach aims to fill current gaps in integrating legacy systems, edge-based analytics, and continuous learning processes within a single scalable architecture.

2.2. Dynamic Data Acquisition

In Industry 4.0, dynamic data acquisition plays a crucial role as it forms the foundation for automation and data exchange in manufacturing. Real-time data acquisition allows devices, machines, and products to be interconnected via the Internet of Things (IoT), enabling seamless data exchange [55]. Additive manufacturing, artificial intelligence (AI)-driven data analysis, and machine vision are key innovations used to optimize these processes. The quality of research and development in this field is highly dependent on data quality [55].
The implementation of a Data Distribution Service (DDS) cloud platform demonstrates how the integration of IoT, big data, and cloud computing can address Industry 4.0’s dynamic requirements. Sensor data are collected by a microcontroller, aggregated by an edge device (such as a Raspberry Pi), and transmitted via a DDS application to the cloud, where they are stored in a MySQL database [56].
A cloud-enabled smart data acquisition system at the shop-floor level enables real-time product tracking along the production chain and supports predictive maintenance. This system follows a multi-layer architecture:
  • Sensor layer: Collects data directly from machines or sensors.
  • Edge computing layer: Pre-processes data to reduce latency and network congestion.
  • Cloud layer: Stores, processes, and analyzes data using big data technologies and AI.
  • Application layer: Provides processed data to interfaces and decision-support tools for users.
This system enables real-time monitoring of conventional production machines while adapting to customer demands [57].
The integration of Lean Manufacturing and Industry 4.0 technologies, known as Lean 4.0, helps manufacturers optimize production processes and increase efficiency. A digital twin approach for automated cycle time acquisition and Yamazumi analysis provides real-time insights into manufacturing processes, identifying potential bottlenecks [58].
Data acquisition is a core aspect of Industry 4.0, and processing collected data through cyber-physical systems (CPS), machine learning, virtual environments, IIoT, and augmented reality is essential for production efficiency [59].
In large-scale smart manufacturing facilities, wireless sensor networks (WSNs) are widely used to meet dynamic production demands. A unified data description and management framework enables the identification of unknown data types and provides interfaces for accessing stored data [60].
Open-source data acquisition solutions for the Industrial Internet of Things (IIoT) offer cost-effective and flexible alternatives to commercial software, fostering IIoT development. Data can be collected via industrial protocols, converted into OPC UA or MQTT formats, and stored in time-series databases [61].

2.2.1. Comparison with Existing Architectures and Systems

To contextualize the proposed system architecture, we reviewed existing industrial solutions and reference frameworks such as RAMI 4.0, the Industrial Internet Reference Architecture (IIRA), and vendor-specific systems by Siemens, Beckhoff, and B&R. While these systems offer robust integration and automation capabilities, many rely on proprietary ecosystems or are primarily geared toward either high-level enterprise integration or low-level control tasks. As such, they often lack the flexibility for integrating heterogeneous data sources—especially when combining legacy controllers, smart sensors, and third-party analytics platforms.
During our prototyping phase, we evaluated multiple approaches and identified the ibaPDA system as a pragmatic and extensible foundation. The iba platform provides native support for protocols like Profibus, Modbus TCP, and OPC UA, and allows the seamless integration of live data streams through external interfaces (e.g., UDP, OPC UA Server). However, our implementation goes beyond the default iba system by introducing semantic abstraction layers, modular edge-to-cloud orchestration, and continuous knowledge modeling using AI-enabled extensions.
Therefore, our contribution lies not in developing a new vendor system but in architecting an interoperable, lightweight, and scalable approach tailored to the demands of real-time data acquisition and domain-centric knowledge modeling. This bridges the gap between existing frameworks and practical industrial needs in heterogeneous production environments.

2.2.2. Summary

Edge computing leverages native and standardized communication protocols such as OPC UA and MQTT to ensure efficient data processing and transmission. The use of native protocols can significantly enhance performance, particularly in reducing latency and improving real-time responsiveness [62].
For legacy systems that do not support modern protocols, edge computing gateways can act as protocol converters, enabling their integration into modern IIoT architectures [63]. These gateways allow older industrial systems to be part of a connected and intelligent network without requiring complete hardware replacement [64].
The adaptability of edge computing enables the development of customized solutions that cater to both legacy systems and modern data processing requirements [65]. This leads to optimized data processing and improved overall efficiency in industrial environments.
In summary, edge computing, by combining native and standardized protocols, provides a flexible and high-performance solution that accommodates both modern and legacy systems, facilitating a seamless transition into digital transformation.
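As a concrete illustration of such a protocol-converting gateway, the sketch below polls a holding register from a legacy Modbus TCP device and republishes the value via MQTT. It assumes the pymodbus (3.x) and paho-mqtt (2.x) packages (keyword names vary slightly across minor versions); all addresses, the register map, and the topic are hypothetical.

    # Sketch of an edge gateway: legacy Modbus TCP in, MQTT out.
    import time

    import paho.mqtt.client as mqtt
    from pymodbus.client import ModbusTcpClient

    plc = ModbusTcpClient("192.168.0.20")                    # hypothetical PLC
    broker = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    broker.connect("192.168.0.5", 1883)                      # hypothetical broker

    plc.connect()
    try:
        for _ in range(60):                                  # one minute at 1 Hz
            rr = plc.read_holding_registers(address=0, count=1, slave=1)
            if not rr.isError():
                # Republish the raw register value on a structured topic.
                broker.publish("plant/line1/temperature", rr.registers[0])
            time.sleep(1.0)                                  # match PLC scan cycle
    finally:
        plc.close()
        broker.disconnect()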

2.3. Knowledge Modeling in Industrial Environments

Knowledge modeling in industrial environments is a key enabler for the implementation of Industry 4.0, as it facilitates the understanding and management of complex systems by providing structured information on equipment maintenance, resource optimization, and production processes. An Extended Reference Model for Generalized Ontologies, known as the Reference Generalized Ontology Model (RGOM), based on the Reference Architecture Model for Industry 4.0 (RAMI 4.0), has been proposed to improve equipment monitoring and production efficiency. RGOM allows the creation of a knowledge graph that can respond to real-time queries, which is crucial for the dynamic nature of Industry 4.0 applications [66].
The key technologies of Industry 4.0, including the Internet of Things (IoT), blockchain, artificial intelligence (AI), augmented reality, 3D printing, big data analytics, and cloud computing, play a vital role in bridging the gap between the physical and cyber world. These technologies support the digital transformation of manufacturing, enhancing efficiency and adaptability [67].

2.3.1. Challenges in Implementing Knowledge Modeling for Industry 4.0

The adoption of Industry 4.0 technologies in manufacturing faces several challenges, including:
  • Lack of skilled employees and necessary expertise.
  • Inadequate technological infrastructure to support real-time data integration.
  • Cybersecurity concerns requiring advanced security modeling and automated allocation.
Addressing these challenges is essential for fulfilling the requirements of the factories of the future (FoF) and ensuring the successful transition towards smart manufacturing [66].

2.3.2. The Role of Dynamic Capabilities and Quality Management in Industry 4.0

Organizational dynamic capabilities evolve as companies adopt Industry 4.0 technologies, leading to the development of new competencies in manufacturing, environmental sustainability, competitiveness, business models, and workforce education. This transformation necessitates adaptations in quality management approaches, ensuring the seamless integration of technology, workforce expertise, and industrial processes [68].
To prepare employees for Industry 4.0, a competency model based on a behavioral approach is required. This model should cover information systems, IT, and engineering to align with Industry 4.0 job requirements and serve as a basis for future workforce training research [69].

2.3.3. The Strategic Role of Knowledge Modeling in Industry 4.0

Knowledge modeling is a crucial factor in Industry 4.0 adoption, integrating IoT and cyber-physical systems (CPS) to merge physical and digital manufacturing environments. However, the main barriers to successful implementation include the following:
  • Lack of vision and leadership from top management.
  • Insufficient training programs and educational frameworks.
  • Uncertainty regarding return on investment (ROI) for new technologies.
Developing dynamic organizational capabilities and investing in workforce education are essential to overcoming these challenges and achieving sustainable competitiveness in Industry 4.0-driven industries [70,71,72].
By leveraging structured knowledge models and addressing technological and workforce challenges, industries can establish a resilient and adaptive framework, ensuring long-term efficiency and innovation in the digitalized industrial landscape.

3. Scientific Methodology

The methodological foundation of this work is based on the Design Study Methodology (DSM) [73], which is particularly suited for applied, data-intensive research in industrial environments. DSM emphasizes iterative prototyping, close domain collaboration, and problem-driven system refinement.
The study follows a structured three-phase approach:
  • Discovery Phase: This phase involved a detailed analysis of the domain-specific problem in close collaboration with industrial stakeholders. Particular attention was given to challenges such as data heterogeneity, protocol fragmentation, and bottlenecks in knowledge transfer within real-world manufacturing environments.
  • Design Phase: Based on the insights gained during the discovery phase, a modular and scalable system architecture was designed to support real-time data acquisition and knowledge modeling. The architecture was developed with a focus on semantic interoperability, protocol integration, and human–machine interaction.
  • Deployment Phase: The developed system was implemented and evaluated in a real industrial environment. Feedback from domain experts and iterative updates informed the validation and refinement of the solution.

The Knowledge Staircase as Epistemic Model

To structure the transformation of raw data into knowledge, the Knowledge Staircase model was employed (cf. Figure 4). This model defines hierarchical stages—from data acquisition to contextualized knowledge—thereby serving as a conceptual guide for aligning system architecture components with knowledge-centric goals.
Each level of the staircase corresponds to functional blocks within the architecture:
  • Data Acquisition: native protocol integration (e.g., OPC UA, Profibus);
  • Data Normalization: timestamping, unit harmonization, semantic labeling;
  • Information Aggregation: filtering, event detection, and data stream correlation;
  • Knowledge Representation: model generation using ML/AI frameworks;
  • Human Interpretation: dashboard interaction, parameter refinement, rule-based feedback.
This methodology ensures that technical decisions are traceable to domain needs and knowledge-based objectives.
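To make this mapping concrete, the following toy Python sketch expresses three of the staircase levels above as pipeline stages over simple dictionary records. The field names, limit value, and input samples are illustrative assumptions, not part of the implemented system.

    # Conceptual sketch: Knowledge Staircase levels as pipeline stages.
    from statistics import mean

    def normalize(raw):
        """Data Normalization: timestamping, unit harmonization, labeling."""
        return {"ts": raw["ts"], "signal": raw["name"], "value": float(raw["v"])}

    def aggregate(records, limit=2.0):
        """Information Aggregation: keep only limit-value violations."""
        return [r for r in records if r["value"] > limit]

    def represent(events):
        """Knowledge Representation: condense events into a compact model."""
        return {"violations": len(events),
                "mean_value": mean(e["value"] for e in events) if events else None}

    raw_stream = [{"ts": 0.00, "name": "thickness", "v": "1.8"},
                  {"ts": 0.02, "name": "thickness", "v": "2.4"}]
    print(represent(aggregate(normalize(r) for r in raw_stream)))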

4. Problem Characterization and Abstraction

This section discusses the fundamental challenges and concepts derived from the research questions, with the aim of developing a comprehensive understanding of the difficulties associated with designing a system architecture for dynamic data acquisition and effective knowledge modeling in industrial environments. It seeks to answer the first research question: How can a system architecture be designed and developed to efficiently support dynamic data acquisition for knowledge modeling in industrial environments? By identifying the specific requirements of such an architecture, a framework is established that enables a deeper understanding of the underlying technical and organizational challenges and provides solutions for their effective management.

4.1. Challenges in Data Acquisition

A key aspect of system architecture design involves data acquisition, which must address several challenges, including the integration of heterogeneous data sources, system scalability, and flexible data processing. These aspects are critical to answering the central research questions.

4.1.1. Analysis of Existing System Architectures

The integration and scalability of data acquisition systems in industrial environments present multiple challenges, both technical and organizational. One of the main difficulties lies in integrated data networking, as data are often redundant and dispersed across multiple systems in varying formats, making their processing complex. The autonomy of data source systems results in independent data structures, interfaces, exchange formats, and communication protocols, contributing to data heterogeneity [5].
A major challenge in industrial data acquisition is the integration of heterogeneous data sources, as they often use different formats and protocols. To address this issue, we propose a modular system architecture that facilitates seamless data acquisition, transformation, and analysis [74]. The proposed architecture is depicted in Figure 5.
An effective data management strategy is essential to ensuring data quality and preventing multi-source conflicts, such as contradictory values or inconsistent units. To achieve this, data should be stored in a lean backbone using neutral, transparent, and archivable formats [5]. Additionally, modularization and reusability of analytical processes are necessary to facilitate scalability and knowledge transfer across projects. A well-documented and flexible module-based system allows for selecting and customizing components to meet specific analytical requirements [5].
The scalability of existing system architectures is another challenge, as they often fail to keep pace with the rapidly increasing data volumes and industrial requirements. A modular and adaptable approach to data analysis is needed to ensure long-term usability and efficient processing [5]. Furthermore, the heterogeneous nature of industrial use cases requires analytical modules to be highly customizable and adaptable to different data sources, which poses a significant challenge in maintaining interoperability across systems [5].
Efficiency is another critical factor, as systems must seamlessly integrate practical data analysis with continuous competency development and learning processes. This includes incorporating advanced analytical techniques such as supervised, semi-supervised, and unsupervised learning to optimize decision-making and predictive capabilities [5].
To address latency issues, edge computing has emerged as a powerful solution, allowing real-time data processing at the network edge. This significantly reduces response times, which is particularly important in industrial automation applications requiring instantaneous reactions [75]. Additionally, decentralized data processing via edge computing enhances security, as data are processed and stored locally, reducing the risk of data privacy violations [76].
Another critical aspect of modern system architectures is human–machine interaction (HMI). Industry 4.0 has transformed industrial process control and monitoring by enhancing the interconnectivity of machines and systems. As a result, intuitive HMI design is essential for improving operator efficiency and reducing error rates. Krupitzer et al. [77] emphasize that improved HMI design enhances productivity by facilitating smoother communication between operators and machines. Effective interface design fosters greater user acceptance and reduces operational friction. Moreover, the study highlights the importance of adaptive systems and feedback mechanisms, which enable interfaces to dynamically adjust based on user capabilities and system conditions, ensuring optimal interaction efficiency [77].
These challenges highlight the critical role of data acquisition and processing in industrial environments and must be carefully considered in the design of a robust and scalable system architecture.

4.1.2. Selection of Strategic Technology Partners for Modular Architecture Development

Following the identification of the general challenges in data acquisition and system integration, it was necessary to evaluate and select suitable technology partners to support the modular design of the system architecture. This selection was based on technical expertise in data acquisition, storage, and processing, as well as access to specialized tools and software solutions.
Several industrial solution providers, including General Electric, Rockwell Automation, and Bosch, offer relevant products and solutions for system integration. However, Siemens AG and iba AG were chosen as the primary technology partners for the KAVA system due to their strong process connectivity, interoperability, and adaptability. Their technologies are widely used and well-established in the research environment, making them ideal candidates for closer evaluation.
Siemens AG was selected for its exceptional interoperability and adherence to the OPC UA standard, which ensures seamless communication across industrial systems. The Siemens Industrial Edge platform further enhances IT-OT (Information Technology—Operational Technology) integration, facilitating efficient process coordination.
Similarly, iba AG was chosen for its high level of flexibility and connectivity, particularly with various automation systems. The ibaPDA system, in particular, offers advanced data integration capabilities, allowing direct access to internal control and regulation system values. This makes it particularly well-suited for use within the KAVA system.
The ibaPDA system was ultimately selected for experimental validation due to its high connectivity, robust data acquisition capabilities, and adaptability to different automation environments. It supports a wide range of data sources, including analog and digital I/O signals, fieldbus and drive bus signals, and control system data. Additionally, the system employs a client–server architecture, where the ibaPDA server functions as a central processing unit that manages data and output interfaces.
One of the key advantages of the ibaPDA system is its flexibility in data transmission, allowing seamless integration with databases and cloud services. It can function as an OPC UA server, capturing signals via an OPC UA client interface. Moreover, its adjustable data acquisition timeframes enable users to fine-tune sampling rates, optimizing data processing efficiency. Auto-detection features further simplify system configuration by automatically recognizing signal names and minimizing configuration errors.
The software also includes integrated online diagnostic tools, enabling the real-time monitoring of data source status and values, ensuring system functionality verification. Furthermore, ibaPDA provides multiple recording options and a highly customizable user interface, allowing operators to adapt the system to specific industrial requirements.
Another notable feature is the integration of the ibaPDA system into a broader network management structure, following the Simple Network Management Protocol (SNMP). Additionally, multiple ibaPDA systems can be synchronized via fiber-optic connections, enhancing data consistency and system coordination across multiple production environments.

4.2. User Survey on HMI Design

The development of an effective Human–Machine Interface (HMI) requires a detailed understanding of user needs and preferences [78]. To achieve this, a comprehensive user survey was conducted to gain insights into interaction patterns, user experiences, and design expectations. The findings of this survey serve as a foundation for optimizing the HMI design to ensure intuitive usability while simultaneously fulfilling the technical requirements of the system. The survey was structured to generate a robust data set that would inform the user-centered development of the interface. This approach ensures that the resulting HMI is not only technically functional but also highly tailored to end-user requirements.
Methodology
The survey targeted operators, supervisors, and support staff who regularly interact with the HMI. The diversity of participants was essential in capturing a broad spectrum of usage scenarios and experiences, ensuring that the interface is accessible and functional across different roles.
Due to the lack of an online survey infrastructure, the survey was conducted using pre-printed forms. The questionnaire was designed to be simple and easy to complete, allowing participation even for users with limited technical expertise. A combination of closed-ended and open-ended questions was used to collect both quantifiable data and qualitative insights into user preferences and experiences.
To maximize participation, the survey forms were distributed directly at the workplace during working hours, ensuring a high response rate. A designated collection box facilitated the return of completed forms, making the process convenient for participants. Given the relatively small number of respondents, the survey results were manually analyzed. Closed-ended responses were quantitatively evaluated, while open-ended responses were thematically analyzed to extract deeper insights into specific user requirements.

4.2.1. Survey Questions

The questionnaire focused on several key aspects:
  • Critical information display: Which process parameters or data points should always be visible on the HMI?
  • Preferred interaction type: What input method do users find most efficient (e.g., text fields, dropdown menus, buttons, or list views)?
  • Usage frequency: How often do users expect to interact with the HMI during their work shifts?
  • Work-enhancing functions: What features or tools would improve their efficiency when using the HMI?
  • Response time importance: How critical is fast system feedback for their work?
  • Additional functionality: Are there specific features users would like to see added?
  • Challenges and concerns: What difficulties do users anticipate when working with a new HMI?

4.2.2. Key Findings from the User Survey

The survey provided valuable insights into the preferences and needs of users regarding HMI design. Several key aspects emerged from the analysis:
  • Critical information display: Users identified process parameters, alarms, system errors, and temperature readings as the most essential data points for their daily work. Additionally, speed indicators and timestamps were frequently mentioned as important elements to display.
  • Preferred interaction type: Large buttons and dropdown menus were favored for their ease of use and ability to support quick and efficient navigation.
  • Work-enhancing functions: Users highlighted the importance of clearly visible large buttons, color-coded indicators, and predefined text input fields to streamline workflows and reduce interaction time.
  • Challenges and usability concerns: Commonly cited challenges included operability under difficult conditions (e.g., exposure to dirt or time constraints), which could impact the effectiveness of the HMI.
These application-dependent results form a solid foundation for the next phase of HMI development, ensuring that the design aligns with user expectations while addressing usability concerns. By integrating the identified preferences into the design process, the system can be optimized for both efficiency and ease of use in real-world industrial applications.

5. Materials and Methods

This section summarizes the essential resources, technologies, and methodological steps applied in the work. The aim is to establish a scalable system architecture for dynamic data acquisition and knowledge modeling in industrial contexts, placing emphasis on both technical implementation and user-centered design.

5.1. System Design Rationale

The system architecture was developed following the principles established in Section 3, with the aim of supporting high-frequency, protocol-diverse data streams and enabling real-time knowledge modeling.
The design integrates three core dimensions:
Real-Time Data Acquisition: Native protocol-based communication enabled integration with both legacy (e.g., S7-300/400) and modern PLCs (e.g., S7-1200/1500). The use of IBH Link gateways ensured compatibility with older control units, while heterogeneous systems were connected directly via Ethernet and OPC UA servers.
Edge-Based Preprocessing: A dedicated edge node (Intel i5, 16 GB RAM, 1 TB SSD) runs custom Python 3.10 services for data parsing, semantic annotation, and real-time analysis. The system employs buffered queues and event-driven logic to support data rates > 10 MB/s while maintaining latency below 50 ms (see Section 6); a minimal sketch of this buffering pattern is given at the end of this subsection.
Visualization:
  • ibaQPanel for the visualization of measurement data in real time (value progression, limit value violations, signal status);
  • Configuration of the panels via the ibaPDA configuration editor.
The design rationale ensures alignment between practical requirements and epistemic goals as outlined by the Knowledge Staircase. Each subsystem was validated under realistic conditions (Section 6.1, Section 6.2, and Section 6.3).
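The following self-contained Python sketch shows the buffered, event-driven pattern referred to above: a producer feeds a bounded queue while a worker thread drains it and checks per-sample latency against the 50 ms budget. The queue size, rates, and synthetic source are assumptions for illustration only, not the deployed services.

    # Sketch: buffered, event-driven edge preprocessing with a latency check.
    import queue
    import threading
    import time

    buf = queue.Queue(maxsize=10_000)          # bounded buffer absorbs bursts

    def worker():
        while True:
            arrived, _payload = buf.get()
            latency_ms = (time.monotonic() - arrived) * 1_000.0
            if latency_ms > 50.0:
                print(f"warning: {latency_ms:.1f} ms exceeds the 50 ms budget")
            buf.task_done()

    threading.Thread(target=worker, daemon=True).start()

    for i in range(1_000):                     # synthetic ~1 kHz source
        buf.put((time.monotonic(), i))
        time.sleep(0.001)

    buf.join()                                 # wait until every sample is handled
    print("all samples processed")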

5.2. Materials and Software Components

To handle data collection and administration, the work employed a combination of ibaPDA, OPC UA, Modbus TCP, and native protocols offered by certain controllers. This setup enabled seamless information gathering from both legacy and modern systems. Long-term data storage was implemented via the ibaHD-Server, while time-series analysis was supported by InfluxDB. These elements meet common industrial big data requirements (e.g., high data throughput and flexible interfaces). According to [79], ibaPDA offers a reliable foundation for real-time data acquisition and modular connectivity. Additionally, both real-time and batch processing modes are supported to accommodate the rapid evolution of new IoT standards.
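As an example of the time-series path, the sketch below writes one measurement point to InfluxDB 2.x using the official influxdb-client package. The URL, token, organization, bucket, and field names are hypothetical placeholders, not the configuration of the deployed system.

    # Sketch: writing one sample into an InfluxDB 2.x time-series bucket.
    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(url="http://localhost:8086",
                            token="my-token", org="my-org")   # hypothetical
    write_api = client.write_api(write_options=SYNCHRONOUS)

    point = (Point("coating")                 # measurement name
             .tag("line", "1")                # metadata for fast filtering
             .field("thickness_mm", 2.31))    # the sampled value
    write_api.write(bucket="process-data", record=point)
    client.close()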

5.3. Experimental Setup

In this study, we deployed a modular system architecture, as shown in Figure 6, in an industrial environment to evaluate the feasibility of real-time data acquisition and knowledge modeling. Below, we provide detailed information regarding the hardware components, software configuration, and data acquisition parameters used in our experiments.
  • Hardware:
    Programmable Logic Controllers (PLCs):
    *
    S7-400 (CPU 6ES7414-2XG00-0AB0, Siemens AG, Munich, Germany) connected via Profibus with a bitrate of 1.5 Mbit/s.
    *
    S7-200 (CPU 6ES7214-1AD23-0XB0, Siemens AG, Munich, Germany) coupled via an IBH Link S7++ (IBHsoftec GmbH, Friedrichsdorf, Germany) interface. This setup enables communication with older S7-200 systems that do not natively support modern fieldbus protocols.
    Coating Thickness Measurement Device:
    A dedicated sensor for measuring lacquer coating thickness (Manufacturer: NDC Technologies Ltd., Essex, UK), integrated using an OPC UA interface.
    Host PC:
    Intel® Core i5-9400T CPU, 16 GB DDR4 RAM, three 1 Gbit/s Ethernet interfaces, and a 1 TB SSD (Dell GmbH, Vienna, Austria). A Siemens CP 571 (6GK1571-1AA00) served as the communication interface for receiving data from the S7-400 via Profibus. This machine served as the central data acquisition and processing node.
  • Software Configuration:
    Data Acquisition: ibaPDA (Version 8.1), chosen for its real-time data handling capabilities and compatibility with multiple industrial protocols.
    Database Systems: InfluxDB for short-term time-series storage and an SQL-based solution (e.g., Microsoft SQL Server 2019) for long-term archiving and more complex queries.
    Visualization: ibaQPanel to monitor live signals and perform preliminary analyses (e.g., parameter tuning, status checks).
  • Data Acquisition Parameters:
    Sampling Rate: A frequency of 50 Hz was selected based on preliminary trials, which indicated that critical process changes can occur in the sub-second range (see the polling-loop sketch after this list).
    Fieldbus Throughput: The Profibus connection operates at 1.5 Mbit/s. For the S7-200 via IBH Link S7++, the effective throughput depends on the PLC scan cycle and link configuration, but typically remained stable in our setup.
    Network Traffic: Typical data throughput on the host PC reached up to 10–15 MB/s during peak load, requiring Quality-of-Service (QoS) mechanisms on the local network to minimize packet collisions or loss.
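The sketch below, referenced in the sampling-rate item above, shows a drift-compensated fixed-rate polling loop at 50 Hz. The read_signals() function is a hypothetical placeholder; in the deployed system this acquisition is handled internally by ibaPDA.

    # Sketch: drift-compensated 50 Hz polling loop.
    import time

    PERIOD = 1.0 / 50.0                        # 50 Hz -> 20 ms per sample

    def read_signals():
        return {}                              # hypothetical acquisition call

    next_tick = time.monotonic()
    for _ in range(250):                       # 5 s of acquisition
        sample = read_signals()
        next_tick += PERIOD                    # schedule against absolute time
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)                  # absorb jitter without drifting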

5.4. Rationale for Parameter Choices

The 50 Hz sampling rate on the S7 CPU was determined by its cycle time and by initial preliminary tests, which showed that critical process changes sometimes take place in the millisecond range. A sampling rate of 10 Hz was selected for OPC UA communication with the lacquer measuring system so as not to overload the server. Communication with the S7-200 took place at a sampling rate of 20 Hz owing to system constraints. ibaPDA was selected for its robust native protocol support, including Profibus and OPC UA, which allowed us to integrate legacy systems (such as the S7-200) alongside newer devices (the coating thickness sensor).
In summary, the combination of these hardware and software components, alongside carefully chosen sampling rates and network configurations, establishes a robust foundation for real-time data collection and subsequent knowledge modeling.

5.5. Methodological Steps

5.5.1. Requirement Analysis

An initial examination focused on existing data acquisition processes, revealing heterogeneous industrial environments with multiple proprietary protocols and controller types.

5.5.2. Concept Development

Building on the identified requirements, a modular architecture was designed to accommodate new data sources or protocols with minimal effort. The work emphasizes that such an extensible strategy is essential for highly dynamic production settings, preventing semantic or format conflicts.

5.5.3. Prototyping and Implementation

A stepwise approach was used to achieve the following:
  • Integrate OPC UA, Modbus TCP, and native protocols for data retrieval from both legacy and modern controllers.
  • Configure ibaHD-Server and InfluxDB for long- and short-term data storage, respectively.
  • Provide an interactive interface (ibaQPanel) for real-time data display and initial analysis.

5.5.4. Validation and User Testing

Subsequent test runs evaluated performance metrics, latency, and system stability. Additionally, user feedback was gathered to refine interface requirements, showing that easy extensibility and clear data visualization are key to practical acceptance [79].

5.6. Data Security and Privacy

Data security was ensured by role-based access control, network protection (VPN, firewall), and encrypted connections. Anonymization and pseudonymization methods were applied if personal data appeared in the process data streams. The findings indicate that while technical safeguards (e.g., TLS encryption) are integrated into iba and InfluxDB environments, continuous monitoring and maintenance are indispensable for preventing unauthorized access.
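In the spirit of the signal-mapping approach mentioned above, the following sketch derives stable, non-reversible aliases for sensitive identifiers with a keyed hash (HMAC-SHA256). The key and identifier are illustrative; a real deployment requires managed key storage.

    # Sketch: keyed pseudonymization of signal or operator identifiers.
    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-managed-key"   # illustrative only

    def pseudonymize(identifier: str) -> str:
        """Same input always yields the same alias; reversal needs the key."""
        digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

    print(pseudonymize("operator-42"))         # deterministic 16-hex-char alias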

5.7. Summary

The methodological framework presented in the work combines comprehensive requirement analysis, conceptual system design, prototyping, and validation. The selected tools and technologies (ibaPDA, OPC UA, Modbus TCP, ibaHD-Server, and InfluxDB) form a robust basis for dynamic data acquisition and analysis within industrial environments. The user studies highlight that a modular, user-friendly solution is critical to effectively leveraging process data for knowledge modeling while covering both real-time and long-term analysis needs.

6. Validation

The validation of the developed prototype is a crucial step to ensure its stability, reliability, and alignment with industrial requirements [80]. This process involves defining precise evaluation criteria to assess the system’s functionality, efficiency, and usability. Through a structured validation approach, the developed solution is tested under real industrial conditions to verify its technical capabilities and to identify areas for improvement before deployment [80,81].

6.1. Validation of Control System Integration

Ensuring seamless integration with existing control systems is essential. The evaluation focused on validating protocol support, system modularity, scalability, security, and long-term adaptability. The prototype was tested in a real industrial environment to assess its ability to connect with multiple control systems without requiring extensive manual configurations. The results demonstrated successful integration with industrial communication standards, confirming its robustness and compatibility with various protocols [82,83]. An overview of the setup is also provided in Figure 6 (Section 5.3).
Further validation was conducted on the modular architecture by dynamically adding and removing modules under operational conditions. The integration of OPC UA, Modbus TCP, MQTT, and native protocol interfaces confirmed the system’s seamless adaptability. While a minor requirement for measurement restarts was noted, overall flexibility was preserved [79,84]. Additionally, the prototype’s ability to integrate system updates and new functionalities without disrupting operations was assessed. Successful software update implementations demonstrated its capacity for future adaptation, with user feedback playing a significant role in ensuring long-term relevance and maintainability [79].
Security and reliability were evaluated through stress tests to determine system stability under high loads, particularly concerning CPU utilization and data transmission security. The results confirmed that the system maintained consistent performance and robust security measures, even under maximum processing loads [79,85].

6.2. Validation of Data Integration and Processing

The ability to handle heterogeneous data sources and formats was assessed, focusing on scalability, transformation capabilities, and visualization. The prototype successfully integrated multiple database systems, including Microsoft SQL Server, InfluxDB, and PostgreSQL. Tests conducted under controlled conditions demonstrated high compatibility, although integration into distributed environments may require additional network and security configurations [20].
Data transformation and preprocessing capabilities were validated using large datasets, where the system’s ability to normalize, clean, and structure raw data was tested. The ibaAnalyzer proved to be an effective tool for transformation, ensuring high data quality and structured formatting for subsequent analysis. However, hardware constraints such as CPU performance and network bandwidth were identified as critical factors influencing data transformation efficiency [79].

6.3. Validation of Data Visualization and Interaction

The validation of data visualization and interaction was conducted to assess usability, responsiveness, and support for real-time data exploration. A user survey identified key visualization requirements, such as real-time process parameters, alarms, and historical data views. These insights guided the redesign of the Human–Machine Interface (HMI), improving navigation and usability [78]. Further tests on visualization tools confirmed their effectiveness in representing complex datasets, with users reporting increased efficiency in data analysis through intuitive dashboards and filtering mechanisms. Based on user feedback, design modifications were implemented to enhance accessibility and responsiveness [86]. While qualitative feedback was collected during the trials, future work will include statistical evaluation of user satisfaction via standardized usability scales.

6.4. Validation of Data Security and Compliance

Ensuring the security of stored and processed data was a fundamental requirement, validated through multiple assessments. Security mechanisms, including user access control and integration with Active Directory, were tested, demonstrating effective user management and robust authentication procedures [83]. Additionally, the system’s capacity for long-term data storage and anonymization was evaluated using backup and recovery simulations. Implemented anonymization techniques, such as pseudonymization and signal mapping, successfully protected sensitive industrial data while preserving its analytical usability [87,88].

6.5. Quantitative Performance Results

To provide clearer insights into the system’s behavior under varying loads, we conducted a series of stress tests and measured latency as well as data throughput. Table 1 summarizes the mean values (± standard deviation) of these key metrics under three different scenarios:
  • Scenario A: Only the S7-400 (Profibus) transmitting data at 50 Hz;
  • Scenario B: Combined data acquisition from S7-400 (50 Hz) and S7-200 (20 Hz) via IBH Link S7++;
  • Scenario C: Full integration (S7-400, S7-200, coating sensor at 10 Hz via OPC UA).
As illustrated in Table 1, each data source (S7-400, S7-200, and the coating sensor) exhibits stable latencies in their respective communication paths, even when the system is fully loaded in Scenario C. Meanwhile, the overall throughput can reach up to 15 MB/s in total, which aligns with our initial QoS provisions. Figure 7 provides a visual breakdown of how the data rate evolves over time when transitioning from Scenario A (single data source) to C (all sources active).
A slight increase in latency can be observed once the coating sensor (OPC UA) is integrated, reflecting the overhead of an additional communication protocol. Nonetheless, the measured values stay within acceptable bounds for real-time monitoring in our targeted industrial context.
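For reproducibility, the aggregation behind such tables is straightforward; the sketch below computes mean ± standard deviation per scenario with Python's statistics module. The latency samples shown are invented placeholders, not the measured values reported in Table 1.

    # Sketch: aggregating per-scenario latency samples as mean +/- std.
    from statistics import mean, stdev

    latencies_ms = {                           # placeholder values only
        "A (S7-400 only)": [12.1, 11.8, 12.5, 12.0],
        "B (+ S7-200)":    [13.4, 13.9, 13.1, 13.6],
        "C (+ OPC UA)":    [17.2, 18.0, 16.8, 17.5],
    }

    for scenario, samples in latencies_ms.items():
        print(f"{scenario}: {mean(samples):.1f} +/- {stdev(samples):.1f} ms")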

6.6. Summary

The validation results confirmed that the developed architecture is scalable, adaptable, and secure, making it well-suited for industrial applications. While minor optimizations were identified, the system demonstrated strong integration capabilities, data processing efficiency, and user-centric design. Future enhancements should focus on refining edge computing integration, AI-driven analytics, and cybersecurity measures to further strengthen the system’s robustness.

7. Lessons Learned and Limitations

This section summarizes key insights and practical constraints encountered during the prototyping and validation phases. By reflecting on both technical and organizational challenges, it aims to inform future implementations of similar system architectures in industrial contexts.

7.1. Limitations

Despite the overall success of the architecture, several limitations became evident:
  • Integration Overhead: While OPC UA and native protocols facilitated broad compatibility, legacy controllers required dedicated configuration and testing efforts, especially when using intermediate gateways (e.g., IBH Link S7++).
  • Latency Tuning: Maintaining real-time responsiveness across all sources proved challenging during multi-source operation. The system required per-source buffer adjustments to ensure latency below 50 ms (see Table 1).
  • UI Prototyping: The visualization layer was realized using ibaQPanel. A full implementation of the KAVA concept is planned.

7.2. Lessons Learned

Several findings from the implementation and evaluation process may guide future applications:
  • Modularity Supports Resilience: Dynamic reconfiguration and the hot-swapping of modules proved feasible, even under active operation. This validated the modularity concept of the architecture.
  • Protocol Heterogeneity is Manageable: The mixed use of OPC UA and native protocols did not cause system instability, confirming that the architecture’s semantic abstraction layer worked as intended (see the sketch after this list).
  • User Feedback is Crucial: Early-stage operator feedback significantly influenced UI layout and system messaging, underscoring the importance of human-centric design, even in highly technical environments.
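The semantic abstraction layer itself is realized within the ibaPDA-based toolchain. Purely for illustration, the following sketch shows the general adapter pattern behind it, using the open-source opcua (python-opcua) and pymodbus packages as stand-ins; endpoints, node IDs, and register addresses are hypothetical.

```python
from abc import ABC, abstractmethod

class SignalSource(ABC):
    """Uniform read interface behind which protocol details are hidden."""

    @abstractmethod
    def read(self) -> dict[str, float]:
        """Return the latest values keyed by canonical signal name."""

class OpcUaSource(SignalSource):
    def __init__(self, endpoint: str, node_ids: dict[str, str]):
        from opcua import Client  # python-opcua, assumed available
        self.client = Client(endpoint)
        self.client.connect()
        self.node_ids = node_ids

    def read(self) -> dict[str, float]:
        return {name: self.client.get_node(nid).get_value()
                for name, nid in self.node_ids.items()}

class ModbusSource(SignalSource):
    def __init__(self, host: str, register_map: dict[str, int]):
        from pymodbus.client import ModbusTcpClient  # pymodbus, assumed
        self.client = ModbusTcpClient(host)
        self.client.connect()
        self.register_map = register_map

    def read(self) -> dict[str, float]:
        return {name: float(self.client.read_holding_registers(addr, 1).registers[0])
                for name, addr in self.register_map.items()}

# Downstream consumers iterate over SignalSource objects and never see the protocol.
```

With this pattern, adding a further protocol means adding one adapter class; the acquisition and analytics layers remain untouched.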

8. Reflection

This section critically evaluates the research conducted, highlighting strengths, challenges, achievements, and potential future research directions.

8.1. Strengths

The primary strength of this research lies in the successful development of a conceptual system architecture designed to support dynamic data acquisition in industrial environments. By employing state-of-the-art technologies and methodologies, a robust and scalable system was created, capable of efficiently processing large volumes of data. The integration of knowledge models further enhances data-driven decision-making, improving industrial processes through real-time insights.

8.2. Challenges and Limitations

Despite its successes, several challenges were encountered:
  • Technical Challenges: Integrating heterogeneous data sources was complex due to semantic inconsistencies and data reliability issues.
  • System Complexity: Designing an architecture flexible enough for various industrial scenarios required significant effort and time.
  • Data Simulation: While historical data could be replayed, its analysis using machine learning (ML) techniques remains an open research question. A more advanced ML-based approach could detect patterns and anomalies, enabling predictive analysis and optimization, which necessitates further exploration.
Future work should focus on refining data processing workflows and ensuring seamless ML integration, alongside the development of a user-friendly interface that allows non-experts to conduct complex analyses.

8.3. Achievements

The study successfully demonstrated how an innovative system architecture can meet Industry 4.0 requirements. Real-world applications validated its functionality, improving data quality and accelerating decision-making processes. The practical implementation confirmed the feasibility and benefits of the proposed architecture.

8.4. Areas for Improvement

Future advancements should focus on the following:
  • Technology Expansion: Investigating the integration of Siemens Edge and similar platforms for real-time data processing and edge computing.
  • System Interoperability: Exploring hybrid systems that combine multiple architectures to enhance flexibility and adaptability.
  • Machine Learning and AI: Developing predictive models for fault detection and process optimization, leveraging Python-based ML/AI frameworks.
  • Human Factors and Knowledge Transfer: Investigating psychological barriers to knowledge sharing and technology acceptance, addressing concerns such as job security and resistance to automation.

9. Discussion, Contributions, and Future Directions

This study presents an innovative system architecture for dynamic data acquisition and knowledge modeling in industrial environments, contributing significantly to Industry 4.0 and Smart Manufacturing. In this section, we first discuss the core aspects of the proposed solution and contextualize it in relation to the existing literature. We then highlight the main contributions, implications for industrial practice, and potential avenues for future research.

9.1. Overview of the Proposed Architecture

The proposed architecture prioritizes real-time data extraction from industrial controllers, integrating both native protocols (for direct access to legacy devices) and standardized solutions such as OPC UA. This approach ensures high flexibility, allowing simultaneous support of modern IIoT components and older, Profibus-based systems. By combining streaming analytics, event-driven processing, and edge computing, the system can optimize data flow in real time while reducing network load. Furthermore, advanced error detection and adaptive sampling improve data integrity and responsiveness; a minimal sketch of such a plausibility check follows below. A modular design facilitates scalability and future upgrades, ensuring that the architecture can evolve in tandem with emerging industrial standards.
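The following Python sketch illustrates one form such an error-detection step could take, combining a range check with a rate-of-change check; the limits are illustrative assumptions, not the rules used in the validated prototype.

```python
def plausible(value: float, prev: float | None, lo: float, hi: float,
              max_step: float) -> bool:
    """Range check plus rate-of-change (spike) check for one signal."""
    if not (lo <= value <= hi):
        return False  # outside the physically possible range
    if prev is not None and abs(value - prev) > max_step:
        return False  # implausible jump between consecutive samples
    return True

# Illustrative limits for a temperature signal sampled at 50 Hz:
prev = None
for v in [412.0, 413.1, 990.0, 414.0]:  # 990.0 mimics a transmission glitch
    ok = plausible(v, prev, lo=0.0, hi=600.0, max_step=25.0)
    print(v, "ok" if ok else "rejected")
    prev = v if ok else prev  # rejected samples do not update the state
```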

9.2. Knowledge Modeling and AI Integration

A key element of the framework is the incorporation of continuous knowledge modeling through Python-based machine learning (ML) and AI libraries (e.g., TensorFlow, Scikit-Learn). This integration enables both predictive maintenance and real-time process optimization. Additionally, intuitive user interfaces empower operators to query, interpret, and refine knowledge models without extensive technical expertise, bridging the gap between automated data processing and human decision-making. Real-time streaming and caching techniques further enhance data aggregation, paving the way for interactive visual analytics in high-throughput production settings.
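As a hedged illustration of this integration, the following Scikit-Learn sketch trains an anomaly detector on synthetic data standing in for acquired process signals; it indicates the intended direction rather than a deployed model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic stand-in for acquired process signals: rows = time steps,
# columns = e.g. temperature, pressure, coating thickness.
normal = rng.normal(loc=[400.0, 2.5, 120.0], scale=[5.0, 0.1, 2.0],
                    size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score a fresh batch; -1 flags candidate anomalies for the operator HMI.
batch = np.vstack([normal[:5], [[470.0, 2.5, 120.0]]])  # one injected outlier
print(model.predict(batch))  # e.g., [ 1  1  1  1  1 -1]
```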

9.3. Semantic Interoperability and Standardization

To address semantic conflicts and ensure data consistency across heterogeneous sources, the system relies on standardized data management practices, including unified naming conventions and real-time data transformation. Incorporating industrial communication standards (e.g., Modbus TCP, OPC UA, Profibus) aids in delivering a seamless integration pathway. These measures minimize discrepancies, reduce data-cleaning overhead, and ultimately improve the overall reliability of downstream analytics and knowledge-generation pipelines.
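The following Python sketch shows how such a unified naming convention could be enforced at ingestion time; the tag patterns and the canonical namespace are invented for the example and do not reflect the plant's actual scheme.

```python
import re

# Hypothetical convention: <line>.<station>.<signal>_<unit>, lower-case.
ALIASES = {
    r"^DB10\.DBW(\d+)$":   r"line1.press.raw_word_\1",   # S7 absolute address
    r"^OvenTemp([0-9]+)$": r"line1.oven.temp_zone\1_degC",
    r"^CT_SENSOR$":        r"line1.coater.thickness_um",
}

def canonical_name(raw_tag: str) -> str:
    """Translate a controller-specific tag into the canonical namespace."""
    for pattern, target in ALIASES.items():
        if re.match(pattern, raw_tag):
            return re.sub(pattern, target, raw_tag)
    raise KeyError(f"no naming rule for tag {raw_tag!r}")

print(canonical_name("OvenTemp2"))   # -> line1.oven.temp_zone2_degC
print(canonical_name("DB10.DBW24")) # -> line1.press.raw_word_24
```

Performing this translation once, at the ingestion boundary, keeps all downstream storage, visualization, and knowledge-modeling components free of controller-specific tag formats.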

9.4. Key Contributions

  • Integrated Data Acquisition Across Heterogeneous Protocols: By enabling direct communication over OPC UA, Modbus TCP, Profibus, and native connections, the architecture unifies both legacy and modern infrastructures, creating a more agile data flow and paving the way for streamlined knowledge modeling.
  • Scalable Edge-Cloud Architecture: The combination of on-site edge gateways (for low-latency analytics) and cloud or database-based repositories (for long-term storage) ensures near-real-time anomaly detection while retaining large-scale historical data for in-depth process optimization.
  • Continuous Knowledge Modeling (KAVA): By linking industrial signals to a flexible knowledge model, our system merges automated data processing with domain expertise. This supports advanced reasoning—e.g., trend detection, anomaly classification, and rule-based decision-making—in real time.
  • Practical Validation in an Industrial Setting: Field trials under varied load conditions confirm the ability to handle frequent signal updates (sampling rates up to 100 Hz) and data volumes exceeding 10–15 MB/s, underscoring the approach’s practical feasibility.

9.5. Implications for Industry 4.0

The findings emphasize the centrality of interoperable, real-time data architectures within Industry 4.0 contexts. By facilitating native protocol integration, manufacturers can prolong the service life of existing production lines while incrementally introducing advanced digital technologies. Additionally, the architecture’s capability to dynamically scale data processing—from lightweight filtering at the edge to extensive analytics in the cloud—positions organizations to adapt to fluctuating production loads, seasonal variations, and evolving regulatory demands.
A further implication is the potential for cross-company collaboration. As modern supply chains increasingly span multiple facilities and organizational boundaries, a secure, standardized data exchange framework can accelerate joint process optimization, innovation, and transparency across stakeholders.

9.6. Future Directions

Looking ahead, several key areas warrant further exploration:
  • Machine Learning for Predictive Maintenance: Incorporating online or streaming ML algorithms (e.g., incremental random forests) could facilitate proactive fault detection and adaptive maintenance strategies; a minimal streaming-learning sketch follows this list.
  • Edge-Focused Computations at Higher Sampling Rates: Investigating frequencies above 100 Hz may be necessary for processes with extremely transient dynamics, where real-time classification or anomaly filtering is best handled directly at the edge.
  • Hybrid Multi-Criteria Optimization: Extending the comparison of methods such as composite desirability versus TOPSIS can refine decision-making under multiple, potentially conflicting objectives (e.g., maximizing data fidelity versus minimizing computational overhead).
  • Enhanced Semantic Modeling: Future work could explore ontology-based reasoning and semantic web technologies (RDF, SPARQL) for deeper knowledge extraction and broader enterprise-wide data reuse.
  • Integration of Digital Twins: Real-time data from the proposed architecture can serve as a foundation for digital twin platforms, enabling virtual commissioning, “what-if” scenario analyses, and closed-loop feedback for continuous improvement.
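As announced in the first item above, the following sketch approximates the streaming setting with scikit-learn's partial_fit interface; an incremental linear model stands in for the incremental random forests mentioned, and the data and fault rule are synthetic.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Streaming stand-in: partial_fit updates the model batch by batch,
# approximating the online-learning setting described above.
model = SGDClassifier(loss="log_loss", random_state=0)  # scikit-learn >= 1.1
classes = np.array([0, 1])  # 0 = healthy, 1 = fault precursor

rng = np.random.default_rng(0)
for _ in range(100):  # e.g., one mini-batch per acquisition cycle
    X = rng.normal(size=(32, 4))  # 4 signal features per time window
    y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)  # synthetic fault rule
    model.partial_fit(X, y, classes=classes)

print(model.predict(rng.normal(size=(3, 4))))
```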
Overall, the flexible and modular nature of the proposed system setup ensures adaptability to rapid technological advances and evolving industrial requirements. By combining robust data acquisition, scalable analytics, and domain-focused knowledge modeling, manufacturers can not only optimize today’s processes, but can also proactively prepare for tomorrow’s industrial challenges.

9.7. Potential Sources of Error and Recommended Countermeasures

Although the system performed reliably under most test conditions, certain discrepancies between expected and observed performance measures (e.g., latency spikes or transient data loss) were detected. The following error sources proved most critical:
  • Network Bottlenecks: High traffic peaks on the local network occasionally caused packet collisions, leading to short-term increases in latency.
  • PLC Scan Cycles: The scan time of older PLCs (particularly the S7-200) can vary depending on the active program logic, causing fluctuations in data arrival intervals.
  • Sensor Calibration: The coating thickness device can exhibit measurement drift over time if not recalibrated, yielding slight deviations from the nominal values stored in the knowledge model.
  • Synchronization Delays: When multiple data streams (Profibus, IBH Link S7++, OPC UA) merge, time stamp alignment can become inconsistent unless carefully managed via a unified clock source or synchronization protocol.
Improvement Measures:
  • Quality-of-Service Optimization: Reserving a dedicated bandwidth segment for real-time data transmission and enabling priority queuing can minimize collisions under peak loads.
  • Regular Calibration Cycles: Automating sensor calibration—especially for the coating measurement device—helps maintain data accuracy over long operating periods.
  • Time Stamp Normalization: Introducing a master clock reference (e.g., via NTP) ensures that all incoming data streams are properly synchronized, reducing the risk of misaligned signals in the post-processing phase.
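A minimal pandas sketch of such time stamp alignment after clock normalization is shown below; the timestamps, tolerance, and signal names are illustrative, not recorded plant data.

```python
import pandas as pd

# Two streams with slightly offset clocks after NTP normalization; align the
# slower OPC UA stream to the nearest earlier Profibus sample.
profibus = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 10:00:00.000", "2025-01-01 10:00:00.020",
                          "2025-01-01 10:00:00.040"]),
    "temp": [401.2, 401.4, 401.1],
})
opcua = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 10:00:00.013", "2025-01-01 10:00:00.047"]),
    "thickness": [118.9, 119.2],
})

merged = pd.merge_asof(opcua, profibus, on="ts",
                       tolerance=pd.Timedelta("25ms"), direction="backward")
print(merged)
```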
In many cases, simply reducing the sampling rates (e.g., from 50 Hz to 30 Hz) in low-variance process phases helped avoid network saturation. Over time, a data-driven adaptive sampling mechanism could automate this trade-off between resolution and throughput, further improving robustness and accuracy.
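A data-driven adaptive sampling mechanism of the kind envisioned could start from a simple variance heuristic, as in the following sketch; the rates mirror the 50 Hz/30 Hz example above, while the threshold and window length are assumptions.

```python
from collections import deque
import statistics

class AdaptiveSampler:
    """Lower the requested sampling rate in low-variance phases and restore
    it when the signal becomes active again (thresholds are illustrative)."""

    def __init__(self, high_hz=50.0, low_hz=30.0, var_threshold=0.5, window=100):
        self.low_hz, self.high_hz = low_hz, high_hz
        self.var_threshold = var_threshold
        self.window = deque(maxlen=window)

    def update(self, sample: float) -> float:
        """Ingest one sample and return the rate to request next."""
        self.window.append(sample)
        if len(self.window) < 2:
            return self.high_hz
        calm = statistics.variance(self.window) < self.var_threshold
        return self.low_hz if calm else self.high_hz

sampler = AdaptiveSampler()
for v in [400.0, 400.1, 400.0, 407.5, 413.0]:
    print(f"value={v:6.1f} -> request {sampler.update(v):.0f} Hz")
```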

Author Contributions

Conceptualization, E.R.; methodology, E.R.; software, E.R.; validation, E.R., M.W. and T.M.; formal analysis, E.R.; investigation, E.R.; resources, E.R.; data curation, E.R.; writing—original draft preparation, E.R.; writing—review and editing, M.W. and T.M.; visualization, E.R.; supervision, M.W. and T.M.; project administration, E.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Austrian Federal Ministry of Labour and Economy, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to industrial confidentiality agreements.

Acknowledgments

The financial support by the Austrian Federal Ministry of Labour and Economy, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Drath, R.; Horch, A. Industrie 4.0: Hit or Hype? [Industry Forum]. IEEE Ind. Electron. Mag. 2014, 8, 56–58. [Google Scholar] [CrossRef]
  2. Gundlach, C.S.; Fay, A. Industrie 4.0 mit dem “Digitalen Zwilling” gestalten—Eine methodische Unterstützung bei der Auswahl der Anwendungen. Ind. 4.0 Manag. 2020, 36, 7–10. [Google Scholar] [CrossRef]
  3. Tinz, J.; Tinz, P.; Zander, S. Wissensmanagementmodelle für die Industrie 4.0: Eine Gegenüberstellung aktueller Ansätze. Z. Wirtsch. Fabr. 2019, 114, 404–407. [Google Scholar] [CrossRef]
  4. North, K.; Maier, R. Wissen 4.0—Wissensmanagement im digitalen Wandel. HMD Prax. Wirtsch. 2018, 55, 665–681. [Google Scholar] [CrossRef]
  5. Deuse, J.; Klinkenberg, R.; West, N. (Eds.) Industrielle Datenanalyse: Entwicklung einer Datenanalyse-Plattform für die Wertschaffende, Kompetenzorientierte Kollaboration in Dynamischen Wertschöpfungsnetzwerken; Springer Fachmedien: Wiesbaden, Germany, 2024. [Google Scholar] [CrossRef]
  6. Gräßler, I.; Oleff, C. Systems Engineering: Verstehen und Industriell Umsetzen; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  7. Oztemel, E.; Gursev, S. Literature review of Industry 4.0 and related technologies. J. Intell. Manuf. 2020, 31, 127–182. [Google Scholar] [CrossRef]
  8. Sony, M.; Naik, S. Critical factors for the successful implementation of Industry 4.0: A review and future research direction. Prod. Plan. Control 2020, 31, 799–815. [Google Scholar] [CrossRef]
  9. Prause, M.; Weigand, J. Industry 4.0 and Object-Oriented Development: Incremental and Architectural Change. J. Technol. Manag. Innov. 2016, 11, 104–110. [Google Scholar] [CrossRef]
  10. Giustozzi, F.; Saunier, J.; Zanni-Merk, C. A semantic framework for condition monitoring in Industry 4.0 based on evolving knowledge bases. Semant. Web 2023, 15, 583–611. [Google Scholar] [CrossRef]
  11. Palmeira, J.; Coelho, G.; Carvalho, A.; Carvalhal, P.; Cardoso, P. Migrating legacy production lines into an Industry 4.0 ecosystem. In Proceedings of the 2022 IEEE 20th International Conference on Industrial Informatics (INDIN), Perth, Australia, 25–28 July 2022; pp. 429–434. [Google Scholar] [CrossRef]
  12. Wang, S.; Wan, J.; Li, D.; Liu, C. Knowledge Reasoning with Semantic Data for Real-Time Data Processing in Smart Factory. Sensors 2018, 18, 471. [Google Scholar] [CrossRef]
  13. Nagy, J.; Oláh, J.; Erdei, E.; Máté, D.; Popp, J. The Role and Impact of Industry 4.0 and the Internet of Things on the Business Strategy of the Value Chain—The Case of Hungary. Sustainability 2018, 10, 3491. [Google Scholar] [CrossRef]
  14. Piccarozzi, M.; Aquilani, B.; Gatti, C. Industry 4.0 in Management Studies: A Systematic Literature Review. Sustainability 2018, 10, 3821. [Google Scholar] [CrossRef]
  15. Federico, P.; Wagner, M.; Rind, A.; Amor-Amorós, A.; Miksch, S.; Aigner, W. The Role of Explicit Knowledge: A Conceptual Model of Knowledge-Assisted Visual Analytics. In Proceedings of the 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), Phoenix, AZ, USA, 3–6 October 2017. [Google Scholar]
  16. Stoiber, C.; Wagner, M.; Ceneda, D.; Pohl, M.; Gschwandtner, T.; Miksch, S.; Streit, M.; Girardi, D.; Aigner, W. Knowledge-assisted Visual Analytics meets Guidance and Onboarding. In Proceedings of the IEEE Application Spotlight, Vancouver, BC, Canada, 20–25 October 2019. [Google Scholar]
  17. Andrienko, N.; Lammarsch, T.; Andrienko, G.; Fuchs, G.; Keim, D.; Miksch, S.; Rind, A. Viewing Visual Analytics as Model Building. Comput. Graph. Forum 2018, 37, 275–299. [Google Scholar] [CrossRef]
  18. Wagner, M.; Slijepcevic, D.; Horsak, B.; Rind, A.; Zeppelzauer, M.; Aigner, W. KAVAGait: Knowledge-Assisted Visual Analytics for Clinical Gait Analysis. IEEE Trans. Vis. Comput. Graph. 2019, 25, 1528–1542. [Google Scholar] [CrossRef] [PubMed]
  19. Rind, A.; Slijepcevic, D.; Zeppelzauer, M.; Unglaube, F.; Kranzl, A.; Horsak, B. Trustworthy Visual Analytics in Clinical Gait Analysis: A Case Study for Patients with Cerebral Palsy. In Proceedings of the 2022 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX), Oklahoma City, OK, USA, 16 October 2022; pp. 8–15. [Google Scholar] [CrossRef]
  20. Roth, A. (Ed.) Einführung und Umsetzung von Industrie 4.0; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar] [CrossRef]
  21. Andelfinger, V.P.; Hänisch, T. (Eds.) Industrie 4.0; Springer Fachmedien: Wiesbaden, Germany, 2017. [Google Scholar] [CrossRef]
  22. Wang, J.; Zhang, W.; Shi, Y.; Duan, S. Industrial Big Data Analytics: Challenges, Methodologies, and Applications. arXiv 2018, arXiv:1807.01016. [Google Scholar]
  23. SAP. Was Ist Industrie 4.0? SAP: Walldorf, Germany, 2023. [Google Scholar]
  24. Weber, C.; Wieland, M.; Reimann, P. Konzepte zur Datenverarbeitung in Referenzarchitekturen für Industrie 4.0: Konsequenzen bei der Umsetzung einer IT-Architektur. Datenbank-Spektrum 2018, 18, 39–50. [Google Scholar] [CrossRef]
  25. Melzer, B. Reference Architectural Model Industrie 4.0 (RAMI 4.0). 2018. Available online: https://www.plattform-i40.de/IP/Redaktion/EN/Downloads/Publikation/rami40-an-introduction.html (accessed on 28 April 2025).
  26. Young, D.T.T. The Industrial Internet Reference Architecture. 2022. Available online: https://www.engineering.com/iic-releases-industrial-internet-reference-architecture-v1-10/ (accessed on 28 April 2025).
  27. Kuhn, T. Digitaler Zwilling. Informatik-Spektrum 2017, 40, 440–444. [Google Scholar] [CrossRef]
  28. Geuer, L.; Ulber, R. Digitale Zwillinge in der naturwissenschaftlichen Bildung: Konstruktivistische Perspektive. Medien. Z. Theor. Prax. Medien. 2024, Occasional Papers, 69–94. [Google Scholar] [CrossRef]
  29. Follath, A.; Bross, F.; Galka, S. Vorgehensmodell zur Erstellung Digitaler Zwillinge für Produktion und Logistik. Z. Wirtsch. Fabr. 2022, 117, 691–696. [Google Scholar] [CrossRef]
  30. Mandic, Z.; Stankovski, S.; Ostojic, G.; Popovic, B. Potential of Edge Computing PLCs in Industrial Automation. In Proceedings of the 2022 21st International Symposium INFOTEH-JAHORINA (INFOTEH), East Sarajevo, Bosnia and Herzegovina, 16–18 March 2022; pp. 1–5. [Google Scholar] [CrossRef]
  31. Wu, Y.; Dai, H.N.; Wang, H. Convergence of Blockchain and Edge Computing for Secure and Scalable IIoT Critical Infrastructures in Industry 4.0. IEEE Internet Things J. 2021, 8, 2300–2317. [Google Scholar] [CrossRef]
  32. Gui, A.; Fernando, Y.; Shaharudin, M.S.; Mokhtar, M.; Karmawan, I.G.M.; Suryanto. Drivers of Cloud Computing Adoption in Small Medium Enterprises of Indonesia Creative Industry. JOIV Int. J. Inform. Vis. 2021, 5, 69–75. [Google Scholar] [CrossRef]
  33. Kyratzi, S.; Azariadis, P. Cloud Computing as a Platform for Design-Oriented Applications. In Proceedings of the 24th Pan-Hellenic Conference on Informatics, Athens, Greece, 20–22 November 2020; pp. 226–228. [Google Scholar] [CrossRef]
  34. Saif, Y.; Yusof, Y.; Rus, A.Z.M.; Ghaleb, A.M.; Mejjaouli, S.; Al-Alimi, S.; Didane, D.H.; Latif, K.; Abdul Kadir, A.Z.; Alshalabi, H.; et al. Implementing circularity measurements in industry 4.0-based manufacturing metrology using MQTT protocol and Open CV: A case study. PLoS ONE 2023, 18, e0292814. [Google Scholar] [CrossRef] [PubMed]
  35. Aknin, R.; Bentaleb, Y. Enhanced MQTT Architecture for Smart Supply Chain. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 861–869. [Google Scholar] [CrossRef]
  36. Balduino Lopes, G.; Fernandes, R.F., Jr. A remote MQTT-based data monitoring system for energy efficiency in industrial environments. VETOR—Rev. Ciênc. Exatas Eng. 2021, 31, 25–35. [Google Scholar] [CrossRef]
  37. Leitner, S.H.; Mahnke, W. OPC UA—Service-Oriented Architecture for Industrial Applications. 2006. Available online: https://dl.gi.de/items/2139da16-3041-40a3-957a-8ca600bf4c23 (accessed on 28 April 2025).
  38. Trifonov, H.; Heffernan, D. OPC UA TSN: A next-generation network for Industry 4.0 and IIoT. Int. J. Pervasive Comput. Commun. 2023, 19, 386–411. [Google Scholar] [CrossRef]
  39. Vijayakumar, K. Concurrent Engineering: Research and Applications (CERA)—An international journal: Special issue on “Data Analytics in Industrial Internet of Things (IIoT)”. Concurr. Eng. 2021, 29, 82–83. [Google Scholar] [CrossRef]
  40. Yan, H.; Wan, J.; Zhang, C.; Tang, S.; Hua, Q.; Wang, Z. Industrial Big Data Analytics for Prediction of Remaining Useful Life Based on Deep Learning. IEEE Access 2018, 6, 17190–17197. [Google Scholar] [CrossRef]
  41. Lade, P.; Ghosh, R.; Srinivasan, S. Manufacturing Analytics and Industrial Internet of Things. IEEE Intell. Syst. 2017, 32, 74–79. [Google Scholar] [CrossRef]
  42. Matthiesen, S.; Paetzold-Byhain, K.; Wartzack, S. Virtuelle Inbetriebnahme mit dem digitalen Zwilling. Konstruktion 2023, 75, 54–57. [Google Scholar] [CrossRef]
  43. Cramer, S.; Huber, M.; Knott, A.L.; Schmitt, R.H. Wertschöpfung in Industrie 4.0: Virtuelle 100%-Prüfung durch Predictive Quality. Z. Wirtsch. Fabr. 2023, 118, 344–349. [Google Scholar] [CrossRef]
  44. Laroque, C.; Löffler, C.; Scholl, W.; Schneider, G. Einsatzmöglichkeiten der Rückwärtssimulation zur Produktionsplanung in der Halbleiterfertigung. In Proceedings ASIM SST 2020; ARGESIM Publisher: Vienna, Austria, 2020; pp. 397–401. [Google Scholar] [CrossRef]
  45. Santos, R.C.; Martinho, J.L. An Industry 4.0 maturity model proposal. J. Manuf. Technol. Manag. 2019, 31, 1023–1043. [Google Scholar] [CrossRef]
  46. Felippes, B.; Da Silva, I.; Barbalho, S.; Adam, T.; Heine, I.; Schmitt, R. 3D-CUBE readiness model for industry 4.0: Technological, organizational, and process maturity enablers. Prod. Manuf. Res. 2022, 10, 875–937. [Google Scholar] [CrossRef]
  47. Altan Koyuncu, C.; Aydemir, E.; Başarır, A.C. Selection Industry 4.0 maturity model using fuzzy and intuitionistic fuzzy TOPSIS methods for a solar cell manufacturing company. Soft Comput. 2021, 25, 10335–10349. [Google Scholar] [CrossRef]
  48. Ünlü, H.; Demirörs, O.; Garousi, V. Readiness and maturity models for Industry 4.0: A systematic literature review. J. Softw. Evol. Process. 2023, 36, e2641. [Google Scholar] [CrossRef]
  49. Hästbacka, D.; Jaatinen, A.; Hoikka, H.; Halme, J.; Larrañaga, M.; More, R.; Mesiä, H.; Björkbom, M.; Barna, L.; Pettinen, H.; et al. Dynamic and Flexible Data Acquisition and Data Analytics System Software Architecture. In Proceedings of the 2019 IEEE SENSORS, Montreal, QC, Canada, 27–30 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  50. Martínez, P.; Dintén, R.; Drake, J.; Zorrilla, M.E. A big data-centric architecture metamodel for Industry 4.0. Future Gener. Comput. Syst. 2021, 125, 263–284. [Google Scholar] [CrossRef]
  51. LeClair, A.; Jaskolka, J.; MacCaull, W.; Khédri, R. Architecture for ontology-supported multi-context reasoning systems. Data Knowl. Eng. 2022, 140, 102044. [Google Scholar] [CrossRef]
  52. Rossit, D.; Tohmé, F. Knowledge representation in Industry 4.0 scheduling problems. Int. J. Comput. Integr. Manuf. 2022, 35, 1172–1187. [Google Scholar] [CrossRef]
  53. Havard, V.; Sahnoun, M.; Bettayeb, B.; Duval, F.; Baudry, D. Data architecture and model design for Industry 4.0 components integration in cyber-physical production systems. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2020, 235, 2338–2349. [Google Scholar] [CrossRef]
  54. Trunzer, E.; Calá, A.; Leitão, P.; Gepp, M.; Kinghorst, J.; Lüder, A.; Schauerte, H.; Reifferscheid, M.; Vogel-Heuser, B. System architectures for Industrie 4.0 applications. Prod. Eng. 2019, 13, 247–257. [Google Scholar] [CrossRef]
  55. Martikkala, A.; Wiikinkoski, O.; Asadi, R.; Queguineur, A.; Ylä-Autio, A.; Flores Ituarte, I. Industrial IoT system for laser-wire direct energy deposition: Data collection and visualization of manufacturing process signals. IOP Conf. Ser. Mater. Sci. Eng. 2023, 1296, 012006. [Google Scholar] [CrossRef]
  56. Ho, M.H.; Yen, H.C.; Lai, M.Y.; Liu, Y.T. Implementation of DDS Cloud Platform for Real-time Data Acquisition of Sensors. In Proceedings of the 2021 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Hualien City, Taiwan, 16–19 November 2021; pp. 1–2. [Google Scholar] [CrossRef]
  57. Bosi, F.; Corradi, A.; Foschini, L.; Monti, S.; Patera, L.; Poli, L.; Solimando, M. Cloud-enabled Smart Data Collection in Shop Floor Environments for Industry 4.0. In Proceedings of the 2019 15th IEEE International Workshop on Factory Communication Systems (WFCS), Sundsvall, Sweden, 27–29 May 2019; pp. 1–8. [Google Scholar] [CrossRef]
  58. Pinheiro, J.; Pinto, R.; Gonçalves, G.; Ribeiro, A. Lean 4.0: A Digital Twin approach for automated cycle time collection and Yamazumi analysis. In Proceedings of the 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Tenerife, Canary Islands, Spain, 19–21 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
  59. Oliveira, M.; Afonso, D. Industry Focused in Data Collection: How Industry 4.0 is Handled by Big Data. In Proceedings of the 2019 2nd International Conference on Data Science and Information Technology, Seoul, Republic of Korea, 19–21 July 2019; pp. 12–18. [Google Scholar] [CrossRef]
  60. Gao, Z.; Cao, J.; Wang, W.; Zhang, H.; Xu, Z. Online-Semisupervised Neural Anomaly Detector to Identify MQTT-Based Attacks in Real Time. Secur. Commun. Netw. 2021, 2021, 4587862. [Google Scholar] [CrossRef]
  61. An-dong, S.; Fang, Z. Research on Open Source Solutions of Data Collection for Industrial Internet of Things. In Proceedings of the 2021 7th International Symposium on Mechatronics and Industrial Informatics (ISMII), Zhuhai, China, 22–24 January 2021; pp. 180–183. [Google Scholar] [CrossRef]
  62. Yu, W.; Liang, F.; He, X.; Hatcher, W.G.; Lu, C.; Lin, J.; Yang, X. A Survey on the Edge Computing for the Internet of Things. IEEE Access 2018, 6, 6900–6919. [Google Scholar] [CrossRef]
  63. Rocha, M.S.; Sestito, G.S.; Dias, A.L.; Turcato, A.C.; Brandão, D.; Ferrari, P. On the performance of OPC UA and MQTT for data exchange between industrial plants and cloud servers. Acta IMEKO 2019, 8, 80. [Google Scholar] [CrossRef]
  64. Peniak, P.; Holečko, P.; Bubeníková, E.; Kanáliková, A. LoRaWAN Sensors Integration for Manufacturing Applications via Edge Device Model with OPC UA. In Proceedings of the 2023 International Conference on Applied Electronics (AE), Pilsen, Czech Republic, 6–7 September 2023; pp. 1–6. [Google Scholar] [CrossRef]
  65. Brecko, A.; Burda, F.; Papcun, P.; Kajati, E. Applicability of OPC UA and REST in Edge Computing. In Proceedings of the 2022 IEEE 20th Jubilee World Symposium on Applied Machine Intelligence and Informatics (SAMI), Poprad, Slovakia, 2–5 March 2022; pp. 255–260. [Google Scholar] [CrossRef]
  66. Ehrlich, M.; Wisniewski, L.; Trsek, H.; Jasperneite, J. Modelling and automatic mapping of cyber security requirements for industrial applications: Survey, problem exposition, and research focus. In Proceedings of the 2018 14th IEEE International Workshop on Factory Communication Systems (WFCS), Imperia, Italy, 13–15 June 2018; pp. 1–9. [Google Scholar] [CrossRef]
  67. Ng, T.C.; Ghobakhloo, M. Energy sustainability and industry 4.0. IOP Conf. Ser. Earth Environ. Sci. 2020, 463, 012090. [Google Scholar] [CrossRef]
  68. Souza, F.F.D.; Corsi, A.; Pagani, R.N.; Balbinotti, G.; Kovaleski, J.L. Total quality management 4.0: Adapting quality management to Industry 4.0. TQM J. 2022, 34, 749–769. [Google Scholar] [CrossRef]
  69. Prifti, L.; Knigge, M.; Kienegger, H.; Krcmar, H. A Competency Model for “Industrie 4.0” Employees. 2017. Available online: https://aisel.aisnet.org/wi2017/track01/paper/4/ (accessed on 28 April 2025).
  70. Gupta, A.; Kr Singh, R.; Kamble, S.; Mishra, R. Knowledge management in industry 4.0 environment for sustainable competitive advantage: A strategic framework. Knowl. Manag. Res. Pract. 2022, 20, 878–892. [Google Scholar] [CrossRef]
  71. Ahmed baha Eddine, A.; Silva, C.; Ferreira, L. Transforming for Sustainability: Total Quality Management and Industry 4.0 Integration with a Dynamic Capability View. In Proceedings of the International Conference on Industrial Engineering and Operations Management, Lisbon, Portugal, 18–20 July 2023. [Google Scholar] [CrossRef]
  72. Bakhtari, A.R.; Waris, M.M.; Sanin, C.; Szczerbicki, E. Evaluating Industry 4.0 Implementation Challenges Using Interpretive Structural Modeling and Fuzzy Analytic Hierarchy Process. Cybern. Syst. 2021, 52, 350–378. [Google Scholar] [CrossRef]
  73. Sedlmair, M.; Meyer, M.; Munzner, T. Design study methodology: Reflections from the trenches and the stacks. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2431–2440. [Google Scholar] [CrossRef]
  74. Koch, C. Data Integration Against Multiple Evolving Autonomous Schemata. Master’s Thesis, Institut für Medizinische Kybernetik und Artificial Intelligence, Universität Wien, Vienna, Austria, 2001. [Google Scholar]
  75. Sodiya, E.O.; Umoga, U.J.; Obaigbena, A.; Jacks, B.S.; Ugwuanyi, E.D.; Daraojimba, A.I.; Lottu, O.A. Current state and prospects of edge computing within the Internet of Things (IoT) ecosystem. Int. J. Sci. Res. Arch. 2024, 11, 1863–1873. [Google Scholar] [CrossRef]
  76. Lu, S.; Lu, J.; An, K.; Wang, X.; He, Q. Edge Computing on IoT for Machine Signal Processing and Fault Diagnosis: A Review. IEEE Internet Things J. 2023, 10, 11093–11116. [Google Scholar] [CrossRef]
  77. Krupitzer, C.; Müller, S.; Lesch, V.; Züfle, M.; Edinger, J.; Lemken, A.; Schäfer, D.; Kounev, S.; Becker, C. A Survey on Human Machine Interaction in Industry 4.0. arXiv 2020, arXiv:2002.01025. [Google Scholar]
  78. Villani, V.; Sabattini, L.; Zanelli, G.; Callegati, E.; Bezzi, B.; Baranska, P.; Mockallo, Z.; Zolnierczyk-Zreda, D.; Czerniak, J.N.; Nitsch, V.; et al. A User Study for the Evaluation of Adaptive Interaction Systems for Inclusive Industrial Workplaces. IEEE Trans. Autom. Sci. Eng. 2022, 19, 3300–3310. [Google Scholar] [CrossRef]
  79. Reinhart, G. (Ed.) Handbuch Industrie 4.0: Geschäftsmodelle, Prozesse, Technik; Hanser: München, Germany, 2017. [Google Scholar]
  80. Qaisi, H.A.; Quba, G.Y.; Althunibat, A.; Abdallah, A.; Alzu’bi, S. An Intelligent Prototype for Requirements Validation Process Using Machine Learning Algorithms. In Proceedings of the 2021 International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021; pp. 870–875. [Google Scholar] [CrossRef]
  81. Anjum, R.; Azam, F.; Anwar, M.W.; Amjad, A. A Meta-Model to Automatically Generate Evolutionary Prototypes from Software Requirements. In Proceedings of the 2019 7th International Conference on Computer and Communications Management, Bangkok, Thailand, 27–29 July 2019; pp. 131–136. [Google Scholar] [CrossRef]
  82. Martelli, C. A Point of View on New Education for Smart Citizenship. Future Internet 2017, 9, 4. [Google Scholar] [CrossRef]
  83. IBA AG. 2024. Available online: https://www.iba-ag.com/en/security/iba-2024-03 (accessed on 28 April 2025).
  84. Werner, F.; Woitsch, R. Data Processing in Industrie 4.0: Data Analysis and Knowledge Management in Industrie 4.0. Datenbank-Spektrum 2018, 18, 15–25. [Google Scholar] [CrossRef]
  85. Cobb, C.; Sudar, S.; Reiter, N.; Anderson, R.; Roesner, F.; Kohno, T. Computer Security for Data Collection Technologies. In Proceedings of the Eighth International Conference on Information and Communication Technologies and Development, Ann Arbor, MI, USA, 3–6 June 2016; pp. 1–11. [Google Scholar] [CrossRef]
  86. McDonald, A.; Leyhane, T. Drill down with root cause analysis. Nurs. Manag. 2005, 36, 26–31; quiz 31–32. [Google Scholar]
  87. Horvat, D.; Som, O. Wettbewerbsvorteile durch informationsbasierten Wissensvorsprung. In Industrie 4.0 für die Praxis; Wagner, R.M., Ed.; Springer Fachmedien: Wiesbaden, Germany, 2018; pp. 185–200. [Google Scholar] [CrossRef]
  88. WKO. EU-Datenschutz-Grundverordnung (DSGVO): Grundsätze und Rechtmäßigkeit der Verarbeitung. 2024. Available online: https://www.wko.at/datenschutz/eu-dsgvo-grundsaetze-verarbeitung (accessed on 28 April 2025).
Figure 1. Stages of the industrial revolution, illustrating the historical development up to Industry 4.0 (figure by Edmund Radlbauer).
Figure 2. Horizontal/vertical integration: how horizontal and vertical integration in Industry 4.0 enables a seamless flow of information between different production stages and cross-company processes [21] (figure by Edmund Radlbauer).
Figure 3. Operation of the MQTT (Message Queuing Telemetry Transport) protocol: data are exchanged between MQTT clients and the MQTT broker, with the broker serving as a central node that receives messages from publishers and forwards them to the appropriate subscribers [36] (figure by Edmund Radlbauer).
Figure 4. Knowledge staircase 4.0, based on the idea of [4] (figure by Edmund Radlbauer).
Figure 5. System design (figure by Edmund Radlbauer).
Figure 6. Schematic overview of the experimental setup (figure by Edmund Radlbauer).
Figure 7. Graphical representation of latency (figure by Edmund Radlbauer).
Table 1. Measured latencies per data source and communication path.

Data Source      Protocol/Path               Sampling Rate   Latency (ms)
S7-400           Profibus (1.5 Mbit/s)       50 Hz           5.2 ± 0.8
S7-200           IBH Link S7++ (Ethernet)    20 Hz           10.3 ± 1.5
Coating Sensor   OPC UA                      10 Hz           15.6 ± 2.1