Article

Human-Supervised CPS-Based Optimization of Insulation Material Production: An Industrial Case Study

1 Faculty of Mechanical Engineering, University of Novo Mesto, Na Loko 2, 8000 Novo Mesto, Slovenia
2 Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(10), 4730; https://doi.org/10.3390/app16104730
Submission received: 10 April 2026 / Revised: 30 April 2026 / Accepted: 5 May 2026 / Published: 10 May 2026
(This article belongs to the Special Issue Cyber-Physical Systems for Smart Manufacturing)

Featured Application

The proposed human-supervised CPS-oriented framework is directly applicable to insulation-material plants and comparable process industries in which bottlenecks in loading, setup, internal logistics, reliability, and energy use can be addressed through cyber-supported monitoring, feedback-informed decision-making, and selective automation.

Abstract

Insulation-material manufacturers face increasing pressure to improve productivity, cost efficiency, energy performance and worker safety while maintaining stable quality in highly constrained production environments. Existing lean and smart-manufacturing studies often examine isolated tools, individual monitoring technologies or material-level sustainability, but fewer studies provide conservative plant-level validation of an integrated intervention in insulation-material production. This study therefore examines the optimization of insulation-material production in a human-supervised cyber–physical manufacturing system through an industrial before–after intervention. The framework combines bottleneck identification, value stream mapping, SMED, selective automation, preventive maintenance and KPI-based digital monitoring. The baseline system was constrained by manual crusher loading, long changeovers, inefficient pallet transport, repeated breakdowns, scrap and limited real-time visibility. After implementation, productivity increased from 7864 to 9000 kg/day (+14.5%), monthly production costs decreased from EUR 200,000 to EUR 180,000 (−10%), breakdown frequency fell from 5 to 3 events/month (−40%), scrap decreased from 5% to 3% (−40%), crusher loading time fell from 30 to 10 min/pallet (−66%), annual energy use dropped from 500 to 450 MWh (−10%) and reported safety incidents decreased to zero during the 12-month post-implementation observation period. An OEE-based surrogate model yielded pre- and post-state theoretical capacity estimates differing by less than 1%, supporting internal consistency. The results are interpreted as descriptive and practically meaningful before–after differences because the full raw monthly dataset is commercially sensitive and classical inferential testing was not performed. 
The study contributes by presenting a reproducible, conservative and human-supervised CPS-oriented plant-intervention protocol rather than by claiming a fully autonomous closed-loop CPS.

1. Introduction

Cyber–physical systems (CPSs) in manufacturing connect physical production processes with digital data acquisition, monitoring, analysis, and feedback mechanisms, enabling improved process visibility, faster response to deviations, and more informed operational decision-making [1,2,3,4,5]. In this study, the term CPS is used in the manufacturing sense of a cyber–physical manufacturing system (CPMS), in which shop-floor operations, material flow, maintenance activities, and performance indicators are linked to a cyber layer that supports monitoring and feedback-based optimization. Following the NIST view that CPSs involve interacting digital, physical, and human components engineered through integrated logic and physics [1,2], the present work deliberately positions the implemented system as a human-supervised CPS rather than as a fully autonomous closed-loop controller.
This distinction is important because insulation-material production is shaped by interacting physical and organizational constraints. Operational performance depends not only on machine efficiency and throughput, but also on reliable material handling, changeover stability, internal logistics, energy consumption, defect reduction and worker safety. Manual loading, inefficient transport, frequent stoppages and delayed reaction to deviations can significantly reduce competitiveness and increase production costs. At the same time, insulation manufacturing is increasingly exposed to sustainability and resource-efficiency expectations [6,7,8,9,10,11,12,13,14,15,16,17,18], as well as to smart-manufacturing, digital-monitoring and Industry 4.0 expectations [19,20,21,22,23,24,25,26,27,28,29,30,31,32].
The baseline production system examined in this study was constrained by manual crusher loading, long setup and changeover activities, inefficient pallet transport, frequent breakdowns, material scrap and limited performance visibility. These constraints jointly reduced productivity, increased production costs, and weakened process reliability. The case therefore represented a suitable industrial setting for examining how a conservative CPS-oriented improvement framework could support practical optimization in a real insulation-material manufacturing environment.
Figure 1 summarizes the logic of the industrial case by linking baseline bottlenecks, physical interventions, cyber-layer support and validated plant-level outcomes before the detailed methodological description.
Four literature streams are relevant. First, lean manufacturing and sustainability studies show that lean, Six Sigma, and sustainability are increasingly treated as integrated systems rather than as independent managerial approaches [6,7,8,9,25,26,33]. Second, SMED and setup-reduction research confirms that changeover improvement is most effective when combined with standardization, organizational clarity, and structured implementation [34,35,36,37]. Third, digital monitoring, predictive maintenance, digital-twin and IIoT research position manufacturing as a data-rich environment in which sensing, feedback, diagnosis, scheduling, and intervention are increasingly coupled [19,20,21,22,23,24,27,28,29,30,31,32]. Fourth, insulation-material and mineral wool studies address circularity, reuse and recycling, but often focus on material pathways rather than factory-level operational optimization [10,11,12,13,14,15,16,17,18].
The artificial intelligence literature also indicates a pathway for future development. Recent work on super-resolution reconstruction, graph-based transfer learning for rotating machinery fault diagnosis, fire-door defect inspection, and semi-supervised AHU fault detection illustrates how AI can support advanced diagnosis, inspection, and predictive maintenance [38,39,40,41]. However, these AI approaches were not implemented in the present plant. They are cited to position future extensions and to clarify that the present contribution is an industrial CPS-oriented monitoring and improvement framework, not an AI diagnostic algorithm.
Despite the breadth of the literature, four gaps remain relevant. First, many lean-sustainability studies are conceptual, review-based, or broad framework integrations rather than tightly bounded industrial interventions with explicit before-and-after KPI validation. Second, many smart-manufacturing studies validate only one dominant mechanism at a time, such as SMED, bottleneck detection, energy monitoring, digital-twin monitoring, or predictive maintenance, rather than a single plant-level intervention that combines lean methods, selective automation, maintenance, and cyber-supported KPI monitoring. Third, the insulation-material literature is stronger in circularity and product-system sustainability than in factory-level production optimization. Fourth, few studies report a unified KPI set spanning productivity, cost, reliability, scrap, energy, and safety in insulation-material production within a CPS-oriented environment.
The novelty of this work is therefore not the proposal of a new autonomous CPS algorithm. Rather, it lies in the conservative industrial validation of a human-supervised CPS-oriented diagnosis–intervene–monitor–feedback framework in a real insulation-material production line, using a coherent KPI architecture and an OEE-based consistency check.
The study addresses the following research question: How, and to what extent, can a human-supervised CPS that combines bottleneck analysis, lean methods, selective automation, preventive maintenance, and KPI monitoring improve productivity, cost efficiency, reliability, quality, energy performance, and worker safety in insulation-material production?
The working hypotheses are:
H1: The framework reduces non-value-added time in critical operations;
H2: The framework improves production output and cost efficiency;
H3: The framework improves operational stability and quality;
H4: The framework improves sustainability-related performance.
In this study, the CPS layer is considered an enabling architecture that supports all four hypotheses through enhanced visibility, improved feedback quality, and greater intervention effectiveness. Table 1 positions the present manuscript against the representative CPPS, lean, digital-monitoring, AI and industrial case study literature.

2. Materials and Methods

The study was designed as a single-site industrial intervention case study with a before–after evaluation framework implemented in a human-supervised cyber–physical manufacturing system. The methodological logic combines process diagnostics, lean intervention, selective automation, and KPI-based validation in a real insulation-material production environment. Because the intervention was implemented on a single operating production system, a fully controlled experimental design with replicated treatment and control lines was not feasible. The analysis therefore focuses on absolute and relative changes in key performance indicators before and after the intervention, triangulated across multiple operational data sources.
The physical layer comprised material preparation, crusher loading, setup/changeover operations, pallet transport, machine operation, maintenance activities, and downstream production flow. The cyber layer comprised MES-supported production data collection, smart monitoring of machine and energy performance, KPI visualization, and feedback routines connecting measured plant performance to operational and managerial responses. The architecture did not represent a fully autonomous closed-loop control system; it functioned as a human-supervised CPS environment in which digital visibility supported adaptive decision-making, faster response to deviations and iterative process optimization. Table 2 expands the production-line setting and explains the confidentiality boundary that limits disclosure of vendor-specific details.
The as-is data showed a crusher loading time of 30 min/pallet, monthly production costs of EUR 200,000, breakdown frequency of 5 breakdowns/month, downtime/time losses of 10 h/month, daily productivity of 7864 kg/day, scrap of 5%, cycle time of 45 min/unit, and an average waiting time of 10 min between operations. Worker movement of approximately 5 km/shift also indicated suboptimal layout and internal material flow. These baseline values were used to identify the dominant bottlenecks and to prioritize the intervention package.
The baseline diagnosis used a combination of time studies, direct process observation, spaghetti diagrams, value stream mapping (VSM), 5 Whys analysis, fishbone analysis, MES-supported operational records, and a review of existing plant KPIs. Spaghetti diagrams were used to visualize worker and material movement. Time studies and production records quantified critical operation durations. VSM represented material and information flows and identified waiting times, transport losses, and cycle inefficiencies. Root-cause techniques were then applied to distinguish dominant losses from downstream symptoms.
Figure 2 presents the PDCA-based improvement workflow used to move from baseline diagnosis to monitored post-implementation stabilization.
The intervention followed a PDCA-based human-supervised CPS-oriented improvement logic and can be summarized in seven stages: (1) baseline diagnosis of the production system, (2) identification and prioritization of bottlenecks, (3) selection of lean and technical interventions, (4) phased implementation, (5) post-implementation KPI monitoring, (6) before–after comparison, and (7) corrective action and continuous improvement. Within this framework, lean methods and selective automation acted primarily on the physical layer, whereas KPI monitoring, digital data acquisition, and supervisory feedback acted on the cyber layer. The interaction of these layers enabled repeated adjustment of interventions based on observed production system behavior.
Figure 3 summarizes the root-cause structure that guided intervention prioritization.
Table 3 clarifies the implementation sequence and separates the main intervention blocks, responding to the concern that multiple measures were introduced together.
The measures were intentionally implemented as a phased package rather than as isolated experiments. This design improves industrial feasibility but limits the ability to attribute each final KPI change to a single measure. The strongest directly attributable local effects were crusher-loading automation for loading time, SMED-oriented measures for setup/changeover time and transport automation for pallet movement. Broader effects on output, cost, downtime, scrap, energy and safety are interpreted as system-level outcomes of the combined intervention.
Table 4 defines the KPI set used to evaluate the intervention and links each metric to its evidence source and hypothesis role.
Data were collected at daily, monthly and annual aggregation levels using repeated time studies, production records, maintenance logs, MES-supported operational data, and smart monitoring of plant performance and energy use. The resulting cyber layer provided ongoing visibility into process behavior and enabled feedback-informed intervention decisions. The CPS’s role was therefore expressed through improved visibility, faster response, and sustained KPI improvement rather than through an independent cyber performance metric.
The evidence base combined repeated operation-level time studies, daily production logs, monthly maintenance and cost records, quality records, annual energy records, and consolidated plant KPI tables. Operation-level bottleneck indicators were retained when supported by repeated time studies and implementation validation. Plant-level indicators were retained when consistent with consolidated production, maintenance, quality, cost, and energy records. No statistical outlier filtering such as a 3σ or IQR rule was applied to undisclosed raw series; instead, inconsistent implementation-stage values were excluded and the final conservative KPI set was used as the source of truth.
Figure 4 provides a detailed CPS/IT-OT data-flow architecture for the implemented human-supervised manufacturing framework. It separates physical/OT signals, edge and plant-interface functions, cyber-layer aggregation and visualization, supervisory rules, and human-supervised feedback so that the physical-to-cyber and cyber-to-physical information paths are explicit.
Physical/OT data from crusher loading, setup/changeover, pallet conveyance, machine operation, and quality/safety logs flow upward via common industrial communication protocol classes (OPC UA, Modbus TCP and MQTT) through an acquisition and IT/OT integration layer (PLC/SCADA/MES signal capture, data validation, and aggregation buffer) to the cyber/IT data and analytics layer. There, KPIs (OEE, cost, and energy) are computed using a time-series historian and MES database, with analytics services and an API layer. A dashboard and rules layer provides visualization through the plant-standard dashboard interface, a threshold engine (OEE, downtime, or quality), and an alert queue with an audit log.
Human-supervised feedback—alarms, work orders, scheduling priorities, and PDCA/CAPA actions—flows back to operators, supervisors, and maintenance staff under human authority. A threshold update loop supports continuous improvement. The architecture expresses a plant-level diagnose–intervene–monitor–feedback logic and explicitly operates as a supervisory CPS; no autonomous high-speed closed-loop control is implemented, and native OT safety interfaces remain unchanged.
Industrial communication between physical/OT equipment and the cyber layer relied on common industrial communication protocol classes, including OPC UA, Modbus TCP, and MQTT. Time-series process data were stored in a dedicated process historian, while aggregated KPI data and MES records were maintained in a relational database. KPI dashboards were implemented using a plant-standard business intelligence interface, providing shift/daily visual indicators without exposing proprietary vendor identities or software versions.
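The historian-to-relational aggregation path described above can be illustrated with a minimal sketch. The schema, metric names, and shift granularity below are illustrative assumptions, not the plant's actual (confidential) configuration; an in-memory SQLite database stands in for the relational KPI store.

```python
import sqlite3

# Hypothetical sketch: collapse historian-style time-series samples into
# one KPI row per shift/metric, mirroring the historian -> KPI-database
# path. Table and metric names are assumptions for illustration only.

def build_kpi_store() -> sqlite3.Connection:
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE samples (shift TEXT, metric TEXT, value REAL)")
    con.execute("CREATE TABLE kpi (shift TEXT, metric TEXT, avg_value REAL)")
    return con

def aggregate_shift_kpis(con: sqlite3.Connection) -> None:
    # Aggregate to the shift/daily granularity exposed on the dashboards.
    con.execute(
        "INSERT INTO kpi "
        "SELECT shift, metric, AVG(value) FROM samples GROUP BY shift, metric"
    )

con = build_kpi_store()
con.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [("A", "oee", 0.74), ("A", "oee", 0.76), ("A", "energy_kwh", 120.0)],
)
aggregate_shift_kpis(con)
kpi_rows = {(s, m): v for s, m, v in con.execute("SELECT * FROM kpi")}
```

The same grouping logic would apply regardless of the actual historian or relational product used, which is why vendor identities can remain anonymized without affecting reproducibility of the architecture.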
Table 5 expands the earlier conceptual CPS table by adding technical specification classes, data sources, sampling/aggregation level, latency interpretation, and human authority boundaries.
Vendor-specific manufacturer names, sensor models, server identifiers, software versions, and proprietary plant-interface details are anonymized under the industrial confidentiality boundary. For reproducibility, the manuscript reports the functional IT/OT architecture, data-source categories, aggregation levels, and decision logic. The system should not be interpreted as a sub-second autonomous control system; rather, near-real-time is used in the operational supervisory context of minutes-to-shift visibility for deviations and KPI updates. No version-dependent software routines were used in the reported calculations.
To clarify how the cyber layer operated in the implemented human-supervised CPS environment, the KPI monitoring and feedback routine is shown as a graphical algorithmic flowchart in Figure 5. The figure illustrates the data pathway from physical/OT signals to cyber-layer KPI computation and from KPI deviations back to supervisory action. It is intentionally presented as a human-supervised decision-support procedure rather than as a fully autonomous closed-loop controller, because the implemented system generated dashboard updates, alerts, work-order/CAPA actions, and PDCA feedback, while final operational authority remained with operators, supervisors, and maintenance personnel. The figure provides the algorithmic representation of the KPI monitoring and supervisory feedback logic and clarifies the pathway from KPI deviation to human-supervised corrective action.
The flowchart describes the complete monitoring cycle: data acquisition from the physical/OT layer, validation, and KPI computation in the cyber layer, threshold-based deviation detection with safety-critical escalation, human-supervised assignment and execution of corrective actions, persistence checking, and PDCA/CAPA escalation. The loop repeats after the defined monitoring interval and preserves human authority for all operational decisions.
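The threshold-based deviation detection step of this cycle can be sketched as follows. The metric names, threshold values, and the assignment of scrap to the safety-critical escalation class are illustrative assumptions; the routine only classifies deviations, while corrective action remains with human personnel, as in the implemented system.

```python
from dataclasses import dataclass

# Hedged sketch of the deviation-detection step in the monitoring cycle.
# Thresholds and escalation classes below are assumptions for illustration.

@dataclass
class Reading:
    metric: str
    value: float

THRESHOLDS = {"oee": 0.70, "scrap_pct": 4.0}   # illustrative limits
SAFETY_CRITICAL = {"scrap_pct"}                # assumed escalation class

def classify(reading: Reading) -> str:
    """Return 'ok', 'deviation' (queued for human-supervised review),
    or 'escalate' (immediate supervisor notification). The routine never
    executes corrective action itself; humans retain authority."""
    limit = THRESHOLDS.get(reading.metric)
    if limit is None:
        return "ok"
    # OEE deviates downward; loss-type metrics deviate upward.
    breached = (reading.value < limit) if reading.metric == "oee" \
        else (reading.value > limit)
    if not breached:
        return "ok"
    return "escalate" if reading.metric in SAFETY_CRITICAL else "deviation"
```

In the plant, the equivalent logic ran in the dashboard/rules layer, feeding the alert queue and audit log rather than returning strings.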
Table 6 links the research question and hypotheses to the validation strategy used in Section 3.
To support the formal evaluation of these hypotheses, the OEE-based model and the algorithmic feedback logic introduced earlier in Section 2 use a set of mathematical symbols and variables. For clarity, all symbols are listed in Table 7.
To formalize the production logic and provide an internal consistency check between plant KPIs, the study introduced an OEE-based surrogate model following established OEE formulations in the manufacturing literature [42,43]. The model was not used to replace the plant KPI system or to claim high-frequency autonomous control; it was used to verify whether reported productivity changes were numerically coherent with measured OEE values.
Q = Q_cap × OEE = Q_cap × A × P × Y   (1)
OEE = A × P × Y   (2)
c = C/Q_a   (3)
e = E/Q_a   (4)
Q_cap = Q/OEE   (5)
S_x = [(f(x + Δx) − f(x))/f(x)] / [Δx/x]   (6)
In Equations (1) and (2), A represents availability, P is performance and Y is quality yield. Availability was anchored in validated downtime/availability records, while quality yield was anchored in the reported scrap rate. The performance factor was treated as the implied residual OEE component required to reconcile realized output with theoretical line capacity under stabilized operating conditions. Output-normalized cost and energy indicators were derived from annual plant records by normalizing annual cost and annual energy use by annual output.
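Equations (1)–(6) can be expressed directly in code. The sketch below mirrors the symbols defined above (A, P, Y, Q, Q_cap, C, E, Q_a); it is a transcription of the surrogate model, not the plant's KPI software.

```python
# Transcription of the OEE-based surrogate model, Equations (1)-(6).
# Symbols follow the text: A availability, P performance, Y quality
# yield, Q realized output, Q_cap theoretical capacity, C/E annual
# cost/energy, Q_a annual output.

def oee(a: float, p: float, y: float) -> float:
    return a * p * y                          # Eq. (2)

def output(q_cap: float, a: float, p: float, y: float) -> float:
    return q_cap * oee(a, p, y)               # Eq. (1)

def unit_cost(c_annual: float, q_annual: float) -> float:
    return c_annual / q_annual                # Eq. (3)

def unit_energy(e_annual: float, q_annual: float) -> float:
    return e_annual / q_annual                # Eq. (4)

def implied_capacity(q: float, oee_value: float) -> float:
    return q / oee_value                      # Eq. (5)

def sensitivity(f, x: float, dx: float) -> float:
    # Eq. (6): relative change in f per relative change in x.
    return ((f(x + dx) - f(x)) / f(x)) / (dx / x)
```

For a linear model such as f(x) = 2x, Eq. (6) returns a sensitivity of 1, which is a convenient sanity check on the implementation.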
Table 8 clarifies aggregation levels and statistical interpretation, avoiding any implication that descriptive industrial data were analyzed as a fully controlled replicated experiment.
Because the submitted manuscript package contains consolidated plant-level KPI values rather than the full raw monthly dataset, the results are interpreted as descriptive and practically meaningful before–after differences. Classical inferential tests, confidence intervals, coefficient-of-variation values and time-series diagnostics were therefore not reported, because the raw monthly observations required for such analyses were not available for disclosure. The study reports aggregated values and triangulates time studies, MES records, maintenance logs and energy-monitoring outputs; no paired t-tests, repeated-measures ANOVA, Wilcoxon signed-rank tests, Durbin–Watson tests, time-series decomposition or CV calculations were performed on unpublished raw monthly observations.
Additional evidence-source mapping, CAPEX assumptions and calculation notes are provided in Appendix A.

3. Results

The results retain only annual consolidated KPI tables and the most internally consistent operation-level measurements as the source of truth. More optimistic target-state or workshop-stage values reported during implementation were not used as headline results. This conservative approach is important in a CPS-oriented manuscript because the credibility of the cyber–physical claim depends on alignment between digital performance visibility, recorded KPI values and validated plant-level outcomes.
The first result block addresses H1. Repeated time studies and post-implementation analysis showed substantial reductions in all three major bottleneck operations. Crusher loading time decreased from 30 min/pallet to 10 min/pallet (−66%), validated setup/changeover time decreased from 30 min to 15 min (−50%) and pallet transport time decreased from 10 min to 4 min (−60%). Cycle time decreased from 45 min/unit to 30 min/unit. These results indicate that the intervention shortened the most time-intensive non-value-adding activities identified in the baseline stage. Figure 6 visualizes the validated local time reductions that support H1.
The second result block addresses H2. Daily productivity increased from 7864 kg/day to 9000 kg/day (+14.5%). Annual production increased from 2,872,360 kg to 3,285,000 kg, corresponding to an additional 412,640 kg/year. Monthly production costs decreased from EUR 200,000 to EUR 180,000 (−10%), and annual production costs decreased from EUR 2,400,000 to EUR 2,160,000, representing annual savings of EUR 240,000. These outcomes show that the intervention improved both throughput and cost efficiency. Figure 7 summarizes the main validated plant-level changes used as headline results.
The third result block addresses H3. Breakdown frequency decreased from 5 to 3 events/month (−40%), equivalent to a decrease from 60 to 36 breakdowns/year. Sustained unplanned downtime decreased from 10 h/month to 5 h/month after PDCA stabilization. Scrap decreased from 5% to 3% (−40%). The combination of lower breakdowns, lower downtime, higher availability, and lower scrap indicates improved structural stability rather than a simple increase in production speed.
The fourth result block addresses H4. Annual energy consumption decreased from 500 MWh/year to 450 MWh/year (−10%), and energy consumption per production unit decreased from 30 kWh/unit to 25 kWh/unit. Reported safety incidents decreased from 3/month to 0/month during the 12-month post-implementation observation period, consistent with reduced manual handling and automation of loading and transport tasks. Figure 8 visualizes the sustainability- and safety-related outcomes that support H4.
Table 9 consolidates the before–after KPI results used as the main evidence base of the manuscript.
Using Equation (5), the back-calculated theoretical capacity was 10,485 kg/day in the pre-intervention state and 10,588 kg/day in the post-intervention state. The relative discrepancy between the two estimates was only 0.98%, indicating good internal consistency between observed productivity values and independently reported OEE values. The implied performance factor remained approximately stable at 92–93%, suggesting that the main improvement mechanism was the combined gain in availability and quality yield rather than an overstatement of nominal running speed. Table 10 summarizes the OEE decomposition, model-based consistency check, and post-state sensitivity scenarios.
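The reported consistency result can be reproduced numerically from Eq. (5). The pre/post OEE values of 0.75 and 0.85 used below are implied by the reported outputs and back-calculated capacities (Q / Q_cap) and are stated here as an assumption of this sketch, since the underlying OEE components are not disclosed.

```python
# Numerical check of the reported capacity-consistency result, Eq. (5).
# OEE values are implied from the reported figures (assumption).

q_pre, q_post = 7864.0, 9000.0      # kg/day, reported productivity
oee_pre, oee_post = 0.75, 0.85      # implied from reported capacities

cap_pre = q_pre / oee_pre           # back-calculated theoretical capacity
cap_post = q_post / oee_post
discrepancy = abs(cap_post - cap_pre) / cap_pre   # relative discrepancy
```

Under these assumptions the two capacity estimates land at roughly 10,485 and 10,588 kg/day, with a relative discrepancy just under 1%, matching the figure quoted in the text.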

4. Discussion

The results support all four hypothesis groups. Non-value-added time was reduced in the most critical operations, throughput increased, production costs declined, breakdown frequency and scrap fell, and energy and safety indicators improved in parallel. This pattern suggests that the intervention did not merely shift losses from one part of the system to another; instead, the production system became faster, more stable, less wasteful and safer.
A key explanatory point is that the intervention was built around measured bottlenecks rather than technology-first implementation. Crusher loading, setup/changeover, pallet transport, breakdowns, and scrap were identified as dominant sources of loss before the capital and organizational measures were selected. The CPS layer did not replace this logic; it strengthened it by improving performance visibility, deviation response, and PDCA-based decision quality.
Compared with SMED-centered studies, the present case reports a setup improvement but extends the evidence base to loading, logistics, breakdowns, scrap, energy, and safety. Compared with energy-monitoring and digital-twin studies, it is less technologically ambitious but stronger as a conservative before–after industrial KPI validation. Compared with insulation circularity studies, its contribution lies at the factory-operation level rather than at the product-system or recycling-pathway levels.
The CPS claim is deliberately limited. The implemented architecture should be interpreted as a human-supervised CPS, not as a fully autonomous digital twin, device twin or closed-loop controller in the strict control-engineering sense. The cyber layer primarily provided data acquisition, aggregation, visualization, deviation classification and feedback support, while final intervention authority remained with operators, supervisors and maintenance personnel. This position aligns with CPS frameworks that include digital, physical and human components [1,2,3,4,5], and it clearly distinguishes the present case from more digitally ambitious architectures that implement autonomous scheduling, high-fidelity model synchronization or device-twin control [30,32].
The intervention also has managerial implications. First, the most effective improvements targeted measured bottlenecks rather than fashionable technologies. Second, a mixed package combining lean methods, selective automation, maintenance and KPI monitoring appears more robust than isolated tool deployment. Third, improvement projects should use KPI architectures that include throughput, cost, reliability, quality, energy and safety rather than only output and cost [6,7,8,9,25,26,33,44].
Table 11 extends the economic discussion by adding a conservative sensitivity check for hidden implementation costs such as software, dashboard refinement, training and maintenance.
For transfer to comparable process industries, the case suggests the following sequence: quantify physical bottlenecks using time studies, VSM and maintenance records; define a conservative KPI architecture before intervention; prioritize actions that jointly reduce waiting, manual handling and unplanned stoppages; connect the affected operations to a cyber layer for KPI visibility; validate the intervention with a consolidated pre/post evidence set; and use model-based consistency checks to test whether throughput, OEE, cost and energy figures remain coherent. The same logic could support maintenance and inspection-related applications, for example, by linking defect-classification methods for fire-door inspection or semi-supervised fault detection for air-handling units to future CPS monitoring and CAPA workflows [40,41].
Several limitations should be acknowledged. First, the study is based on a single industrial case, which limits statistical generalizability and supports analytical transferability rather than broad statistical inference. Second, the current manuscript package contains consolidated plant-level observations rather than the full raw monthly dataset; therefore, the before–after differences are interpreted primarily as descriptive and practically meaningful, not as statistically tested effects. Third, the cyber layer was evaluated primarily through its contribution to plant-level performance improvement rather than dedicated cyber–physical metrics such as latency, event-detection accuracy, model fidelity or automated response quality. Fourth, a full discounted lifecycle appraisal, including software, training, maintenance and cybersecurity costs, was outside the scope of the present study. Fifth, AI-assisted predictive maintenance and defect-classification modules were discussed as future extensions but were not implemented.

5. Conclusions

This paper presented a data-driven industrial case study of insulation-material production optimization within a human-supervised cyber–physical manufacturing system. The intervention combined bottleneck analysis, lean methods, selective automation, preventive maintenance, and KPI monitoring to address losses related to crusher loading, setup/changeovers, pallet transport, breakdowns, scrap, and energy use. Across the validated KPI set, the plant improved simultaneously in operational, economic, sustainability, and worker-safety dimensions.
The most important validated outcomes were the crusher loading reduction from 30 to 10 min/pallet, validated setup/changeover reduction from 30 to 15 min, pallet transport reduction from 10 to 4 min, productivity increase from 7864 to 9000 kg/day (+14.5%), monthly production cost reduction from EUR 200,000 to EUR 180,000 (−10%), breakdown-frequency reduction from 5 to 3 events/month (−40%), scrap reduction from 5% to 3% (−40%), annual energy reduction from 500 to 450 MWh (−10%) and reported safety incidents falling to zero during the 12-month post-implementation observation period.
The main contribution lies in demonstrating that insulation-material production can be optimized when lean interventions, selective automation and KPI-based monitoring are integrated through a feedback-supported, human-supervised CPS architecture. The study does not present a CPS as an abstract digital label, a fully autonomous closed-loop controller or a full digital twin; it presents a practically implemented manufacturing environment in which cyber and physical components jointly supported measurable improvement while final corrective action authority remained with humans.
In addition to the intervention framework itself, the paper contributes a compact OEE-based model that cross-checks throughput consistency and reveals stronger unit-level gains in cost and energy efficiency than are visible from the absolute before–after totals alone.
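This consistency check can be reproduced from the published figures alone. The following minimal Python sketch (illustrative only, not the plant's actual tooling) back-calculates the theoretical daily capacity implied by each state from Q_cap = Q/OEE and confirms that the two estimates differ by less than 1%:

```python
# Back-calculate the theoretical daily line capacity Q_cap implied by the
# reported output Q and OEE of each state (Q = Q_cap * OEE), then compare
# the two estimates. Values are the validated plant figures from Table 9.

def implied_capacity(output_kg_day: float, oee: float) -> float:
    """Theoretical daily capacity implied by observed output and OEE."""
    return output_kg_day / oee

q_cap_pre = implied_capacity(7864, 0.75)   # pre-state, ~10,485 kg/day
q_cap_post = implied_capacity(9000, 0.85)  # post-state, ~10,588 kg/day

# Relative discrepancy between the two independent capacity estimates
discrepancy = abs(q_cap_post - q_cap_pre) / q_cap_pre
print(f"pre: {q_cap_pre:.0f} kg/day, post: {q_cap_post:.0f} kg/day, "
      f"discrepancy: {discrepancy:.2%}")
```

Because both states imply nearly the same theoretical capacity, the observed throughput gain is attributable to the OEE improvement rather than to an unmodeled change in installed capacity.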
Future work should extend the framework through longer observation windows, multi-site replication, fuller inferential statistics based on shareable monthly data, explicit lifecycle cost appraisal, dedicated cyber–physical performance metrics, AI-assisted predictive maintenance modules and more rigorous carbon-accounting methodology.

Author Contributions

Conceptualization, L.R. and D.I.; methodology and formal analysis, L.R., E.H., M.P. and D.I.; investigation, L.R. and E.H.; resources, L.R., E.H. and M.P.; data curation, L.R. and D.I.; writing—original draft preparation, L.R.; writing—review and editing, E.H., M.P. and D.I.; visualization, L.R. and D.I.; supervision, D.I.; funding acquisition, E.H. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union—NextGenerationEU—IMOST—uniri-iz-25-287. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official stance of the European Union or the European Commission. Neither the European Union nor the European Commission can be held accountable for them.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CAPA: Corrective and preventive action
CAPEX: Capital expenditure
CO2: Carbon dioxide
CPMS: Cyber–physical manufacturing system
CPS: Cyber–physical system
IIoT: Industrial Internet of Things
IoT: Internet of Things
IT/OT: Information technology/operational technology
KPI: Key performance indicator
MES: Manufacturing execution system
OEE: Overall equipment effectiveness
PDCA: Plan–do–check–act
RQ: Research question
SCADA: Supervisory control and data acquisition
SMED: Single-minute exchange of die
VSM: Value stream mapping

Appendix A

Table A1 maps the main headline results to the evidence sources used in the manuscript.
Table A1. Evidence-source mapping for the consolidated results.
Reported Result | Pre | Post | Evidence Source
Crusher loading time | 30 min/pallet | 10 min/pallet | Time study + implementation validation
Setup/changeover time | 30 min | 15 min | Time study + implementation records
Pallet transport time | 10 min | 4 min | Spaghetti/VSM + time study
Productivity | 7864 kg/day | 9000 kg/day | Production logs
Monthly production cost | EUR 200,000 | EUR 180,000 | Plant cost records
Breakdown frequency | 5/month | 3/month | Maintenance logs
Scrap rate | 5% | 3% | Quality reports
Annual energy consumption | 500 MWh/year | 450 MWh/year | Energy monitoring
Table A2 summarizes the reported capital expenditure and payback assumptions used in the economic discussion.
Table A2. CAPEX and payback assumptions for the intervention package.
Intervention Block | CAPEX | Payback | Notes
SMED/changeover package | EUR 18,000–28,000 | 6–10 months | Range depends on implementation stage
Automatic crusher loading | EUR 20,000 | Projected 12 months | Loading time and safety influence
Automatic pallet transport | EUR 15,000 | Reported 9 months | Transport automation
Digital inventory/labeling support | EUR 10,000 | Reported 8 months for inventory-monitoring component | Traceability and visibility support
Overall equipment package | EUR 73,000 | Average amortization reported as 9 months | Installed in several phases

Calculation Procedure for Derived Indicators

OEE. The study uses OEE as a composite indicator of plant performance: OEE = Availability × Performance × Quality.
Availability is the proportion of planned time during which the equipment is operational.
Performance is the actual production speed relative to the target or nominal rate.
Quality is the proportion of good output relative to total output.
Energy per unit. Energy consumption per unit = Total annual energy consumption/annual production output.
Cost per unit. Cost per production unit = Total production cost/production output.
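For reproducibility, the derived indicators above can be recomputed directly from the consolidated post-implementation values reported in Tables 9 and 10. The Python sketch below is illustrative only; all numbers are the published plant figures:

```python
# Recompute the derived indicators from the consolidated post-implementation
# figures: OEE product, energy per unit and cost per unit.

availability = 0.95   # machine availability (post)
performance = 0.923   # implied performance factor (post, Table 10)
quality = 0.97        # quality yield implied by the 3% scrap rate

oee = availability * performance * quality            # ~0.85

annual_energy_kwh = 450 * 1000                        # 450 MWh/year
annual_output_kg = 3_285_000                          # kg/year
annual_cost_eur = 2_160_000                           # EUR/year

energy_per_kg = annual_energy_kwh / annual_output_kg  # ~0.137 kWh/kg
cost_per_kg = annual_cost_eur / annual_output_kg      # ~0.658 EUR/kg

print(f"OEE = {oee:.3f}, e = {energy_per_kg:.3f} kWh/kg, "
      f"c = {cost_per_kg:.3f} EUR/kg")
```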

References

1. Griffor, E.R.; Greer, C.; Wollman, D.A.; Burns, M.J. Framework for Cyber-Physical Systems: Volume 1, Overview; NIST Special Publication 1500-201; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2017.
2. Greer, C.; Burns, M.; Wollman, D.; Griffor, E. Cyber-Physical Systems and Internet of Things; NIST Special Publication 1900-202; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2019.
3. Hozdić, E.; Kozjek, D.; Butala, P. A cyber-physical approach to the management and control of manufacturing systems. Stroj. Vestn. J. Mech. Eng. 2020, 66, 61–70.
4. Monostori, L.; Kádár, B.; Bauernhansl, T.; Kondoh, S.; Kumara, S.; Reinhart, G.; Sauer, O.; Schuh, G.; Sihn, W.; Ueda, K. Cyber-physical systems in manufacturing. CIRP Ann. Manuf. Technol. 2016, 65, 621–641.
5. Wang, L.; Törngren, M.; Onori, M. Current status and advancement of cyber-physical systems in manufacturing. J. Manuf. Syst. 2015, 37, 517–527.
6. Cherrafi, A.; Elfezazi, S.; Chiarini, A.; Mokhlis, A.; Benhida, K. The integration of lean manufacturing, Six Sigma and sustainability: A literature review and future research directions for developing a specific model. J. Clean. Prod. 2016, 139, 828–846.
7. Caldera, H.T.S.; Desha, C.; Dawes, L. Exploring the role of lean thinking in sustainable business practice: A systematic literature review. J. Clean. Prod. 2017, 167, 1546–1565.
8. May, G.; Stahl, B.; Taisch, M.; Kiritsis, D. Energy management in manufacturing: From literature review to a conceptual framework. J. Clean. Prod. 2017, 167, 1464–1489.
9. Garza-Reyes, J.A.; Kumar, V.; Chaikittisilp, S.; Tan, K.H. The effect of lean methods and tools on the environmental performance of manufacturing organisations. Int. J. Prod. Econ. 2018, 200, 170–180.
10. Wiprächtiger, M.; Haupt, M.; Heeren, N.; Waser, E.; Hellweg, S. A framework for sustainable and circular system design: Development and application on thermal insulation materials. Resour. Conserv. Recycl. 2020, 154, 104631.
11. Bjørnbet, M.M.; Skaar, C.; Fet, A.M.; Schulte, K.Ø. Circular economy in manufacturing companies: A review of case study literature. J. Clean. Prod. 2021, 294, 126268.
12. Zaragoza-Benzal, A.; Ferrández, D.; Santos, P.; Atanes-Sánchez, E. Upcycling EPS waste and mineral wool to produce new lightweight gypsum composites with improved thermal performance. Constr. Build. Mater. 2024, 449, 138464.
13. Doschek-Held, K.; Krammer, A.C.; Steindl, F.R.; Sattler, T.; Juhart, J. Recycling of mineral wool waste as supplementary cementitious material through thermochemical treatment. Waste Manag. Res. 2024, 42, 806–813.
14. Borzova, M.; Gauvin, F.; Schollbach, K. Upcycling waste mineral wool into ambient pressure-dried silica aerogels. ACS Sustain. Chem. Eng. 2025, 13, 2955–2965.
15. Acar, G.; Steeman, M.; Van den Bossche, N. Reusing thermal insulation materials: Reuse potential and durability assessment of stone wool insulation in flat roofs. Sustainability 2024, 16, 1657.
16. Milat, M.; Juradin, S.; Ostojić-Škomrlj, N.; Tešovnik, A. Recycling mineral wool waste: Towards sustainable construction materials. Recycling 2025, 10, 174.
17. Huculak-Mączka, M.; Nieweś, D.; Marecka, K.; Braun-Giwerska, M. Valorization of waste mineral wool and low-rank peat in the fertilizer industry in the context of a resource-efficient circular economy. Sustainability 2025, 17, 7083.
18. Kántor, P.; Béri, J.; Képes, B.; Székely, E. Glass wool recycling by water-based solvolysis. ChemEngineering 2024, 8, 93.
19. Wang, P.; Luo, M. A digital twin-based big data virtual and real fusion learning reference framework supported by industrial internet towards smart manufacturing. J. Manuf. Syst. 2021, 58, 16–32.
20. Huang, C.; Bu, S.; Lee, H.H.; Chan, C.H.; Kong, S.W.; Yung, W.K.C. Prognostics and health management for predictive maintenance: A review. J. Manuf. Syst. 2024, 75, 78–101.
21. Feng, Q.; Zhang, Y.; Sun, B.; Guo, X.; Fan, D.; Ren, Y.; Song, Y.; Wang, Z. Multi-level predictive maintenance of smart manufacturing systems driven by digital twin: A matheuristics approach. J. Manuf. Syst. 2023, 68, 443–454.
22. Shoorkand, H.D.; Nourelfath, M.; Hajji, A. A hybrid deep learning approach to integrate predictive maintenance and production planning for multi-state systems. J. Manuf. Syst. 2024, 74, 397–410.
23. Braghirolli, L.F.; Mendes, L.G.; Engbers, H.; Leohold, S.; Triska, Y.; Flores, M.R.; de Souza, R.O.; Freitag, M.; Frazzon, E.M. Improving production and maintenance planning with meta-learning-based failure prediction. J. Manuf. Syst. 2024, 75, 42–55.
24. Alarcón, M.; Martínez-García, F.M.; de León Hijes, F.C.G. Energy and maintenance management systems in the context of Industry 4.0: Implementation in a real case. Renew. Sustain. Energy Rev. 2021, 142, 110841.
25. Saad, S.M.; Bahadori, R.; Bhovar, C.; Zhang, H. Industry 4.0 and lean manufacturing—A systematic review of the state-of-the-art literature and key recommendations for future research. Int. J. Lean Six Sigma 2024, 15, 997–1024.
26. Kassem, B.; Callupe, M.; Rossi, M.; Rossini, M.; Portioli-Staudacher, A. Lean 4.0: A systematic literature review on the interaction between lean production and Industry 4.0 pillars. J. Manuf. Technol. Manag. 2024, 35, 821–847.
27. Cai, W.; Wang, L.; Li, L.; Xie, J.; Jia, S.; Zhang, X.; Jiang, Z.; Lai, K.-H. A review on methods of energy performance improvement towards sustainable manufacturing from perspectives of energy monitoring, evaluation, optimization and benchmarking. Renew. Sustain. Energy Rev. 2022, 159, 112227.
28. Chen, X.; Li, C.; Tang, Y.; Xiao, Q. An Internet of Things based energy efficiency monitoring and management system for machining workshop. J. Clean. Prod. 2018, 199, 957–968.
29. Rodríguez Aguilar, M.J.; Cardiel, I.A.; Somolinos, J.A.C. IIoT system for intelligent detection of bottleneck in manufacturing lines. Appl. Sci. 2024, 14, 323.
30. Yang, J.; Zheng, Y.; Wu, J.; Wang, Y.; He, J.; Tang, L. Enhancing manufacturing excellence with digital-twin-enabled operational monitoring and intelligent scheduling. Appl. Sci. 2024, 14, 6622.
31. Hanifi, S.; Alkali, B.; Lindsay, G.; McGlinchey, D. Optimizing energy and air consumption in smart manufacturing: An Industrial Internet of Things-based monitoring and efficiency enhancement solution. Appl. Sci. 2025, 15, 3222.
32. Bădoi, C.I.; Kartal Çetin, B.; Çetin, K.; Karataş, Ç.; Özbek, M.E.; Şahin, S. A hierarchical framework leveraging IIoT networks, IoT hub, and device twins for intelligent industrial automation. Appl. Sci. 2026, 16, 645.
33. Koemtzi, M.D.; Psomas, E.; Antony, J.; Tortorella, G.L. Lean manufacturing and human resources: A systematic literature review on future research suggestions. Total Qual. Manag. Bus. Excell. 2023, 34, 468–495.
34. Khakpour, R.; Ebrahimi, A.; Seyed-Hosseini, S. SMED 4.0: A development of Single-Minute Exchange of Die in the era of Industry 4.0 technologies to improve sustainability. J. Manuf. Technol. Manag. 2024, 35, 568–589.
35. Braglia, M.; Di Paco, F.; Marrazzini, L. A new lean tool for efficiency evaluation in SMED projects. Int. J. Adv. Manuf. Technol. 2023, 127, 431–446.
36. Mohammad, A.; Hamja, A.; Hasle, P. Reduction of changeover time through SMED with RACI integration in garment factories. Int. J. Lean Six Sigma 2024, 15, 201–219.
37. Sousa, S.; Silva, M.M.; Gaspar, P.D. Implementation of SMED workshops: A strategic approach in the automotive sector. Appl. Sci. 2025, 15, 8943.
38. Yan, J.K.; Wang, Q.; Cheng, Y.; Su, Z.Y.; Zhang, F.; Zhong, M.L.; Liu, L.; Jin, B.; Zhang, W.H. Optimized single-image super-resolution reconstruction: A multimodal approach based on reversible guidance and cyclical knowledge distillation. Eng. Appl. Artif. Intell. 2024, 133, 108496.
39. Wang, X.; Jiang, H.; Dong, Y.; Mu, M. Spatial-channel collaborative multi-scale graph interaction deep transfer learning for unsupervised rotating machinery fault diagnosis. Eng. Appl. Artif. Intell. 2026, 176, 114691.
40. Wang, S. Graph neural network-driven text classification for fire-door defect inspection in pre-completion construction. Sci. Rep. 2025, 15, 44382.
41. Wang, S. Class-aware temporal and contextual contrastive framework for semi-supervised automated fault detection and diagnosis in air handling units. Energy Build. 2026, 358, 117233.
42. Muchiri, P.; Pintelon, L. Performance measurement using overall equipment effectiveness (OEE): Literature review and practical application discussion. Int. J. Prod. Res. 2008, 46, 3517–3535.
43. Ng Corrales, L.d.C.; Lambán, M.P.; Hernandez Korner, M.E.; Royo, J. Overall equipment effectiveness: Systematic literature review and overview of different approaches. Appl. Sci. 2020, 10, 6469.
44. Małysa, T.; Furman, J.; Pawlak, S.; Šolc, M. Application of selected lean manufacturing tools to improve work safety in the construction industry. Appl. Sci. 2024, 14, 6312.
Figure 1. Conceptual overview of the study: baseline bottlenecks, physical interventions, cyber-layer support, and validated plant-level outcomes.
Figure 2. PDCA-based human-supervised CPS-oriented improvement framework used in the study.
Figure 3. Fishbone analysis of major sources of productivity loss in the baseline state.
Figure 4. Proposed CPS/IT-OT data-flow architecture for the implemented human-supervised manufacturing framework.
Figure 5. Human-supervised CPS monitoring and feedback logic. Arrows indicate the direction of data flow from the physical/OT layer to the cyber layer and the feedback path from detected deviations to human-supervised corrective action; loop arrows indicate repeated monitoring and PDCA/CAPA iteration.
Figure 6. Reductions in critical time-based bottlenecks after the intervention. Bars show validated before-and-after values for the principal time-loss mechanisms in the physical layer.
Figure 7. Validated plant-level performance changes after implementation. Positive values represent improvement in productivity, while negative values represent reductions in cost, breakdowns, scrap and energy use.
Figure 8. Sustainability and worker-safety outcomes following implementation. Plant-level annual energy use, estimated CO2 impact and reported safety incidents are shown as supporting CPS-enabled outcome indicators.
Table 1. The representative literature benchmark and positioning of the present human-supervised CPS-oriented study.
Study | Sector/Context | Main Method(s) | Primary KPI Focus/Reported Effect | Gap Relative to This Study
Hozdić et al. [3] | Manufacturing CPPS | CPPS conceptual and control model | Cyber, physical, and human layers | Conceptual/simulation-oriented CPPS; no insulation-production before-and-after KPI validation
Sousa et al. [37] | Automotive injection/setup workshops | Structured SMED workshops | 38.8% setup time reduction | Setup-centered intervention; no integrated logistics, energy, and safety KPI package
Hanifi et al. [31] | Smart manufacturing/legacy equipment | IIoT energy and air monitoring | Machine-level utility savings | Energy-focused intervention rather than a bounded multi-KPI plant intervention
Yang et al. [30] | Manufacturing operations | Digital-twin-enabled monitoring and scheduling | Improved operational visibility | Digital representation emphasis; no insulation-manufacturing before-and-after KPI package
Wang [40] | Construction inspection | GNN-based defect text classification | Automated fire-door defect categorization | Relevant for future inspection extension, not a factory-level production intervention
Wang [41] | Building AHU systems | Semi-supervised fault detection and diagnosis | AHU AFDD support | Relevant for future predictive maintenance extension, not the present plant implementation
Present study | Insulation-material production | Bottleneck analysis + VSM + SMED + selective automation + maintenance + KPI monitoring | +14.5% output; −10% monthly cost; −40% breakdowns; −40% scrap; −10% energy; 0 safety incidents | Integrated human-supervised CPS case with a conservative before-and-after KPI architecture
Table 2. Case setting, production-line context, and confidentiality boundaries.
Item | Description
Production context | Insulation-material production involving material preparation, crushing/grinding, loading, setup/changeover operations, pallet transport, maintenance, and downstream production flow.
Main product family | Insulation-material products with repeated material handling and conversion steps; product-specific details are anonymized due to commercial sensitivity.
Dominant baseline constraints | Manual crusher loading, long setup/changeover, forklift-based pallet transport, repeated equipment stoppages, material scrap, and limited KPI visibility.
Operational constraints | Single operating plant; no parallel untreated control line; phased implementation required to avoid unacceptable production disruption.
Confidentiality boundary | Vendor names, exact sensor models, plant layout coordinates, and proprietary monthly raw data are anonymized, but functional architecture, KPI definitions, and evidence sources are reported.
Equipment/software disclosure | Manufacturer names, supplier locations, exact sensor models and software versions are anonymized under the industrial confidentiality boundary; the manuscript therefore reports functional equipment classes, data sources and aggregation levels rather than vendor-specific identifiers.
Table 3. Intervention sequence, implemented measures and primary KPI influence.
Phase | Intervention Block | Implemented Action | Primary Influence
1 | Baseline diagnosis | Time studies, direct observation, VSM, spaghetti diagram, 5 Whys/fishbone, review of KPI records | Loss mechanisms quantified before capital actions
2 | SMED/changeover package | Additional jaw sets, quick clamping, lift-assisted handling, standard work | Setup/changeover time; safety; downtime
3 | Crusher loading automation | Automatic crusher/pre-crusher loading support | Crusher loading time; productivity; manual handling risk
4 | Pallet transport automation | Roller conveyor/internal transport automation | Transport time; waiting; operator travel
5 | Maintenance and monitoring | Preventive maintenance, machine-state monitoring, deviation response | Breakdowns; downtime; availability
6 | Digital inventory/labeling support | Labeling support, inventory visibility and KPI dashboard reporting | Order-processing speed; traceability; response time
7 | Training and PDCA stabilization | Operator/supervisor training, follow-up corrective actions, standard-work updates | Sustained KPI stabilization and CAPA closure
Table 4. KPI operational definitions, primary evidence sources, and role in hypothesis testing.
KPI | Operational Definition | Unit | Primary Evidence Source | Role
Crusher loading time | Time required to load the crusher per pallet in repeated time studies | min/pallet | Manual time study; implementation validation | H1
Setup/changeover time | Time required to complete validated wire/net setup-change tasks | min/change | Manual time study; SMED records | H1
Pallet transport time | Internal transfer time between workstations for pallet movement | min/operation | Spaghetti/VSM observation; time study | H1
Productivity | Average daily production output under representative operation | kg/day | Production logs; plant KPI records | H2
Monthly production cost | Total plant production cost per month | EUR/month | Plant cost records; consolidated KPI tables | H2
Breakdown frequency | Unplanned failure events causing production disturbance within one month | events/month | Maintenance and failure records | H3
Sustained unplanned downtime | Cumulative monthly duration of unplanned stoppages after stabilization | h/month | Maintenance records; annual consolidated KPI table | H3
Scrap rate | Defective output relative to total production | % | Quality and production reports | H3
Annual energy consumption | Annual plant energy use across the production system | MWh/year | Smart metering/energy monitoring records | H4
Safety incidents | Recorded work-related incidents reported by the plant | events/month | Safety log/incident records | H4
Table 5. CPS-oriented architecture and IT/OT technical specification classes.
Layer | Components | Data/Interface Type | Sampling/Aggregation Class | Latency/Refresh Interpretation | Main Function
Physical/OT layer | Crusher loading, setup/changeover, pallet transport, machine operation, maintenance activities | PLC/SCADA-compatible equipment signals, operator entries, production/maintenance/quality records | Event, shift, daily and monthly records depending on KPI; no millisecond control-loop claim | Plant-record/dashboard refresh cycle; operational near-real-time means minutes-to-shift visibility, not sub-second control | Executes production and material flow
Cyber layer | MES-supported collection, KPI dashboard, energy monitoring, maintenance logs | MES/work-order records, energy meter records, quality and downtime logs | Daily/monthly KPI aggregation; local event flags for deviations | Dashboard visibility used for supervisory response and PDCA review | Aggregates and visualizes KPI state; computes OEE and derived indicators
Feedback layer | Deviation detection, alarms, CAPA/PDCA, maintenance planning | Threshold comparison and human-confirmed alerts | At each monitoring window W and when critical deviations are reported | Response time recorded operationally; not validated as a cyber metric | Turns deviations into corrective actions
Human layer | Operators, supervisors, maintenance, quality, energy/CAPA owners | Operator validation and authority levels | Continuous during operation; formal review by shift/day/month as relevant | Human authority remains final except existing safety procedures | Interprets, authorizes and closes corrective actions
Table 6. Research questions, hypotheses and validation logic.
Item | Statement | Main Validation Metrics
RQ1 | How, and to what extent, can a human-supervised CPS combining bottleneck analysis, lean methods, selective automation, preventive maintenance and KPI monitoring improve insulation-material production? | Integrated before–after KPI package; OEE consistency check; descriptive interpretation
H1 | The CPS-oriented framework reduces non-value-added time in critical operations. | Crusher loading time; setup/changeover time; pallet transport time; cycle time
H2 | The CPS-oriented framework improves production output and cost efficiency. | Productivity; monthly cost; annual output; unit cost
H3 | The CPS-oriented framework improves operational stability and quality. | Breakdown frequency; sustained downtime; scrap rate; availability/OEE
H4 | The CPS-oriented framework improves sustainability-related performance. | Annual energy consumption; energy per unit; safety incidents
Table 7. Nomenclature and mathematical symbols used in the model and in the graphical algorithmic flowchart.
Symbol | Meaning | Unit/Type
A | Availability component of OEE | dimensionless/%
C | Annual production cost | EUR/year
c | Specific production cost | EUR/kg
E | Annual energy consumption | kWh/year or MWh/year
e | Specific energy consumption | kWh/kg
K_pre | Baseline KPI vector before implementation | data vector
K_t | KPI vector at time t | data vector
L_t | Alarm/deviation level at time t | categorical
N | Number of consecutive windows for persistence checking | count
OEE | Overall Equipment Effectiveness | dimensionless/%
P | Performance component of OEE | dimensionless/%
Q | Daily good output | kg/day
Q_a | Annual output | kg/year
Q_cap | Theoretical daily line capacity | kg/day
R_t | PDCA/CAPA record at time t | record
S_t | Physical/OT data streams and plant records at time t | data vector
W | KPI aggregation window | shift, day, month
Y | Quality yield component of OEE | dimensionless/%
a_t | Corrective action or work order at time t | action record
Δt | Monitoring interval | time
Θ | Plant-specific KPI threshold vector | data vector
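To make this notation concrete, the threshold-and-persistence feedback logic can be sketched as follows. This is an illustrative Python fragment, not the plant's disclosed dashboard code; the KPI names, the scalar thresholds standing in for Θ and the persistence length N = 2 are assumptions for demonstration only:

```python
# Illustrative deviation check: the KPI vector K_t is compared against a
# plant-specific threshold vector (Theta) at each monitoring window W; a
# deviation must persist for N consecutive windows before an alarm L_t is
# raised for human-supervised CAPA review (no automatic actuation).

from collections import deque

def make_monitor(theta: dict, n_windows: int):
    # Rolling breach history per KPI over the last N windows
    history = {kpi: deque(maxlen=n_windows) for kpi in theta}

    def check(k_t: dict) -> dict:
        """Return an alarm level per KPI: 'ok', 'watch' or 'alarm'."""
        levels = {}
        for kpi, limit in theta.items():
            breached = k_t[kpi] > limit
            history[kpi].append(breached)
            if breached and len(history[kpi]) == n_windows and all(history[kpi]):
                levels[kpi] = "alarm"   # persistent deviation -> human review
            elif breached:
                levels[kpi] = "watch"   # transient deviation, keep monitoring
            else:
                levels[kpi] = "ok"
        return levels
    return check

# Hypothetical thresholds for two of the reported KPIs
check = make_monitor({"scrap_pct": 3.0, "downtime_h": 5.0}, n_windows=2)
check({"scrap_pct": 3.4, "downtime_h": 4.0})         # scrap: first breach, 'watch'
print(check({"scrap_pct": 3.5, "downtime_h": 4.2}))  # scrap breach persists -> 'alarm'
```

The persistence rule mirrors the framework's conservative stance: a single noisy window flags a watch state only, and final corrective-action authority stays with the human layer.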
Table 8. Data aggregation, disclosure limitations and statistical interpretation.
Indicator Group | Original Data Basis | Aggregation Used | Outlier/Data-Quality Treatment | CV/Raw-Data Status | Interpretation
Time study indicators | Repeated operational time studies and implementation records | Validated before–after values for critical operations | Obvious recording errors checked by operator/supervisor verification | Raw replicate-level SD/CV not available for disclosure | Descriptive before–after comparison
Production and cost indicators | Monthly reporting windows before and after intervention (n = 12 + 12) and annual consolidated records | Monthly means and annual consolidated totals | Cross-check against consolidated KPI records; no formal 3σ/IQR exclusion reported | Full raw monthly dataset commercially sensitive; CV not computed | Descriptive and practically meaningful difference only
Maintenance and quality indicators | Monthly counts/logs and annual consolidation | Events/month, h/month and scrap % | Reconciled with maintenance/quality logs | SD/CV not disclosed | Descriptive difference supported by triangulation
Energy and safety indicators | Energy records and safety incident logs | Annual energy and monthly safety incident rates | Consistency checks against energy and safety records | Raw time-series unavailable for publication | Descriptive sustainability and safety outcomes
Inferential statistics | Not applicable to the available manuscript package | No paired t-test, ANOVA, Wilcoxon, confidence interval, Durbin–Watson or decomposition reported | Avoid pseudo-replication from aggregated single-case data | Not computed | Results intentionally labeled as descriptive and practically meaningful
Table 9. Consolidated validated pre- and post-implementation results used for the main claims of the manuscript.
KPI | Pre-Implementation | Post-Implementation | Change
Crusher loading time | 30 min/pallet | 10 min/pallet | −66%
Validated setup/changeover time | 30 min | 15 min | −50%
Pallet transport time | 10 min | 4 min | −60%
Productivity | 7864 kg/day | 9000 kg/day | +14.5%
Annual production | 2,872,360 kg | 3,285,000 kg | +412,640 kg
Monthly production costs | EUR 200,000 | EUR 180,000 | −10%
Annual production costs | EUR 2,400,000 | EUR 2,160,000 | −10%
Cost per production unit | EUR 50/unit | EUR 45/unit | −10%
Breakdown frequency | 5/month | 3/month | −40%
Annual breakdowns | 60/year | 36/year | −40%
Sustained unplanned downtime | 10 h/month | 5 h/month | −50%
Scrap rate | 5% | 3% | −40%
Overall Equipment Effectiveness | 75% | 85% | +10 p.p.
Machine availability | 85% | 95% | +10 p.p.
Productivity per hour | 160 units/h | 220 units/h | +60 units/h
Cycle time | 45 min/unit | 30 min/unit | −15 min/unit
Order-processing speed | 5 days | 3 days | −2 days
Energy consumption per unit | 30 kWh/unit | 25 kWh/unit | −5 kWh/unit
Annual energy consumption | 500 MWh/year | 450 MWh/year | −10%
Estimated environmental impact | 1000 t CO2/year | 900 t CO2/year | −10%
Safety incidents | 3/month | 0/month | −100%
Table 10. OEE decomposition, internal consistency check, derived efficiency indicators, and OEE sensitivity scenarios.
Indicator/Scenario | Pre | Post/Predicted | Interpretation
Availability A (%) | 85.0 | 95.0 | Anchored in validated availability records
Quality yield Y (%) | 95.0 | 97.0 | Implied from validated scrap rates
Implied performance factor P (%) | 92.9 | 92.3 | Residual OEE component under plant-level aggregation
Back-calculated theoretical capacity (kg/day) | 10,485 | 10,588 | 0.98% discrepancy across states
Specific production cost (EUR/kg) | 0.836 | 0.658 | −21.3% vs. baseline
Specific energy consumption (kWh/kg) | 0.174 | 0.137 | −21.3% vs. baseline
Predicted output at OEE = 0.80 (kg/day) | – | 8470 | Scenario below validated post-state
Predicted output at OEE = 0.90 (kg/day) | – | 9529 | +5.9% vs. OEE = 0.85 state
Predicted output at OEE = 0.92 (kg/day) | – | 9741 | +8.2% vs. OEE = 0.85 state
Table 11. Economic sensitivity to hidden implementation costs.
Scenario | Adjusted Investment | Assumption | Interpretation
Base equipment package | EUR 73,000 | EUR 240,000/year annual cost reduction | Indicative payback ≈ 3.7 months; reported project amortization ≈ 9 months due to phased implementation
+10% hidden implementation cost | EUR 80,300 | Software, dashboard tuning, training and early maintenance allowance | Still below first-year annual savings
+20% hidden implementation cost | EUR 87,600 | More conservative allowance for software/training/maintenance | Still below first-year annual savings
+30% hidden implementation cost | EUR 94,900 | High allowance for undisclosed lifecycle costs | Still positive under the observed annual savings, but full lifecycle appraisal remains future work
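Under the stated saving assumption of EUR 240,000/year (EUR 20,000/month), the payback relations in this sensitivity analysis reduce to a single division. The short Python sketch below reproduces the scenario arithmetic; the uplift percentages are the table's assumptions, not audited costs:

```python
# Indicative payback for the equipment package under hidden-cost uplifts.
# Annual saving assumption from the economic discussion: EUR 240,000/year.

annual_saving_eur = 240_000
base_capex_eur = 73_000

for uplift in (0.0, 0.10, 0.20, 0.30):
    capex = base_capex_eur * (1 + uplift)            # adjusted investment
    payback_months = capex / (annual_saving_eur / 12)  # base case ~3.7 months
    print(f"+{uplift:.0%} hidden cost: EUR {capex:,.0f}, "
          f"payback ~ {payback_months:.1f} months")
```

Even the +30% scenario stays well under twelve months, which is why the table reports all uplift cases as remaining below first-year savings.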

Share and Cite

MDPI and ACS Style

Rihar, L.; Hozdić, E.; Perinić, M.; Ištoković, D. Human-Supervised CPS-Based Optimization of Insulation Material Production: An Industrial Case Study. Appl. Sci. 2026, 16, 4730. https://doi.org/10.3390/app16104730

