Article

Digital Performance Management: An Evaluation of Manufacturing Performance Management and Measurement Strategies in an Industry 4.0 Context

1 Department of Manufacturing Engineering, Brigham Young University, Provo, UT 84602, USA
2 Northrup Grumman Space Systems, 9160 UT-83, Corinne, UT 84307, USA
3 PTC Inc., 121 Seaport Blvd, Boston, MA 02210, USA
* Author to whom correspondence should be addressed.
Machines 2024, 12(8), 555; https://doi.org/10.3390/machines12080555
Submission received: 29 June 2024 / Revised: 24 July 2024 / Accepted: 25 July 2024 / Published: 14 August 2024
(This article belongs to the Special Issue Smart Manufacturing and Industrial Automation)

Abstract

Manufacturing management and operations place heavy emphasis on monitoring and improving production performance. This supervision is accomplished through strategies of manufacturing performance management, a set of measurements and methods used to monitor production conditions. Over the last 30 years, the most prevalent measurement of traditional performance management has been overall equipment effectiveness, a percentage-based summary metric of a machine's utilization. The technologies encapsulated by Industry 4.0 have expanded the ability to gather, process, and store vast quantities of data, creating the opportunity to innovate on how performance is measured. A new method of managing manufacturing performance utilizing Industry 4.0 technologies has been proposed by McKinsey & Company (New York City, NY, USA), and software tools have been developed by PTC Inc. (Boston, MA, USA) to aid in performing what they both call digital performance management. To evaluate this new approach, the digital performance management tool was deployed on a Festo (Esslingen, Germany) Cyber-Physical Lab (FCPL), an educational mock production environment, and compared to a digitally enabled traditional performance management solution. Results from a multi-day production period displayed an increased level of detail in both the data presented to the user and the insights gained from the digital performance management solution as compared to the traditional approach. The time unit measurements presented by digital performance management paint a clear picture of what and where losses are occurring during production and the impact of those losses. This is contrasted by the single summary metric of a traditional performance management approach, which easily obfuscates the constituent data and requires further investigation to determine what and where production losses are occurring.

1. Introduction

Manufacturing management and operations frequently concern themselves with production performance and process improvement. Understanding of these issues necessitates a means to determine the current state on the factory floor and track improvement. This production monitoring is performed using manufacturing performance management strategies and tools. The primary facet of performance management consists of the measurements that indicate the health or efficiency of the manufacturing process.
Performance management in manufacturing came into focus in the late 1980s and through the 1990s, with numerous papers and performance measurement proposals [1,2]. This shift in thinking came about from a number of factors including technological advances, changing work conditions, production improvement strategies, and market competition [2,3]. Dissatisfaction with performance measures and the need to change those metrics with time and technology were also recognized during this period [2,3]. However, performance measurement has seen little change since the general adoption of overall equipment effectiveness (OEE), the diagnostic measure introduced with Seiichi Nakajima’s Total Productive Maintenance [4,5].
A plethora of technological innovation has occurred since the turn of the 21st century, creating ample opportunity to further change performance measures. With the advent of Industry 4.0, the fourth industrial revolution, the ability to take advantage of more data from previous industrial computerization has grown greatly [6,7]. Innovations in sensors, data storage, and computer processing facilitate the capture, logging, and processing of large amounts of discrete data. This newfound data granularity should drive performance measurements beyond the summary metrics previously designed for manual recording.

2. Literature Review

It is important to recognize that manufacturing performance management is not quality management. While it may incorporate parts of quality management systems and strategies such as Six Sigma and statistical process control, performance management is distinct in its overall management of production performance rather than just product quality. Performance management is also different from strategies such as lean manufacturing and just-in-time inventory management, though it supports those strategies and incorporates many of their elements. Manufacturing performance management is best defined as a set of measurements and methods used to determine the health of a production environment or process and to monitor the effect of changes and improvements upon the observed assets.
The most widespread application of performance management is through the use of overall equipment effectiveness (OEE) as a measurement [5,8,9,10]. OEE was introduced in the 1980s by Seiichi Nakajima as a diagnostic measure and performance analysis tool to be used as the foremost element of total productive maintenance (TPM) [4,11]. By itself, TPM is not performance management, but rather a regular maintenance strategy with the goal of reducing machine downtime. However, OEE has generally been divorced from TPM and its strategies. The original OEE calculation result is a conglomerate percentage of a machine’s availability, performance efficiency, and rate of quality products (quality rate) as seen in the following equations from Nakajima:
OEE = Availability × Performance Efficiency × Quality Rate,
Availability = Actual Operating Time / Planned Production Time,
Performance Efficiency = (Ideal Cycle Time × Processed Units) / Actual Operating Time,
Rate of Quality Products = (Processed Units − Defective Units) / Processed Units
These three components are meant to capture six loss categories, including equipment failure, setup, idling and small stops, non-ideal process speed, product defects, and startup yield losses [4]. Changes in OEE over time may be monitored to identify machine issues and track improvement efforts.
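As a concrete illustration of the equations above, the short sketch below computes OEE and its components from the quantities Nakajima defines; the numeric inputs are placeholders chosen for the example, not values from this study.

```python
# Minimal sketch of the OEE calculation from the equations above.
# The numeric inputs are illustrative placeholders, not data from this study.

def oee(planned_production_time_s, downtime_s, ideal_cycle_time_s,
        processed_units, defective_units):
    """Return (availability, performance efficiency, quality rate, OEE)."""
    actual_operating_time_s = planned_production_time_s - downtime_s
    availability = actual_operating_time_s / planned_production_time_s
    performance_efficiency = (ideal_cycle_time_s * processed_units) / actual_operating_time_s
    quality_rate = (processed_units - defective_units) / processed_units
    return (availability, performance_efficiency, quality_rate,
            availability * performance_efficiency * quality_rate)

# Example: an 8 h shift with 30 min of downtime, a 6.6 s ideal cycle time,
# 3000 processed units, and 15 defective units.
a, p, q, e = oee(8 * 3600, 30 * 60, 6.6, 3000, 15)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%}, OEE {e:.1%}")
```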
The general adoption of OEE as a performance metric has created several issues. The foremost of these is the use of OEE as a comparator between production lines, factories, and companies [5] rather than comparing a single machine to itself as intended. While Nakajima proposed an ideal OEE of 85% or greater [4], maximizing utilization is not desirable for non-bottleneck processes, and optimizing OEE for any asset is a complex affair. OEE is also not a statistically valid measure, and by its nature as a percentage formed from constituent measurements of differing units, it often conceals what is happening to production quality and output [5].
Recognizing that original OEE is limited to a single machine and only a few component variables, several attempts have been made to modify or alter the calculations to account for more assets and losses on the factory floor. Proposals have been made for OEE calculations for various production line layouts [12] or factory-level throughput efficiency measures [13]. Thorough literature reviews have been conducted on OEE variations and proposed OEE-based metrics, showing that the proposed changes and measures either expand the scope of the calculation beyond individual equipment or add further performance losses to the component calculations [12,14]. However, these new metrics follow the same pattern of reducing data to a few component numbers representing the performance of the machinery in their defined scope, often ending in a single metric. These one-number summary measures obfuscate the factors that create them by reducing many inputs of differing units to a single output [5]. While these variant calculations seek to improve OEE through situational customization, they do not innovate on the method of measurement.
Changing performance measures is vital for enterprises to remain competitive, but the correct things must be measured for the change to be effective. The general dissatisfaction evidenced by the variations of OEE is consistent with the trends identified by Dixon regarding changing performance management systems. Dixon also identified general issues with performance management systems overemphasizing cost-based measures while neglecting non-financial metrics [3]. Following a survey of manufacturing executives, Schmenner and Vollmann further identified overemphasized performance measures that produce inefficient and ineffective managerial actions, including both machine and labor efficiencies [15]. In addition to these findings, it is also important to recognize that great advances in technology have occurred since the inception of OEE in performance management, and these advances should drive changes to performance metrics.
Industry 4.0 and similar initiatives are characterized by a data-driven use of industrial computerization, with technologies driving manufacturing improvement. This primarily involves the Industrial Internet of Things (IIoT), a network of digitally enabled production assets [6,7]. Industry 4.0 and the IIoT require data communication to be real-time and metrics to be readily visualized [6]. These requirements pair well with performance management, as it has been recognized that performance measurements need to be gathered and presented in real-time to be of value [3,15].
Several IIoT architectures and frameworks have been proposed to measure performance management metrics. Many of these proposals utilize IIoT to gather the component data and/or calculate OEE, or OEE-based measures [16,17,18]. Others use IIoT to measure production quality, the data from which could be fed into OEE calculations [19,20]. Though not focused on performance management, further IIoT-based production measurement proposals have been made for gathering granular production data [21,22,23]. More detailed frameworks put forth specific data communications protocols, including Message Queuing Telemetry Transport (MQTT) and Zigbee [17,21]. None of these proposals attempt to generate new performance metrics or management strategies with the granular data achievable from IIoT technologies.
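To make the IIoT data-gathering layer concrete, the sketch below publishes a single granular production event over MQTT; the broker address, topic structure, and payload fields are illustrative assumptions rather than details taken from the cited frameworks.

```python
# Illustrative publication of one granular production event over MQTT
# (requires the paho-mqtt package). Broker, topic, and payload fields are
# assumptions for illustration, not taken from the cited frameworks.
import json
import time

import paho.mqtt.publish as publish

event = {
    "station": "drilling_module",   # hypothetical station identifier
    "event": "cycle_complete",
    "cycle_time_s": 4.9,
    "timestamp": time.time(),
}

publish.single(
    "factory/line1/drilling_module/events",   # hypothetical topic
    payload=json.dumps(event),
    qos=1,
    hostname="broker.example.local",          # hypothetical broker
)
```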
Beginning in 2019, McKinsey & Company put forth the principles of an alternative performance management strategy with emphasis on taking advantage of digital technologies [24]. This new thought process focused on making proper metric comparisons, accurately capturing and distributing data, real-time data-based alerts, and enterprise-spanning performance understanding and dialogue [24]. Their new strategy would take shape as Performance Management 2.0, with further focus on dissolving siloed or department-segregated data and forming enterprise-spanning data sources and actionable insights from granular data and reports [25]. Further emphasis on Industry 4.0 technology and digital transformation would evolve Performance Management 2.0 into Digital Performance Management (DPM), maintaining the aforementioned principles while focusing on utilizing technology to achieve the granular data capture and single data silo of the previous iteration [26]. A commercial software tool has been developed by PTC Inc. (PTC) based on and named after DPM, solidifying the principles of the new strategy using time-based measurements rather than an OEE-based calculation or metric [26,27]. However, the DPM strategy and tool have yet to receive academic evaluation regarding their effectiveness.
The purpose of this research is to evaluate the McKinsey & Company Digital Performance Management strategy as deployed by the Digital Performance Management tool developed by PTC and to compare how that solution measures and presents performance with a traditional OEE-based performance management strategy. To make this comparison between traditional and digital performance management, data will be gathered from a multi-day production schedule using the four-station model of a Festo Cyber-Physical Lab (FCPL), a commercially available educational mock production environment. Both performance management solutions will be run during production, and the gathered data will be compared following the production period. The goal of this comparison is to highlight differences in the insights gained from the two performance management strategies and to ascertain how IIoT may change performance management in the light of Industry 4.0.

3. Methodology

The Festo Cyber-Physical Lab (FCPL), shown in Figure 1, is a commercially available educational mock production environment consisting of four stations, including a magazine module, measuring module, drilling module, and output module. The workpiece used by this four-station model is the front cover of a simplified mobile device, shown in Figure 2. These parts are transported between stations using carriers that ride on conveyors. Each carrier supports a pallet on which the parts are held in a workpiece reception jig. Each carrier is also tagged using Radio Frequency Identification (RFID) containing information on the carrier itself, the held part status, process destinations/work instructions, and job order information. Each station has a stopper unit that halts carriers in the correct position for the application of the station to be performed and releases the carrier afterward. Each stopper has a post that interfaces with a slot in the part carrier to stop it. The post is pneumatically retracted to allow the carrier to continue traveling on the conveyor after a process is completed.
Station one, the magazine module shown in Figure 3, places parts on the carriers. When a carrier stops at the module, sensors check to ensure that a workpiece is not already present on the carrier. If the carrier is empty, the unit consisting of the inventory magazine and part isolation system is lowered close to the carrier's workpiece reception jig. A single part is isolated from the rest of the inventory with pneumatically actuated inlay strips, preventing excess inventory from falling with the part to be released. Another set of inlay strips then releases the isolated part onto the carrier, and the magazine and isolation system unit moves upward and resets to prepare for the next part, at which point the carrier is released.
Station two, the measuring module shown in Figure 4, inspects parts to ensure they were placed on the conveyor in the correct direction. Carriers are stopped under a pair of optical laser distance sensors, which take a differential measurement between the areas just to the left and right of the part center. When the workpieces are correctly oriented on the conveyor, the primary upward face of the part will be on the right, and a lower pocket in the part will be on the left. If the part is correctly oriented and is seated properly in the workpiece reception jig, the carrier RFID will be written with instructions to proceed to the drilling station for processing, and the part data are written into the Manufacturing Execution System (MES). If the part does not pass this inspection, the RFID will instead be written to send the part to the scrap tray at the output station, again reporting the part status to the MES. Once the inspection is completed and the information recorded on the carrier RFID tag and in the MES, the carrier is released to continue to the next station.
Station three, the drilling module shown in Figure 5, performs a mock drilling operation for the four corner holes in the workpiece. When a carrier stops at the drilling station, the RFID tag is read to determine if the operation should be performed as dictated by the result of the inspection at the measuring module. If the part is marked for scrap, the carrier is immediately released. Otherwise, sensors are read to determine if the part is present, correctly seated in the jig, and free of obstructions above the part. If these conditions are met, the mock drilling operation begins, starting the drill motors and plunging toward the holes on the left side of the part. The drill bits then retract, and a pneumatic actuator moves the drills to the right, where they repeat the plunge and retract motion. The drill motors are then powered off, move back to their initial position on the left, and the carrier is released.
Station four, the output module shown in Figure 6, removes parts from the carriers and deposits them in either good or scrap trays. When a carrier stops at the module, the part's good or scrap status is read from the MES, and sensors check to determine if a part is present on the carrier. A pneumatic actuator moves a gripper down to the part, where the gripper closes to pick up the workpiece. The actuator then retracts and, according to the data from the MES, moves either left or right toward the good or scrap output trays, respectively, using a motor and belt. Once in the depositing position of the appropriate tray, the gripper opens and releases the part. Once the part has cleared the drop area, as detected by the breaking of an infrared beam, the gripper unit returns to the center position, and the empty carrier is released to receive another part at station one.
The testing period consisted of three shifts of eight hours each, conducted over the course of three days. During each phase, the goal was to produce as many parts as possible in the given time while minimizing downtime. Data were captured during production and processed using both traditional and digital performance management solutions. Between testing phases, improvement efforts were undertaken to increase the productivity of the FCPL. These improvements utilized the available changeable parameters of the four stations as summarized in Table 1, and changes were made according to the data insights gathered during the previous production phase. Table 2 details the initial ideal cycle times for each station. Any change to these cycle times as a result of improvement efforts was applied to the performance management solutions to generate accurate performance calculations.
Data were gathered from the Siemens (Munich, Germany) S7-1500 Programmable Logic Controllers (PLCs) of each station of the FCPL. The stations' PLCs operate at the same hierarchical level and are networked together. The PLCs are connected via Ethernet to a computer hosting a PTC Inc. ThingWorx Kepware Server (v6.13), a configurable Open Platform Communications (OPC) server utilizing various drivers to facilitate communication with industrial equipment over diverse protocols. The ThingWorx Kepware server (Kepware) then streams the data to a cloud-hosted instance of ThingWorx (v9.3.4), an IoT development platform, via a gateway entity. This gateway facilitates reading the PLC tags in other ThingWorx entities, such as those representing the stations of the FCPL, as well as writing certain Kepware proxy tags (local variables). See Figure 7 for an illustration of the data flow architecture.
The traditional performance management solution takes in data read by the gateway entity as linked properties. As these properties update, ThingWorx will perform the necessary calculations for OEE and its components. After each recalculation, the generated metrics and the constituent data will be recorded in a database inside ThingWorx for later retrieval and review. Generated metrics will also be available as they are calculated for operator review.
Data going into the digital performance management solution will first be gathered and processed by an administrative entity to calculate, clean, and direct data to the required inputs. Based on incoming tag values, the administrative entity will generate appropriate production counts and will standardize various error signals and data into a unified format and tag. These processed values are then written to proxy tags in Kepware, where they can be read back into ThingWorx by entities representing the individual stations of the FCPL. These entities were automatically generated in ThingWorx by the Digital Performance Management product from PTC Inc. after they were defined within an included web-based operator reporting and data visualization display, or dashboard. This dashboard also allows for defining standard error reporting codes, production schedules, materials, job orders, and production demand. These definitions facilitate the automatic processing, recording, and later retrieval of production data in the digital performance management solution.
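The normalization step performed by the administrative entity can be pictured as a small mapping layer between station-specific signals and the unified loss-reason codes defined in the dashboard. The sketch below is a simplified stand-in for that logic; the tag names, error codes, and helper functions are illustrative assumptions, as the actual implementation ran as ThingWorx services writing Kepware proxy tags.

```python
# Simplified stand-in for the administrative entity: map station-specific
# error signals onto unified loss-reason codes and derive production counts
# from tag transitions. Tag names, codes, and helpers are illustrative
# assumptions; the deployed logic ran as ThingWorx services writing Kepware
# proxy tags rather than as standalone Python.

UNIFIED_REASONS = {
    ("drilling_module", "conveyor_out_blocked"): "CONVEYOR_OUTPUT_JAM",
    ("magazine_module", "inlay_strip_fault"): "UNPLANNED_DOWNTIME",
    ("output_module", "gripper_timeout"): "UNPLANNED_DOWNTIME",
}

def normalize_event(station: str, raw_signal: str) -> str:
    """Map a station-specific signal onto a unified loss-reason code."""
    return UNIFIED_REASONS.get((station, raw_signal), "GENERAL_SPEED_LOSS")

def count_parts(previous_cycle_flag: bool, current_cycle_flag: bool, count: int) -> int:
    """Increment the production count on a rising edge of the cycle-complete flag."""
    return count + 1 if (current_cycle_flag and not previous_cycle_flag) else count

# Example use of the two helpers:
print(normalize_event("drilling_module", "conveyor_out_blocked"))                  # CONVEYOR_OUTPUT_JAM
print(count_parts(previous_cycle_flag=False, current_cycle_flag=True, count=41))   # 42
```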

4. Results

4.1. Production Period Summary

Following the testing period, the data from both the traditional and digital performance management solutions were exported, cleaned, and analyzed. A total of 7051 parts were processed over the three-day production schedule (see Table 3). Improvement efforts between testing days resulted in a 36.6% increase in parts produced on the last day as compared to the first. The overall quality of parts processed was 99.53% good product.
A machine breakdown occurred late on the second day of the test, resulting in a drop in production on that day. This machine failure occurred in the magazine module's part isolation system, which had experienced occasional errors at the end of the first day and in the early hours of the second day. At six hours into the shift (14:00 h of production), the part isolation system began to malfunction, failing to drop parts correctly and jamming frequently. Within a few minutes, the pneumatically actuated inlay strips had seized and would no longer release a part without manual assistance. Galling between the upper set of inlay strips and their housing was determined to be the cause of the issue and was remedied with an appropriate application of machine grease. The problem identification, partial disassembly, lubrication, and reassembly of the system resulted in 73 min of unplanned downtime for maintenance. The downtime duration recorded at each station differed slightly due to other automatically reported loss statuses (i.e., emergency stop, conveyor output jam) caused by the maintenance being performed. Another seven minutes of unplanned maintenance occurred on the third production day to remove excess grease that had begun to leak onto the parts being released and inhibit their transfer onto the part carriers.
As guided by the in-process data being received from the performance management solutions, improvement efforts were undertaken between testing phases. These improvements took place at the bottleneck process identified by the performance management systems. Improvements focused on reducing the cycle time of the production-limiting station, utilizing the available changeable parameters identified in Table 1.
At the end of the first testing phase, both traditional and digital performance management solutions showed that the drilling module acted as the system bottleneck. To decrease the cycle time at that station, the drilling operation parameter was changed from the default of drilling both hole sets to drilling only the left set. This removed the horizontal traversal of the drills and reduced the cycle time from 10.9 s to 4.9 s. In a real production environment, this would be akin to adding another set of drills to machine all four corner holes at the same time. In addition, the pneumatic pressure of the system was raised from the default 5 bar to 6 bar. This pressure increase was made in response to the errors seen at the magazine module at the end of the first day and was intended to increase the consistency of that process. Changing the pneumatic pressure had a negligible effect on ideal cycle times.
At the end of the second testing phase, both performance management solutions agreed that the output module had become the bottleneck process. To alleviate this station’s restriction on production, the gripper horizontal motor speed was increased from the default 60 mm/s to the maximum 800 mm/s. This new speed should not be reached in normal operation as a limited acceleration rate would require the full stroke length of the horizontal axis to achieve that speed. However, this new setting raised the achievable speed of a normal half stroke operation between the center and an output tray to over 300 mm/s and decreased the cycle time from 10.1 s to 6.6 s, as shown in Table 4.
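For context, the bottleneck ideal cycle time sets an upper bound on what the line could produce in a shift. The short calculation below applies the reported cycle times to an 8 h shift; it assumes no downtime, starvation, or speed losses, so it is an idealized bound rather than the measured output reported in Section 4.1.

```python
# Idealized per-shift output bound implied by the bottleneck ideal cycle time
# (assumes no downtime, starvation, or speed losses; not the measured output).
shift_s = 8 * 3600

bottleneck_cycle_times = [
    ("drilling module before improvement (10.9 s)", 10.9),
    ("output module before improvement (10.1 s)", 10.1),
    ("output module after improvement (6.6 s)", 6.6),
]

for label, cycle_time_s in bottleneck_cycle_times:
    print(f"{label}: at most {shift_s / cycle_time_s:.0f} parts per 8 h shift")
```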

4.2. Traditional Performance Management

Reviewing the testing results from a traditional performance management solution can be done in three layers of increasing detail. The first of these is the OEE metric, followed by the component measures (availability, performance efficiency, and quality rate), and lastly, the constituent data that forms the component measures. At the beginning of each day, the data used in the calculations were reset, allowing for the individual days to be measured independent of each other. The ideal cycle time was also updated for the drilling and output modules as improvements were made to them between phases.
The OEE metric for the testing data is shown in Figure 8 on the vertical axis with cumulative production hours on the horizontal axis. The drilling and output modules can be identified as the bottlenecks because they have the greatest OEE during a given day. This correlation between the greatest OEE and the bottleneck process is due to the nature of the system. The FCPL has a limited number of part carriers and does not create excess inventory between stations, meaning faster modules cannot operate at a pace greater than the bottleneck process cycle time. The relation between high OEE and the bottleneck is also inherent to lean, balanced production lines for similar reasons, as maximizing OEE for non-bottleneck processes creates surplus work-in-progress inventory.
Over the course of the three-day testing period, both the bottleneck OEE and average OEE decreased despite a significant increase in the number of parts produced. Additionally, the OEE of the output module on day one was very near that of the drilling module bottleneck, explaining the lack of increased production rate the following day where the output module became the bottleneck. The effect of the unplanned downtime maintenance event is apparent from the negative slopes beginning at 14:00 h of production. Day three showed an increase in OEE for all stations except the output module, which decreased from the first day’s 76.6% to 68.5%, while production between these two days increased by over one-third.
Breaking OEE into its component parts should yield further insight regarding the trends seen in the parent measure. A plot of availability, shown in Figure 9, displays the ratio of operating time, or uptime, to planned production time (see Equation (2)). As such, the unplanned maintenance downtime on day two was readily apparent, creating a large drop in availability roughly between 14:00 and 15:15 production hours. Outside of this event, there were two notable exceptions to the general trend of near 100% availability: the obvious drop in availability at the drilling module during day two, and the low starting points of the measuring and output modules on days one and three, respectively. The latter anomalies were due to short periods of downtime early in the shift, lowering the availability ratio immediately but approaching normalcy as more uptime accrued during the day. The drilling module availability drop was also due to downtime, though it did not recover and stabilized around 80% availability. The reasons for this trend will be examined further when investigating availability’s constituent data. Other than the noted exceptions, availability stayed near 100% and had only a minor impact on OEE.
A plot of performance efficiency, shown in Figure 10, can be summarized as the ratio between the ideal effective production time, representative of producing at the ideal cycle time, and the operating time. Some noise was present in the data during the unplanned maintenance downtime period due to errant signals during maintenance. The similarity between performance efficiency and OEE in the data shows the former to be the driving component of the latter, and thus, many of the observations regarding OEE apply to performance efficiency as well. This includes the decrease in the metric at the bottleneck process despite an increase in units produced. Notable here, however, is the tightening range of the performance efficiency and OEE values between stations. This trend is indicative of the smaller range of cycle times between stations that resulted from the improvements made, producing less backup at the bottleneck station and less idle time waiting for carriers at the non-bottleneck stations.
The quality rate, shown in Figure 11, is the ratio between the number of good parts produced and the total parts processed. The measuring module is the only station of the FCPL that reports part quality and produces scrap parts. Losses from passing scrap along in the drilling and output modules were captured in lengthened cycle times between good parts and were thus assumed to have a quality rate of 100% to avoid double counting the lost time. The magazine module was also assumed to have a perfect quality rate as bad parts cannot be identified at that stage. The only notable anomaly present in the quality rate data from the measuring module was a short but sharp drop at the beginning of day two (8:00 production hours). This reduction in quality was due to the second part processed that day being marked as scrap. This resulted in a single data point showing a 50% quality rate, after which the quality rate quickly recovered (see Figure 12 and Figure 13). Outside of the first 15 min of day two, the quality rate stayed above 98% (see Figure 12), having only a minor effect on OEE.
Actual operating time provides insight into the drop in availability observed at the drilling module on the second day of testing. Actual operating time is deduced by measuring downtime and subtracting it from the planned production time. The cumulative downtime recorded during testing, shown in Figure 14, explains the drop in availability at the drilling module on the second testing day: its downtime increased at a steady rate of approximately 12 min per hour throughout the day (1.19 h total), excluding the unplanned maintenance downtime between roughly 14:00 and 15:15 production hours when all stations experienced downtime. However, without further investigation past this final level of discretization for OEE, the cause of this additional 1.19 h of downtime accumulation cannot be determined. Additionally, reporting only final values for the day would not reveal this trend but would only show the final difference in downtime between the drilling module and the other stations.
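As a sanity check on the scale of this loss, a steady accumulation of roughly 12 min of downtime per hour removes about one-fifth of each hour, which is consistent with the roughly 80% availability plateau noted above. The one-line calculation below treats the rate as exactly constant, an idealization of the observed trend rather than the raw data.

```python
# Availability implied by a constant downtime accumulation of ~12 min/h,
# an idealization of the drilling module's day-two trend (not the raw data).
downtime_rate_min_per_h = 12.0
availability = 1 - downtime_rate_min_per_h / 60
print(f"Implied availability: {availability:.0%}")   # ~80%, matching Figure 9
```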
From examining OEE and the data that formed it, several insights have been gleaned.
  • Due to the nature of the process, the module with the highest OEE is identified as the bottleneck. From this correlation, it is evident that improvements between testing phases shifted the bottleneck from the drilling module to the output module.
  • A machine failure requiring an unplanned maintenance event lowered availability and OEE for all stations between approximately 14:00 and 15:15 production hours.
  • Despite an increase in production and decreased cycle time, the performance efficiency and OEE at the output module were reduced between the first and third day of testing.
  • The availability at the drilling module on the second day of testing was decreased due to a steady accumulation of 1.19 additional hours of downtime over the course of that day.
  • The range of the four stations' OEE and performance efficiency decreased due to process improvements during the testing period, balancing cycle times across the entire FCPL and showing increased utilization overall.

4.3. Digital Performance Management

Results presented by the digital performance management solution come in two forms, namely waterfall and Pareto charts, and can be viewed for the FCPL as a whole or by individual station. The DPM solution autonomously differentiates data by day, ensuring the three phases can be assessed independently. Ideal cycle times were updated through the dashboard for the drilling and output modules following the inter-phase improvements. Data were recorded from the dashboard output, and the visualizations are recreated here for consistency and clarity.
A cumulative waterfall chart is shown in Figure 15a, combining data from all four stations over the three-day testing period, which sums the four stations' planned production of three 8 h shifts for a total of 96 production hours. This total planned production time is the leftmost column of the waterfall charts presented by DPM. The columns following this subtract time in various loss categories, such as unplanned downtime. A key measurement from these waterfall charts is the effective production time, which results from subtracting the reported losses from the planned production time. From this cumulative view, DPM shows an effective production time of 45.23 h out of the 96 total scheduled production hours. When differentiated by day, each view having a total planned production time of 32 h from the sum of the four stations' individual 8 h shifts (see Figure 15b–d), the FCPL shows a decrease in effective time between the first and last day of testing, with increases in all loss categories except scrap. The unplanned maintenance event on day two is also apparent from the increase in unplanned downtime on that day.
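The bookkeeping behind these waterfall charts is simple: each loss category is subtracted in turn from the planned production time, and whatever remains is the effective production time. The sketch below reproduces that arithmetic with placeholder loss values, not the exported study figures.

```python
# Waterfall bookkeeping behind DPM-style charts: planned production time minus
# each loss category leaves effective production time. Loss values below are
# placeholders, not the exported study data.
planned_production_h = 4 * 8          # four stations, one 8 h shift each

losses_h = {
    "planned downtime": 2.5,
    "unplanned downtime": 1.6,
    "speed losses": 9.0,
    "scrap": 0.1,
}

remaining_h = planned_production_h
for category, hours in losses_h.items():
    remaining_h -= hours
    print(f"after subtracting {category}: {remaining_h:.1f} h remaining")

print(f"effective production time: {remaining_h:.1f} h of {planned_production_h} h planned")
```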
The time loss categories are further expounded upon through Pareto charts, which detail the individual loss reasons in each category. The Pareto charts presented by DPM show the loss reasons that constitute a category in order of greatest to least impact. A line accompanies each set of bars showing the percentage of the category accounted for as the reasons are listed. The cumulative FCPL speed loss Pareto chart, shown in Figure 16, visualizes the large difference between the loss category's two reasons, the automatically reported conveyor output jam and manually reported general speed loss events. Conveyor output jams are caused by a backup of carriers on the conveyor ahead of a station, preventing the release of a processed part from the reporting station. DPM reported that 1878 conveyor output jam events summed to 2.04 h of lost time. General speed losses are manually reported by the operator as any time not accounted for by other loss reasons, primarily encapsulating non-bottleneck stations waiting for part carriers and running over their ideal cycle time as they slow to the pace of the bottleneck. For example, on day three, the magazine module's cycle time increased by an average of 4.64 s as it had to wait for parts to arrive for inspection. Over the 3001 parts it processed that day, this extra waiting time accumulated into 3.87 h of general speed loss (see Figure 23c).
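The general speed loss figure quoted above follows directly from the per-cycle overrun multiplied by the number of cycles; the one-line check below reproduces it from the reported values.

```python
# Check of the quoted general speed loss: average per-cycle overrun beyond the
# ideal cycle time, accumulated over the parts processed that day.
extra_cycle_time_s = 4.64     # average overrun at the magazine module, day three
parts_processed = 3001

speed_loss_h = extra_cycle_time_s * parts_processed / 3600
print(f"General speed loss: {speed_loss_h:.2f} h")   # ~3.87 h
```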
Inspecting the Pareto chart of the FCPL unplanned downtime on day two (see Figure 17) sheds further light on the impact of the unplanned maintenance event on that day. The obvious impact of performing the maintenance is visible, along with additional losses that may have been caused by the event, including the initial system failure leading to the maintenance. Another notable loss on the second day occurred at the drilling module. Inspecting the Pareto chart of the speed loss of that station on that day (see Figure 18) shows that a significant portion of the losses from the combined speed loss Pareto chart (see Figure 16) can be attributed to this location and time, comprising 1.90 h of the total loss of 2.04 h. Further information from the DPM solution shows that the cumulative conveyor output jam losses occurred over 1878 events, 1746 of which occurred at the drilling module on the second day of testing. These data show an obvious issue at that station resulting in the gradual accumulation of speed loss from a plethora of small events.
Further explanation of the drop in the FCPL's effective time between the first and third day can be gleaned using the planned downtime Pareto charts (see Figure 19 and Figure 20). These charts show that cleaning time nearly doubled between the first and last day, with an additional 36 min of allotted time. Combining this additional time with the known 7 min maintenance event on the third day (28 min cumulative) and the 26 min increase in speed loss visible between Figure 15b,d results in 90 min of loss. This additional lost time accounts for 89.6% of the difference in effective time between the first and third day, giving specific insight into the reasons behind that drop.
When viewing the stations individually over the testing period, totaling 24 h of planned production from the single station's three 8 h shifts (see Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25), a correlation between the bottleneck station and effective time becomes apparent. In a similar fashion to the relation between the highest OEE and the bottleneck station, the greatest amount of effective time was held by the bottleneck station. This strong correlation between high effective time and the bottleneck station occurred for the same reason, namely that the limited number of part carriers restricted in-process inventory and slowed all stations to the bottleneck pace, resulting in input-starved stations and lengthened cycle times. These waterfall charts also show a change in effective production time between the first and last testing day for all stations, with the magazine and measuring stations increasing and the drilling and output stations decreasing their effective time. In a similar manner to the trend in OEE, the range of effective time also decreased over the testing period due to process improvements balancing the cycle times across the FCPL and increasing utilization.
The DPM visualizations and data provide the following insights:
  • Due to the nature of the process, the module with the greatest effective production time is identified as the bottleneck. Examining the daily waterfall charts, it is evident that the bottleneck changed from the drilling module to the output module between day one and two due to improvements between these days.
  • Over the course of the testing period, the effective production time at the magazine and measuring modules increased, while the effective production time at the drilling and output modules decreased.
  • The range of the four stations' effective production times decreased over the testing period due to the process improvements between phases, leading to more balanced cycle times across the FCPL.
  • The planned downtime Pareto charts show that cleaning time roughly doubled between the first and second day, remaining at that value into the third day.
  • This increase in cleaning time, a short maintenance period, and a small increase in speed losses accounted for 90% of the additional lost time on day three as compared to day one.
  • A total of 1878 conveyor output jam events created 2.04 h of lost time. Overall, 1.90 h of this lost time came from 1746 events occurring at the drilling module on day two.

4.4. Performance Management Strategy Comparison

The comparison between traditional and digital performance management is made in three parts: first, how each approach presents data to the user and how those data can be interpreted; second, the insights provided by both solutions; and lastly, the granularity and fidelity provided by each of the performance management strategies.
A key concept in Industry 4.0 is the visualization of data and how different users will require different data depending on their role in the enterprise. For example, a machine operator may be interested in the specific loss reasons being recorded by their equipment and the impact and severity of those events. Such a view is readily available in the DPM toolset, containing live data showcasing the recorded events and measuring how well current production is meeting the demand on the machine. Custom solutions taking the DPM approach will also have the data to create such operator-oriented views. From another point of view, a production supervisor may not be interested in the details of each machine at any given moment but might instead require a general summary of how multiple machines are performing at any point throughout the day. These views for end of day performance analysis have been presented in this research, though there is value in being able to view such waterfall charts at any point during a day or shift to monitor production health live and spot potential problems as they are forming and accumulating lost time. These different views allow for direct informed action on the factory floor and appropriate resource allocation higher up the enterprise. In contrast to the available visualizations in the DPM solution, a traditional performance management approach has only one set of views, that being the OEE measure and its components. To an operator, such a view has little to no value as it does not provide ready insights that can help them understand underlying problems with their equipment. While there can be some appropriate use of the OEE metric over time for a supervisor, such as spotting downtime events as they happen or falling utilization over several days or more, actionable insights are still absent.
The primary difference in how traditional performance management and digital performance management present their data is in their units of measurement. Traditional performance management uses percentages in its primary measure of OEE and its component measures of availability, performance efficiency, and quality rate. OEE has no context on its own as a percentage. Historical data can help contextualize OEE, as trends of change can indicate possible issues or improvements. However, little more can be gleaned from the metric without additional context. Additional context is added when OEE is viewed with its components, as any obvious similarity between the parent measure and a constituent part will indicate the driving component, as was seen in the testing data with performance efficiency. However, the OEE components themselves are percentages and suffer from a lack of context as well, though to a lesser degree. Drops in availability or performance efficiency may hint at downtime or speed loss issues but fail to help further pinpoint problem origins. Contrasting those measures is the quality rate, which can succinctly summarize the rate of good parts produced without full production numbers.
The measures of digital performance management use time as their sole unit. Time measurements can be immediately understood in the fundamental context of a finite resource. With only so many hours in a day, week, month, or year, a measure of how much of that time was either wasted or effective in making a product becomes invaluable. At a level deeper, having the component data break down easily into specific categories and loss reasons aids in pinpointing where the greatest issues are and how much they are affecting production. Additionally, the presentation of more than one metric as the default high level output of this strategy forces the user to view key values and their impact on production, rather than gloss over a single metric.
This comparison of context can also be applied to how that data is interpreted by the user. In traditional performance management, an operator will know that OEE should be at or above some seemingly arbitrary number, while a supervisor might give special attention to the equipment with the lowest OEE attempting to improve the number. At a higher level, dissimilar machines performing a variety of tasks may compete for resources based on a single metric. Contrasting this, under a digital performance management approach an operator will be able to see how much of their shift is spent effectively making product and where their equipment is losing time. Managers will see and direct attention toward specific loss reasons to gain productive time back from them.
These differences in data presentation and interpretation can be best seen by comparing the insights gained from testing. Two insights that emerged concurrently from the two performance management solutions illustrate this difference, namely the drop in effectiveness on the last testing day and the performance and speed loss issues at the drilling module on day two. Both approaches agreed that effectiveness on the last day was reduced from that observed on the first day. Traditional performance management showed this with an 8.1 percentage point drop in OEE at the bottleneck output module, which could be attributed to a similar drop in performance efficiency. In comparison, digital performance management showed an additional 1.48 h of lost time between the first and last day, which could be attributed to an increase in planned downtime from additional cleaning time, increased unplanned downtime from a short maintenance event, and a small increase in speed losses. In this example, the digital approach pinpoints and quantifies the impact of specific reasons for the drop in effectiveness, while the traditional approach only hints at a performance efficiency issue.
The second pair of insights to compare is the availability drop and speed loss issue at the drilling module on day two. Traditional performance management showed that the availability of the drilling module on day two was reduced to roughly 80%. From viewing the reported downtime, the reason for this availability drop was a steady accumulation of an additional 1.19 h of downtime (compared to the next highest downtime) over the duration of that testing day. The digital performance management solution instead showed an increase in losses from 1746 conveyor output jam events totaling 1.90 h. While both solutions pointed to an accumulation of loss occurring over the course of the day, the specificity of the loss reason, its frequency, and the quantified impact from the digital approach give actionable insight into the cause, while the traditional approach would require further inquiry to determine the cause.
This example also highlights comparisons to be made regarding data granularity. The traditional performance management solution provides a cumulative downtime figure at its most detailed level, while the digital performance management approach segregates downtime losses into specific loss reasons. The lack of granular data in the traditional approach hinders problem resolution as further investigation into the general causes of the downtime is required before root cause analysis can continue. On the other hand, a digital approach will give specific insight as to the general cause of a downtime event and can lead immediately into root cause analysis. Additionally, the frequency of when these performance management insights are updated and available can be quite different. As designed, a traditional approach will take measurements periodically and via manual means whereas a digital approach will constantly gather data at high refresh rates. With electronically gathered data, a traditional approach can achieve these high frequency measurements, but insights are only valuable after a sufficient data population has been gathered to produce a stable baseline for comparison. Digital approaches, however, will have insights available at any time as the measurements taken do not need this baseline comparison but instead are an accounting of how time was being utilized over the period observed.
The fidelity of the data composing and presented by the two performance management solutions is also apparent in the granularity of their data and methods of accounting. By nature, DPM will have equal or better fidelity than traditional, manually gathered performance metrics. Using units of time, a finite resource, requires that all time be accounted for. It is important to acknowledge that manually entered values in a DPM system may reduce the fidelity and granularity of the data as rounding times and summarization of events can occur in such situations. These accounting errors can be mitigated with increased data gathering automation, relieving operator reporting responsibility. Electronically enabled traditional performance management can aid in increasing the fidelity of its data, but the approach still suffers from the limitations of a single summary measure wherein the calculation of that measure can obfuscate data and blur insights available from the raw measurements. Furthermore, the many proposed variations of the OEE metric also suffer from the same drawbacks, even when data for those calculations are gathered by electronic or computerized means. The availability of these variations also lends itself to possible manipulation of the data, picking and choosing the metric that may make production look best rather than reflect its actual state.

5. Conclusions

As stated in the purpose of this research, an evaluation of the McKinsey & Company Digital Performance Management strategy as deployed by the Digital Performance Management tool developed by PTC Inc. was completed, and its results were compared to those of a traditional OEE-based performance management strategy. Digital performance management has shown an increase in the detail of the insights gained from its measures as compared to a traditional OEE-based performance management strategy. From the comparisons regarding how these two approaches to performance management present their data, a definite advantage is seen in the detailed time-based figures produced by DPM over the summary percentage measures of traditional performance management. The conclusions drawn from the comparisons made in evaluating performance management include:
  • Industry 4.0 technologies allow for greater speed and fidelity in gathering data and should change how performance is measured to harness their inherent advantages.
  • Traditional performance management is limited in its use of a single summary metric, obfuscating the constituent data and reducing available insights regardless of the detail offered by digitally gathered input data.
  • Digital performance management presents data in greater detail and in a manner that facilitates better understanding of impacts on production than does traditional performance management.
  • Performance management and measurement using units of time, rather than percentage measures, provides greater understanding of production conditions and generates specific and readily actionable insights.
  • Time-based performance measurement creates higher-fidelity metrics by requiring an accounting of a finite resource and how it is utilized, rather than summarizing data into a normalized percentage.
A change in how performance is measured and managed is needed to harness the capability of Industry 4.0 technologies. Without such changes, the value of these internet-enabled computational innovations is limited by a measurement method designed for clipboards and stopwatches. Reviewing the production status from both a traditional and a digital performance management approach shows clear distinctions between their form and utility. Insights and contextual measures gained by a data-rich digital approach are invaluable in pinpointing and quickly resolving manufacturing issues. Additionally, the shift from percentage figures to time-based metrics forms meaningful numbers with direct impact and interpretation, removing ambiguity.

Author Contributions

N.D.S.: conceptualization, methodology, software, investigation, data curation, writing—original draft, visualization, project administration. Y.H.: conceptualization, methodology, writing—review and editing, supervision. J.T.: conceptualization, funding acquisition. S.B.: software, resources. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Northrup Grumman Corporation [grant R0602721].

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

My thanks to Joe Tenny for managing the partnership with Northrup Grumman and providing current industry insights. I would also like to give special recognition to the many wonderful folks at PTC Inc., including Sebastian Bergner and Peter Zink, for their aid in understanding and deploying the digital performance management tool. Additional thanks go to my fellow graduate students for their aid in gathering data. Finally, I give my utmost gratitude to Yuri Hovanski, whose mentorship and support were key to completing this work.

Conflicts of Interest

The authors declare that this study received funding from the Northrup Grumman Corporation [grant R0602721]. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Joe Tenny was employed by Northrup Grumman Space Systems, and Sebastian Bergner was employed by PTC. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Folan, P.; Browne, J. A review of performance measurement: Towards performance management. Comput. Ind. 2005, 56, 663–680. [Google Scholar] [CrossRef]
  2. Neely, A. The performance measurement revolution: Why now and what next? Int. J. Oper. Prod. Manag. 1999, 19, 205–228. [Google Scholar] [CrossRef]
  3. Dixon, J.R. The New Performance Challenge: Measuring Operations for World-Class Competition; Nanni, A.J., Vollmann, T.E., Eds.; Dow Jones-Irwin: Homewood, IL, USA, 1990. [Google Scholar]
  4. Nakajima, S. Introduction to TPM: Total Productive Maintenance; Productivity Press, Inc.: Cambridge, MA, USA, 1988. [Google Scholar]
  5. Williamson, R.M. Using Overall Equipment Effectiveness: The Metric and the Measures; Strategic Work System, Inc.: Columbus, NC, USA, 2006; pp. 1–6. [Google Scholar]
  6. Jeschke, S.; Brecher, C.; Meisen, T.; Özdemir, D.; Eschert, T. Industrial Internet of Things and Cyber Manufacturing Systems. In Industrial Internet of Things: Cybermanufacturing Systems; Jeschke, S., Brecher, C., Song, H., Rawat, D.B., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 3–19. [Google Scholar]
  7. Thoben, K.-D.; Wiesner, S.; Wuest, T. “Industrie 4.0” and Smart Manufacturing—A Review of Research Issues and Application Examples. Int. J. Autom. Technol. 2017, 11, 4–16. [Google Scholar] [CrossRef]
  8. Sinkora, E. Measuring Up Efficiency; Manufacturing Engineering; SME: Southfield, MI, USA, 2022. [Google Scholar]
  9. Huang, S.H.; Dismukes, J.P.; Shi, J.; Su, Q.I.; Razzak, M.A.; Bodhale, R.; Robinson, D.E. Manufacturing productivity improvement using effectiveness metrics and simulation analysis. Int. J. Prod. Res. 2003, 41, 513–527. [Google Scholar] [CrossRef]
  10. Andersson, C.; Bellgran, M. On the complexity of using performance measures: Enhancing sustained production improvement capability by combining OEE and productivity. J. Manuf. Syst. 2015, 35, 144–154. [Google Scholar] [CrossRef]
  11. Nakajima, S. TPM Development Program: Implementing Total Productive Maintenance; Productivity Press, Inc.: Cambridge, MA, USA, 1989. [Google Scholar]
  12. Aleš, Z.; Pavlů, J.; Legát, V.; Mošna, F.; Jurča, V. Methodology of overall equipment effectiveness calculation in the context of Industry 4.0 environment. Eksploat. Niezawodn. 2019, 21, 411–418. [Google Scholar] [CrossRef]
  13. Muthiah, K.M.N.; Huang, S.H. Overall throughput effectiveness (OTE) metric for factory-level performance monitoring and bottleneck detection. Int. J. Prod. Res. 2007, 45, 4753–4769. [Google Scholar] [CrossRef]
  14. Muchiri, P.; Pintelon, L. Performance measurement using overall equipment effectiveness (OEE): Literature review and practical application discussion. Int. J. Prod. Res. 2008, 46, 3517–3535. [Google Scholar] [CrossRef]
  15. Schmenner, R.W.; Vollmann, T.E. Performance Measures: Gaps, False Alarms, and the “Usual Suspects”. Int. J. Oper. Prod. Manag. 1994, 14, 58–69. [Google Scholar] [CrossRef]
  16. Hwang, G.; Lee, J.; Park, J.; Chang, T.-W. Developing performance measurement system for Internet of Things and smart factory environment. Int. J. Prod. Res. 2016, 55, 2590–2602. [Google Scholar] [CrossRef]
  17. Wang, L.; Liu, P.; Jiang, S.; Xue, Y.; Wang, K.; Li, X. Production Management System for Small and Medium Sized Manufacturing Enterprises. In Proceedings of the 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bangkok, Thailand, 16–19 December 2018; pp. 685–689. [Google Scholar]
  18. Gonzalez, S. Industrial analytics from the edge up: Industrial manufacturers are using edge controllers and industrial PCs to implement practical analytics initiatives from the edge up. Plant Eng. 2021, 75, 39–42. [Google Scholar]
  19. Zhuang, C.; Liu, J.; Xiong, H. Digital twin-based smart production management and control framework for the complex product assembly shop-floor. Int. J. Adv. Manuf. Technol. 2018, 96, 1149–1163. [Google Scholar] [CrossRef]
  20. Mandrakov, E.S.; Vasiliev, V.A.; Dudina, D.A. Non-conforming Products Management in a Digital Quality Management System. In Proceedings of the 2020 International Conference Quality Management, Transport and Information Security, Information Technologies (IT&QM&IS), Yaroslav, Russia, 7–11 September 2020; pp. 266–268. [Google Scholar]
  21. Amoretti, M.; Pecori, R.; Protskaya, Y.; Veltri, L.; Zanichelli, F. A Scalable and Secure Publish/Subscribe-Based Framework for Industrial IoT. IEEE Trans. Ind. Inform. 2021, 17, 3815–3825. [Google Scholar] [CrossRef]
  22. Pizoń, J.; Kłosowski, G.; Lipski, J. Key role and potential of Industrial Internet of Things (IIoT) in modern production monitoring applications. MATEC Web Conf. 2019, 252, 09003. [Google Scholar] [CrossRef]
  23. Kahveci, S.; Alkan, B.; Ahmad, M.H.; Ahmad, B.; Harrison, R. An end-to-end big data analytics platform for IoT-enabled smart factories: A case study of battery module assembly system for electric vehicles. J. Manuf. Syst. 2022, 63, 214–223. [Google Scholar] [CrossRef]
  24. Eloot, K.; Wang, S. The digital difference in measuring production performance. In Insights on Operations; McKinsey & Company: New York, NY, USA, 2019. [Google Scholar]
  25. Benjamin, G.; Lung, H.; Murali, R. Performance Management 2.0: Tech-enabled optimization of field forces. In Insights on Operations; McKinsey & Company: New York, NY, USA, 2020. [Google Scholar]
  26. Heppelmann, H.; Melrose, C.; Zhang, J. Digital Performance Management: From the Front Line to the Bottom Line; Coxon, M., Johnson, C., Eds.; McKinsey & Company: New York, NY, USA, 2020. [Google Scholar]
  27. Melrose, C. Inside Digital Performance Management: Driving Manufacturing Effeciency with Digital Innovation. 2021. Available online: https://www.ptc.com/en/blogs/iiot/inside-digital-performance-management-driving-manufacturing-efficiency-digital-innovation (accessed on 14 March 2021).
Figure 1. Festo Cyber-Physical Lab, showing the output module (left) and the magazine module (right).
Figure 2. FCPL workpiece, the front cover of a simplified mobile device.
Figure 3. FCPL magazine module.
Figure 4. FCPL measuring module.
Figure 5. FCPL drilling module.
Figure 6. FCPL output module with the gripper over the good part tray.
Figure 7. Performance management solution data flow architecture.
Figure 8. OEE of the FCPL stations over the cumulative production time.
Figure 9. FCPL availability over the testing period.
Figure 10. FCPL performance over the testing period.
Figure 11. Measuring module reported quality rate over the testing period.
Figure 12. Detail of the measuring module reported quality rate between 98% and 100%.
Figure 13. Defective units over cumulative production hours, reset at the beginning of each day.
Figure 14. FCPL daily downtime over the testing period.
Figure 15. Waterfall charts using the combined data from all four stations of the FCPL (a) over all three days of testing, (b) on the first day of testing, (c) on the second day of testing, and (d) on the third day of testing.
Figure 16. Pareto chart of the speed loss category of the cumulative FCPL data.
Figure 17. Pareto chart of the unplanned downtime category from the cumulative FCPL data.
Figure 18. Pareto chart of the speed loss category at the drilling module on day two of testing.
Figure 19. Pareto chart of the planned downtime category for the FCPL on day one of testing.
Figure 20. Pareto chart of the planned downtime category for the FCPL on day three of testing.
Figure 21. Waterfall charts using combined data over the three testing days for each individual station of the FCPL: (a) the magazine module, (b) the measuring module, (c) the drilling module, and (d) the output module.
Figure 22. Waterfall charts for the magazine module (a) on the first day of testing, (b) on the second day of testing, and (c) on the third day of testing.
Figure 23. Waterfall charts for the measuring module (a) on the first day of testing, (b) on the second day of testing, and (c) on the third day of testing.
Figure 24. Waterfall charts for the drilling module (a) on the first day of testing, (b) on the second day of testing, and (c) on the third day of testing.
Figure 25. Waterfall charts for the output module (a) on the first day of testing, (b) on the second day of testing, and (c) on the third day of testing.
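
Figures 8–12 report OEE together with its availability, performance, and quality factors. For reference, the conventional OEE definition is simply the product of those three factors; the short sketch below illustrates that relationship with placeholder values only and is not output from either performance management solution evaluated here.

```python
# Illustrative only: the conventional OEE calculation (availability x performance x quality)
# with placeholder values; these numbers are not FCPL data.
availability = 0.90  # run time / planned production time
performance = 0.95   # (ideal cycle time x total count) / run time
quality = 0.99       # good count / total count

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")  # prints "OEE = 84.6%"
```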
Table 1. FCPL station changeable parameters.
Parameter | Value Range | Default Value | Station
Conveyor speed | 100 mm/s–50 mm/s | 100 mm/s | All
Drilling operation | Left Set, Right Set, Both | Both Sets | Drilling Module
Gripper horizontal motor speed | 0 mm/s–800 mm/s | 60 mm/s | Output Module
Acceptable differential measure limit | 49 mm–0 mm | 2 mm | Measuring Module
Pneumatic pressure | 0 Bar–10 Bar | 5 Bar | All
Table 2. FCPL initial ideal cycle times.
Station | Ideal Cycle Time (Initial Setup)
Magazine Module | 3.3 s
Measuring Module | 4.4 s
Drilling Module | 10.9 s
Output Module | 10.1 s
Table 3. Production breakdown by day.
Period | Parts Produced | Parts Scrapped
Day 1 | 2187 | 13
Day 2 | 1843 | 7
Day 3 | 2988 | 13
Total | 7018 | 33
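
To relate Table 3 to the reported quality rate in Figures 11 and 12, the sketch below recomputes a simple daily and cumulative quality rate from the production counts, assuming quality is taken as (parts produced − parts scrapped) / parts produced; the definition and variable names are illustrative rather than taken from either performance management tool.

```python
# Illustrative only: recompute a simple quality rate from the Table 3 counts,
# assuming quality = (parts produced - parts scrapped) / parts produced.
production = {
    "Day 1": (2187, 13),
    "Day 2": (1843, 7),
    "Day 3": (2988, 13),
}

for period, (produced, scrapped) in production.items():
    print(f"{period}: {(produced - scrapped) / produced:.2%} quality rate")

total_produced = sum(p for p, _ in production.values())  # 7018
total_scrapped = sum(s for _, s in production.values())  # 33
print(f"Total: {(total_produced - total_scrapped) / total_produced:.2%} quality rate")
```

These values fall within the 98–100% band shown in Figure 12.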
Table 4. FCPL final ideal cycle times.
Station | Ideal Cycle Time (after Improvements) | Difference from Initial Ideal Cycle Time
Magazine Module | 3.3 s | ±0.0 s
Measuring Module | 4.4 s | ±0.0 s
Drilling Module | 4.9 s | −6.0 s
Output Module | 6.6 s | −3.5 s
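
The cycle-time reductions in Table 4 translate into a higher ideal line rate at the slowest station. The sketch below illustrates that relationship under the assumption that the longest ideal cycle time among the four modules limits throughput; it is an illustrative calculation, not part of the deployed solution.

```python
# Illustrative only: estimate the ideal line rate before and after the cycle-time
# improvements, assuming the slowest (bottleneck) station limits throughput.
initial_s = {"Magazine": 3.3, "Measuring": 4.4, "Drilling": 10.9, "Output": 10.1}
final_s   = {"Magazine": 3.3, "Measuring": 4.4, "Drilling": 4.9,  "Output": 6.6}

def ideal_parts_per_hour(cycle_times_s):
    # The longest ideal cycle time among the modules sets the line's ideal rate.
    return 3600.0 / max(cycle_times_s.values())

print(f"Before: {ideal_parts_per_hour(initial_s):.0f} parts/h")  # ~330 (10.9 s bottleneck)
print(f"After:  {ideal_parts_per_hour(final_s):.0f} parts/h")    # ~545 (6.6 s bottleneck)
```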
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
