Process Capability Evaluation Using Capability Indices as a Part of Statistical Process Control

This study aims to highlight the importance of a systematic approach to process capability assessment and of following a defined sequence of steps. Statistical process control provides several different ways of assessing process capability. This study evaluates the process capability of crown cap manufacturing through capability indices. In addition to calculating the indices, the evaluation involves extensive data analysis. Before calculating the capability indices, the assumptions for their correct selection and use were verified. Several statistical tests were used to verify each assumption. The research value of the study lies in pointing out that not all tests led to the same conclusions, which highlights the importance of selecting the appropriate test type for the evaluated process quality characteristics.


Introduction
The intense pressure of the competitive environment forces manufacturers to emphasise the quality of their products and services. Quality is one of the most critical aspects of the customer's decision-making process when choosing between several products. The ISO 9000 standard defines quality as "the degree to which a set of inherent characteristics of a product meets a requirement, i.e., a need or expectation that is stated, generally assumed or mandatory" [1]. The product should, therefore, meet the requirements of the customer, who directly defines or expects them. A product's inherent characteristics (distinguishing features) that determine its fitness for purpose are called quality characteristics [2]. Each process is characterised by a certain degree of variability in the values of the quality characteristics. Beyond a certain magnitude, this variability may no longer be acceptable to the customer and affects their satisfaction. It is essential that producers not only meet customer quality requirements but also produce safely, in an environmentally friendly manner and at the lowest possible cost. One way to achieve the desired level of quality at a low cost is to reduce process variability and eliminate the production of nonconforming products.
Statistical quality control plays a vital role in process control. Its application is practical because it allows intervention in the process at the precise time it is needed. Early anticipation and nonconformity prevention reduce material, human resources, energy and control costs. Montgomery [3] describes statistical quality control as one of the most significant technological advances of the 20th century, which is easy to use and can be applied to any process. It is a set of methods for achieving process stability and improving process capability by reducing variability, thus reaching the required product quality and customer satisfaction. The seven basic statistical quality control methods include the histogram, check sheet, Pareto chart, cause-and-effect diagram, defect concentration diagram, scatter diagram and control chart.
The control chart is the most sophisticated of the seven methods. It is used for online process monitoring, estimating process parameters and determining process capability. It provides helpful information for process improvement and is an effective tool for reducing process variability as much as possible [3]. The control chart is a preventive tool of quality management [4,5].
W. A. Shewhart published the first control charts in 1924 [6]. He created them as a tool for distinguishing between natural process variability caused by several unavoidable causes (also known as common/natural/random/inherent/uncontrollable causes) and assignable causes of variability (also known as special/unnatural/systematic/controllable causes), such as poorly controlled machines, operator errors or faulty inputs to the process. In the first case, the process is referred to as being in a state of statistical control (or in-control), while in the second case, it is referred to as being in a state out of statistical control (or out-of-control) [7]. Due to the high precision of processes nowadays, classic control charts can be replaced by new-generation control charts or dynamic (adaptive) charts, which adjust the size of samples, sampling periods or locations of control limits to variable process conditions [8]. Keeping the process under statistical control enables achieving a stable level of process parameters over time and, thus, a stable level of the quality characteristics of process products. At that point, it is also possible to assess the process capability, i.e., "the ability of the process to produce a product that will fulfil the requirements for that product" [9]. Chakraborty and Chatterjee consider process capability analysis one of the most widely used statistical quality control methods today, applied to manufacturing and business processes alike [10].

Manufacturing Process Capability Analysis
Process capability analysis is an integral part of process validation. The process capability assessment aims "to determine the degree of reliability with which the values of the quality attribute under consideration correspond to the specification based on customer requirements" [11]. Process capability assessment is essential for the following reasons:

• It verifies the suitability of the designed process to ensure the required quality attributes of the product;
• It enables the likelihood of nonconformity to be estimated and optimises production planning;
• It is an essential indicator for planning the maintenance of production equipment;
• It is the basis for planning corrective actions and assessing their effectiveness;
• It provides evidence to the customer that the product has been produced under stable manufacturing conditions that ensure regular compliance with the prescribed quality criteria.
Process capability analysis has its place in product and process design, supply chain management, production planning and maintenance. Additional information is also provided by the capability assessment of the production equipment used [9]. Histograms or probability plots, control charts, the design of experiments [3], the rate of nonconformities or the proportion of nonconforming items [12] and the capability indices are used in process capability analysis. Kuo and Chuang [13], in their study, described the possibility of combining control charts and capability indices. They proposed the control charts Cpp, Cia and Cip, which can monitor and evaluate process quality and capability differences.

Manufacturing Process Capability Analysis Using the Histogram
The histogram, named by Karl Pearson, is one of the oldest and most widely used graphs for showing the distribution of quantitative data [14]. Montgomery [15] recommends using a dataset with a minimum of 100 observations when evaluating process capability using the histogram. The data should be divided into bins (intervals) of equal width. Too many bins lead to opacity, and too few bins lead to a coarse view. Therefore, the number of bins should be calculated based on the number of observations. It is possible to use, for example, the Freedman-Diaconis rule [16] or Sturges' rule [17], but the simplest way is to use the Square-root Rule, i.e., to calculate the number of bins as the square root of the number of observations. Then, the observations are sorted into the generated bins. The data frequencies in the bins are displayed in the histogram as bars on the y-axis, and the values of the midpoints of the bins are placed on the x-axis. The graph then gives a visual overview of the shape of the data distribution, its centre and its variability. The advantage of using a histogram is that it also gives an immediate picture of the performance of the process under consideration if the lower and upper specification limits (LSL and USL) and the target mean (T) are plotted on it.
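Under the Square-root Rule described above, the binning can be sketched as follows (the data, limits and target below are illustrative, not taken from the study):

```python
# Histogram binning via the Square-root Rule (illustrative data and limits).
import math
import random

random.seed(1)
data = [random.gauss(26.0, 0.05) for _ in range(100)]  # >= 100 observations
LSL, USL, T = 25.85, 26.15, 26.0                       # hypothetical limits

n_bins = round(math.sqrt(len(data)))                   # Square-root Rule
lo, hi = min(data), max(data)
width = (hi - lo) / n_bins                             # equal-width bins

counts = [0] * n_bins
for x in data:
    i = min(int((x - lo) / width), n_bins - 1)         # sort into the bins
    counts[i] += 1

for i, c in enumerate(counts):
    mid = lo + (i + 0.5) * width                       # bin midpoint (x-axis)
    print(f"{mid:7.3f} | {'#' * c}")
```

Plotting LSL, USL and T on the resulting chart then gives the immediate capability picture the text mentions.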

Manufacturing Process Capability Analysis Using the Probability Plot
The probability plot introduced by Wilk and Gnanadesikan [18] shows a set of data sorted in ascending order on the x-axis, with the values of their cumulative frequency plotted on the y-axis. The cumulative frequency may be calculated using the formula 100 · (j − 0.5)/n [19], where n is the size of the dataset and j is the ordinal number of the individual values, or other methods can be used, such as the Median Rank method, the Herd-Johnson method, the Kaplan-Meier method or the Modified Kaplan-Meier method [20]. If the assumed distribution (e.g., normal) adequately describes the data, the points plotted on the graph lie approximately along a straight line inside the confidence interval. The disadvantage of this procedure is that it does not provide a clear, objective assessment. For this reason, it is often supplemented with objective statistical tests.
Displaying the data using a probability plot allows for assessing the distribution's shape, centre and width. Similar to a histogram with a displayed specification band, this type of graph allows assessing the estimate of nonconforming process products, i.e., products with quality characteristics outside the specification interval.
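A minimal sketch of the plotting positions using the 100 · (j − 0.5)/n formula quoted above (the dataset is illustrative):

```python
# Plotting positions for a probability plot using 100*(j - 0.5)/n.
data = [5.9, 6.1, 6.0, 5.8, 6.2, 6.0, 6.05, 5.95]
n = len(data)

points = []
for j, x in enumerate(sorted(data), start=1):   # j = ordinal number
    cum_freq = 100 * (j - 0.5) / n              # cumulative frequency in %
    points.append((x, cum_freq))

for x, f in points:
    print(f"{x:5.2f} -> {f:6.2f} %")
```

If the (value, cumulative frequency) pairs fall roughly on a straight line on normal probability paper, normality is plausible; the Median Rank or Kaplan-Meier positions mentioned above would simply replace the formula on the marked line.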

Manufacturing Process Capability Analysis Using the Control Chart
Montgomery [3] considers the control chart the primary technique for analysing process capability. He recommends using variable control charts (the range or standard deviation control chart together with the mean control chart) rather than attribute control charts (the control chart for the count of nonconformities or the control chart for the number of nonconforming units) because of their greater power and the more comprehensive information they provide about the process. These control charts allow both the process level and its variability to be analysed. Montgomery points out that capability assessment should only be undertaken if the process is stable; otherwise, the process capability estimate is unreliable.

Manufacturing Process Capability Analysis Using Design of Experiments
Statistical process control methods are passive methods in which the process is monitored and information leading to beneficial change in the process is expected. The design of experiments introduced by Fisher [21] is an active statistical method. Deliberate changes are made to the process inputs, their effects on the outputs are observed, and this information can lead to process improvement. The designed experiments help identify the process variables that affect the output and find their optimal levels to ensure the desired level of process performance. They can also be used to estimate different sources of variability in the process.

Manufacturing Process Capability Analysis Using Capability Indices
A graphical representation of process capability using a histogram or other graphs is illustrative and valuable. However, a quantitative representation of process capability using capability indices is also helpful and essential. The capability indices simplify the capability assessment. They compare the required accuracy of a process with the variability of the process, or compare what the process "should do" with what the process "actually does" [4,22].
The following five-step procedure is recommended for assessing the capability of a process based on measurable quality characteristics [23]:

Selection of the quality characteristic
The process capability is evaluated based on a specific quality characteristic of the manufactured or intermediate product. It is necessary to select the characteristic whose value reflects the success of the process under consideration and is decisive for the product. The customer may specify this characteristic directly, or it may be critical with regard to a desired product property or a link to the following production process.

Measurement system analysis
Before data collection, it is necessary to analyse the measurement system of the selected quality characteristics and verify its acceptability, as an unacceptable measurement system may lead to incorrect results of the process capability assessment [24].
Many authors, such as McNeese and Klein [25], Persijn and Nuland [26], Mittag [27], Pearn and Liao [28] and Hsu et al. [29], have pointed out that the capability of the chosen characteristic's measurement system affects the process's calculated capability.

Data collection
When monitoring the capability of an existing process, data collection should be carried out over a sufficiently long period to reflect all the usual sources of variability that affect the process (e.g., operator turnover, feedstock supply, maintenance). During this period, several products shall be chosen from the production process at regular intervals, and their quality characteristics shall be measured. It is recommended that the number of subgroups be at least 25, with 4-5 values in each subgroup [23]. The extent of the sample is essential because it affects the accuracy of the index calculation.

Verification of the prerequisites of the quality characteristic
Verification of the Normality

An indicative assessment of whether the measured values conform to a normal distribution can be obtained from the shape of the constructed histogram (Section 1.1.1) or the probability plot (Section 1.1.2). An accurate way of verifying normality is to use statistical tests, for example, the Anderson-Darling test [30] or the Kolmogorov-Smirnov nonparametric goodness-of-fit test [31], which are based on the empirical distribution function, the Shapiro-Wilk test [32], the correlation-based Ryan-Joiner test [33] or the Jarque-Bera test [34], which is based on the measures of skewness and kurtosis. Different tests of normality are based on different principles and have different strengths. Therefore, some tests may reject normality while others may not. Multiple normality tests and one of the graphical tools, e.g., a probability plot, should be used simultaneously for a comprehensive assessment of normality. If the examined data are not normally distributed, it is possible to transform them to a normal distribution or to use modified index calculations that consider the relevant distribution.
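The recommendation to run several normality tests side by side can be sketched with scipy.stats (the data are simulated for illustration; note that plugging estimated parameters into the Kolmogorov-Smirnov test is a simplification):

```python
# Several normality tests applied to the same dataset (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=26.0, scale=0.05, size=125)

sw_stat, sw_p = stats.shapiro(x)                  # Shapiro-Wilk
ks_stat, ks_p = stats.kstest(
    x, "norm", args=(x.mean(), x.std(ddof=1)))    # Kolmogorov-Smirnov
ad = stats.anderson(x, dist="norm")               # Anderson-Darling

print(f"Shapiro-Wilk:       p = {sw_p:.3f}")
print(f"Kolmogorov-Smirnov: p = {ks_p:.3f}")
print(f"Anderson-Darling:   stat = {ad.statistic:.3f}, "
      f"5% critical value = {ad.critical_values[2]:.3f}")
```

Because the tests weight the tails and the centre of the distribution differently, exactly this kind of side-by-side run can produce the conflicting conclusions the study highlights.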

Data Independence Verification
The classical approach to process capability analysis assumes that the measured data are independent. An autocorrelation function was used to verify the independence of the data in the examined datasets [19]. Some processes, in particular biological or chemical processes, violate the assumption of independence. Procedures for assessing process capability for auto-correlated data can be found in studies by, e.g., Zhang [35], Vännman and Kulahci [36], Mohamadi et al. [37] or Sun et al. [38].

Verification of the Process Stability

Data must be collected from a process in a state of statistical control, in which the variability of the observed quality characteristic is due only to the influence of random causes. Suppose the control chart analysis shows that the process is not in a state of statistical control. In that case, it is necessary to analyse the observed process, detect the assignable causes [15] that caused the instability, eliminate them and create conditions such that their influence is not repeated. Pyzdek and Keller [39] point out that applying process capability methods to processes out of statistical control leads to unreliable estimates of process capability and should never be done. Process capability represents the performance of a process only in a state of statistical control [40].
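The independence check via the autocorrelation function can be sketched as a lag-k sample autocorrelation with the usual ±2/√n significance bound (the data are illustrative):

```python
# Lag-k sample autocorrelation for a quick data-independence check.
def autocorr(x, lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

data = [6.01, 5.98, 6.03, 5.99, 6.02, 6.00, 5.97, 6.04, 6.00, 5.98]
bound = 2 / len(data) ** 0.5        # approx. 95% bound for white noise
for lag in (1, 2, 3):
    r = autocorr(data, lag)
    flag = "significant" if abs(r) > bound else "ok"
    print(f"r({lag}) = {r:+.3f}  ({flag})")
```

Autocorrelations staying inside the bound at all low lags is consistent with independence; values outside it would call for the auto-correlated-data procedures cited above.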

Calculation of capability indices and comparison with required values
The final step in assessing process capability is to calculate the appropriate capability indices and compare them to the required values. The most commonly used indices for quality characteristics from a normal distribution are Cp and Cpk. The first assesses a process's potential capability, and the second assesses its actual capability to deliver products that consistently meet the tolerance limits. In addition, the Cpm and Cpmk indices assess the ability of the process to achieve the target mean of the quality characteristic being monitored. Sections 2.3-2.5 deal with these capability indices in more detail. Supplementing the calculated capability indices with graphs, as recommended by Vännman [41,42], improves the assessment of process capability.

The Purpose of the Study
This study aims to highlight the importance of a systematic approach to process capability assessment and the importance of following a sequence of five steps in this approach.
The study evaluates the capability of the crown cap manufacturing process for glass bottles using three selected quality characteristics (Section 2.2). Measurement system analysis is described in Section 3.1. Section 3.2.1 concerns data collection and a review of the measured datasets. Section 3.2.2 refers to the verification of the prerequisites of the quality characteristics. Section 3.2.3 presents the calculations of the capability indices described in Sections 2.3.2-2.3.7. The calculated values of the capability indices are compared with their required values. The indices are also examined to compare their ability to capture changes in the centring and variability of the production process. Actual and expected process capability is also assessed by calculating the number of nonconforming parts in ppm. The natural variability of the quality characteristics at a given time (short-term process variability) and the variability of the quality characteristics over time (long-term process variability) are assessed.

Materials and Methods
The first subsection describes the history of the origin and the manufacturing process of crown caps. The following subsection describes the capability indices Cp, Ca, Cpk, Cpm, C*pm and Cpmk and their confidence intervals.

Description of the Crown Cap Manufacturing Process
This article uses data on the manufacturing process of crown caps for glass bottles. The Irish inventor and founder of Crown Holdings, Inc., William Painter, living in Baltimore, USA, was behind their creation [43]. This first highly successful single-use product has been used almost unchanged since 1892. The crown cap has replaced the cork, ceramic and metal stoppers formerly used worldwide. It is a closure for a glass bottle that prevents its contents (beer, mineral water and others) from spoiling or being wasted during distribution and storage. Crown caps have also become an interesting artefact for collectors in their 132 years of use.
Technical data on the manufactured closures can be found in EN 17177 [44] and ISO 12821 [45]. The international standard ISO 12821 specifies a crown cap with an outer diameter of 26 mm, a height of 6 mm and 21 teeth, based on the Cetie data sheet GME 13.01. Similarly, the European standard EN 17177 specifies the dimensional requirements for this crown cap, which are customer-relevant. Neither the material nor the seal's profile is specified in the standard, as different profiles can be used, considering the crown cap's end use.
The case study uses data obtained from a company that manufactures crown caps. Its customers are producers of beer, soft drinks and mineral waters filled in glass bottles. The company also provides preparation of lithographs of crown caps, external and internal printing of caps, technical service, painting, printing of food-grade sheet metal and distribution of products directly to customers.
The entire production process occurs on progressive equipment, guaranteeing production efficiency. During the entire production process, the crown caps go through an automatic vacuum and visual inspection system, as well as video inspection.
The company places great emphasis on quality, from the introduction of production technologies through the control of input materials and the entire production process to the output control of the product. This output control is carried out in the form of various tests (see Figure 1) to determine the conformity of the properties of crown caps with the required technical standards.
In the company, the processes of control activities are divided, in terms of their implementation, into input, intermediate and output control. Input control focuses on the preliminary inspection of purchased material. The result of the incoming inspection is documented on the customer's delivery note. Deliveries of material that have not yet passed the incoming inspection shall not be used. Inter-operational control ensures that the relevant batch of products is not released for the next operation before it has been checked, i.e., before compliance with the established requirements has been verified. Output inspection aims to carry out tests according to the specification of EN 17177 [44]. The results of the individual tests are recorded on the appropriate forms. Conforming final products that meet the prescribed quality criteria are released and marked on the appropriate form.

Selection of the Quality Characteristic

The height, inner diameter, outer diameter, number of teeth and plastic weight are the quality characteristics used for assessing the quality of the crown caps (see Figure 2). In this case study, the data of three of these characteristics, height, outer diameter (hereafter referred to simply as the diameter) and weight, were evaluated. The height and diameter of the crown cap were measured using a digital deviation meter, and the plastic weight was measured using digital scales.


Capability Indices and Their Calculation

Process Variability

The process variability can be thought of in two ways [15]. It can be the natural (inherent) variability of a quality-critical characteristic at a given time, i.e., instantaneous variability (short-term process variability), or the variability of a quality-critical characteristic over time (long-term process variability). Short-term variability refers to differences between observations within the same subgroup or to differences between observations collected close together in time. Each small black curve in Figure 3 represents instantaneous variability. Long-term variability refers to the differences between observations collected over the entire data collection period. The one sizeable red curve in Figure 3 represents the overall variability for the study. Long-term and short-term variability are the same if the means and variabilities of the subgroups are not statistically significantly different. In that case, the manufacturing process is in a state of statistical control. Otherwise, the variability of the items within a subgroup will be smaller than the variability of the process as a whole. A significant difference between the long-term and short-term variability may indicate that the process is out of control or that there exist sources of variability that are not estimated within the capability [20].

The overall standard deviation of a process can be expressed by the formula

σ_overall² = σ_within² + σ_between² (1)

If the long-term and short-term variability are equal, then σ_between equals zero. As stated by Polhemus [12], statistics computed from estimates of σ_within express the capability of the process, and statistics computed from estimates of σ_overall express the performance of the process.
The long-term process variability is estimated using the sample standard deviation s of all n observations x_i, i.e., by the formula

s = √( Σ (x_i − x̄)² / (n − 1) ) (2)

For a small number of observations in the dataset, it is recommended that the bias of the estimate be corrected using the constant C4 for the construction of the Shewhart control chart, dependent on the sample size n, from ISO 7870-2 [7], by the formula

σ̂ = s / C4 (3)

Short-term process variability can be estimated in several ways for k subgroups of size n_j, j = 1, 2, ..., k, each greater than 1, for example:

• Using the pooled standard deviation based on a standard one-way analysis of variance, without (4) or with (5) a bias correction;
• Using the average subgroup range R̄ (6);
• Using the average subgroup standard deviation, without a bias correction (7) or with one, where C4, d2 and D3 are the constants for constructing the Shewhart control chart, dependent on the sample size n, from ISO 7870-2 [7].
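A minimal sketch of these estimators for k subgroups of equal size, using the tabulated Shewhart-chart constants for n = 5 (the data are illustrative):

```python
# Short-term (within) and long-term (overall) sigma estimates.
import statistics

subgroups = [
    [6.01, 5.98, 6.02, 6.00, 5.99],
    [6.03, 6.00, 5.97, 6.01, 6.02],
    [5.99, 6.02, 6.00, 5.98, 6.01],
]
C4, D2 = 0.9400, 2.326    # constants for subgroup size n = 5 (ISO 7870-2)
k, n = len(subgroups), len(subgroups[0])

# Long-term estimate: sample standard deviation of all observations
all_obs = [x for sg in subgroups for x in sg]
sigma_overall = statistics.stdev(all_obs)

# Pooled standard deviation (one-way ANOVA within-groups estimate)
pooled_var = sum((n - 1) * statistics.variance(sg) for sg in subgroups) \
             / (k * (n - 1))
sigma_pooled = pooled_var ** 0.5

# Average subgroup range divided by d2
r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_rbar = r_bar / D2

# Average subgroup standard deviation divided by c4 (bias-corrected form)
s_bar = statistics.mean(statistics.stdev(sg) for sg in subgroups)
sigma_sbar = s_bar / C4

print(sigma_overall, sigma_pooled, sigma_rbar, sigma_sbar)
```

For a process in statistical control, the within estimates and the overall estimate should agree closely, which is the comparison behind the capability/performance distinction above.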
Process capability indices are numerical measures of a process's capability to meet the specifications. In its simplest form, the capability index is the ratio between the allowable spread of the process (the width of the specification limits) and the actual spread of the process (6σ). Some indices also take into account the process average or its target value. Capability indices are unitless, so they help compare the capability of one process with another.
The evolution of the capability indices has moved away from primary indices towards more complex indices designed to address some of the shortcomings of the early indices.
Process capability indices can be divided into three generations. The Cp and Ca indices belong to the first generation. Cpk and Cpm belong to the second generation. The index Cpmk belongs to the third generation [23]. These indices use the within-subgroup variation to evaluate the short-term process capability. The performance indices Pp and Ppk use the overall variation to assess the long-term process capability.

Capability Index Cp
The first published process capability index was the simple "precision" index Cp, authored by Juran [46]. He combined process parameters with product specifications and created the concept of process capability indices by introducing this first index. Finley refers to this index as CPI, which stands for Capability Potential or Capability Potential Index [47]; on the other hand, Montgomery uses the term PCR, for process capability ratio [3,15]. Historically, the capability index was first used in the automotive industry at Ford Motor Company [48].
This index expresses a measure of the ability of a process to ensure that the quality attribute under consideration lies within the specification limits. To calculate it, it is necessary to specify both specification limits. It expresses the ratio of the allowable and the actual overall variability of the values but says nothing about their location within the specification interval [4]. It is calculated according to the formula

Cp = (USL − LSL) / (6σ)

The difference between the upper (USL) and lower (LSL) specification limit represents the prescribed specification interval (two-sided specifications), i.e., the production clearance. An interval of six standard deviations σ represents an estimate of the reality, the range of values in which products are produced, i.e., the process spread (the overall process variation). In the case of a normal distribution, 99.73% of the measured values lie within the natural tolerance limits of the process ⟨µ − 3σ; µ + 3σ⟩. The process standard deviation σ may be estimated using Formulas (4)-(7).
For a perfectly centred process lying within the specification interval, the C p index can be used to calculate the percentage of nonconforming items according to the formula %NC = 2 • Φ(−3 • C p ), where Φ(.) is the standard normal cumulative distribution function. For example, C p = 1.00 corresponds to %NC = 2700 parts per million (ppm), and C p = 1.33 corresponds to %NC = 63 ppm, as reported by Pearn and Kotz [48].
The capability ratio (1/C p ) • 100 is interpreted as the percentage of the width of the specifications used by the process [49].
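The C p calculation, the centred-process nonconformity estimate and the capability ratio described above can be sketched in a few lines of Python. The specification limits and σ below are hypothetical, chosen only for illustration:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution function Phi(.)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def c_p(usl, lsl, sigma):
    # Potential capability: specification width over the 6-sigma process spread
    return (usl - lsl) / (6.0 * sigma)

def nc_ppm_centred(cp):
    # Nonconforming fraction (ppm) for a perfectly centred normal process
    return 2.0 * norm_cdf(-3.0 * cp) * 1e6

cp = c_p(usl=10.4, lsl=9.6, sigma=0.1)   # hypothetical limits and sigma
print(round(cp, 3))                      # 1.333
print(round(nc_ppm_centred(1.0)))        # 2700 ppm, as cited in the text
print(round((1 / cp) * 100, 1))          # 75.0 -> % of spec width used
```

The last line reproduces the capability ratio interpretation: a process with C p = 1.33 uses 75% of the specification width.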

Capability Index C a
The "accuracy" index C a is used to measure the degree of centring of a process within the manufacturing tolerance [50]. It may be calculated by the formula C a = 1 − |µ − m| / d, where d = (USL − LSL)/2 is the half-width of the specification interval related to the manufacturing tolerance, and m = (USL + LSL)/2 is the midpoint between the upper and lower specification limits. The process mean µ may be estimated as the grand average calculated from the k sample means.
A value of C a = 1 means the process is centred at the midpoint between the upper and lower specification limits (µ = m). If 0 < C a < 1, the process mean lies between the midpoint and one of the specification limits. C a = 0 means the process mean is at one of the specification limits, and a value of C a < 0 means that µ > USL or µ < LSL, i.e., the process is significantly off-centre.
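The interpretation of the C a values above can be verified numerically; a minimal sketch with invented specification limits follows:

```python
def c_a(mu, usl, lsl):
    # Accuracy index: 1 - |mu - m| / d, where m is the spec midpoint
    # and d is the half-width of the specification interval
    d = (usl - lsl) / 2.0
    m = (usl + lsl) / 2.0
    return 1.0 - abs(mu - m) / d

print(c_a(mu=10, usl=12, lsl=8))   # 1.0 -> perfectly centred
print(c_a(mu=11, usl=12, lsl=8))   # 0.5 -> halfway to the USL
print(c_a(mu=12, usl=12, lsl=8))   # 0.0 -> mean sits on the USL
```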
The C a index complements the original Japanese index k, which describes the process capability in terms of the deviation of the process mean from the midpoint m. It provides a quantified measure of the extent to which the process is off-centre and is defined as k = 1 − C a .

Capability Index C pk
Kane [51] published the capability index C pk , which, unlike the C p index, takes into account not only the variability but also the location of the values of the quality characteristic of interest within the specification interval. It thus characterises the actual capability of the process to comply with the prescribed specification limits (two-sided specifications). It is one of the most widely used characteristics of manufacturing process capability. It is calculated according to the formula C pk = min{(USL − µ)/(3 • σ), (µ − LSL)/(3 • σ)}. The specification interval from LSL to USL is divided into two parts by the mean value µ.
If the centre of the distribution of the measured values lies in the middle of the specification interval, the widths of these two parts coincide, and the values of the C p and C pk indices are equal. If they do not coincide, the C pk value is smaller, so the C pk index is more stringent than the C p index. This index can also be written as the product of the C p and C a indices via the formula C pk = C p • C a . A disadvantage of the C pk index is that if µ moves away from T while σ does not change, C pk deteriorates, but if a change in σ compensates for the deviation of µ from T, the C pk index does not detect the failure to meet the target value T. Another disadvantage of the C pk index is that if the target value T does not lie in the middle of the specification interval, the index will not capture the deviation of T from the actual process mean.
In manufacturing, it is sometimes essential to control only one specification limit. In such cases of one-sided specifications, the C pkL or C pkU indices are used, which form the basis of the C pk index formulation.
The capability index C pkL is used when only the lower specification limit LSL is specified, i.e., when it is essential that the production process does not produce a value less than the LSL; the size of any upward deviation is not important. The capability index C pkU is used when only the upper specification limit USL is specified, i.e., when it is essential that a value greater than the USL is not reached; the size of any downward deviation is not important.
Oakland [52] recommends interpreting the C pk values as follows:
• If C pk < 0, the process is centred outside the specification limits and produces nonconforming outputs;
• If C pk = 0, the process is centred on one of the specification limits;
• If C pk = 2.0, there is a high confidence level of conforming process output, assuming control charts are used regularly.
Montgomery [3] suggested that a process is capable if the mean value lies at least 4σ away from the specification limits, i.e., C pk = 1.33. In this situation, the number of nonconformities per million opportunities for a two-sided specification is less than 90. A process with C pk = 2.0 is called a six-sigma process because the distance from the process mean to the nearest specification limit is six standard deviations. There are only 3.4 nonconformities per million opportunities in this situation. Montgomery et al. stated that such a large process capability is required because it is challenging to keep the process mean at the centre of the specification for long periods [53].
Pearn and Kotz call a process "capable" if 1.00 < C pk < 1.33, i.e., some process control is needed, "satisfactory" if 1.33 < C pk < 1.50, i.e., the quality of the process is adequate, and no rigorous quality control is needed, "excellent" if 1.50 < C pk < 2.00, and "super" if C pk > 2.00 [48].
The value of the C pk index can be used for other important calculations; for example, the percentage of nonconforming items can be calculated according to the formula, where Φ(.) is the standard normal cumulative distribution function.
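A short sketch ties the C pk definition to the normal tail probabilities. The limits, mean and σ below are hypothetical, and nc_ppm computes the exact tails of the normal distribution rather than reproducing any particular formula from the text:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution function Phi(.)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def c_pk(mu, sigma, usl, lsl):
    # Actual capability: distance from the mean to the nearer
    # specification limit, in units of 3 sigma
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

def nc_ppm(mu, sigma, usl, lsl):
    # Exact nonconforming fraction (ppm) for a normal process
    below = norm_cdf((lsl - mu) / sigma)
    above = 1.0 - norm_cdf((usl - mu) / sigma)
    return (below + above) * 1e6

# Centred process: C_pk equals C_p
print(round(c_pk(mu=10, sigma=0.5, usl=12, lsl=8), 3))   # 1.333
# Off-centre process: C_pk drops even though sigma is unchanged
print(round(c_pk(mu=11, sigma=0.5, usl=12, lsl=8), 3))   # 0.667
print(round(nc_ppm(mu=11, sigma=0.5, usl=12, lsl=8)))    # 22750 ppm
```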
The C p and C pk indices are suitable measures for identifying quality improvement when the primary measure of success is reducing process variability. However, they do not consider customer requirements for achieving the target mean T.

Capability Index C pm
The C pm capability index, proposed by Hsiang and Taguchi [54] and later independently by Chan et al. [55], removes some of the shortcomings of the C p and C pk indices while retaining their suitable properties. This "loss-based" or Taguchi index was proposed in the context of the loss function used in Taguchi's approach to quality assessment. It compares the maximal allowable variability of the observed quality characteristic, determined by the width of the specification interval, with its actual variability around a target mean T. It considers both the variability of the values of the quality characteristic of interest and the rate of attainment of the optimum value [56,57]. It is calculated according to the formula C pm = (USL − LSL)/(6 • √(σ² + (µ − T)²)). The expression σ² + (µ − T)² in Formula (15) measures the mean square loss due to non-compliance with quality production conditions. As the standard deviation increases and/or the mean moves away from the target value, the denominator of the index increases, and the C pm value decreases. As can be seen from Formula (16), the second-generation index C pm differs from the first-generation index C p only in the measure of process variation.
The connection between the C p , C pk and C pm indices when T = m is shown by the formula given in [58]. The advantage of the C pm index is its ability to also record changes in the mean, which in the C pk index are "compensated" by a change in the standard deviation. The index is used for two-sided tolerances and a target value T in the middle of the specification interval.
The C pm index can be used together with the C a index to calculate the percentage of nonconforming items in the case T = m [48] according to the formula, where Φ(.) is the standard normal cumulative distribution function.
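The loss-based behaviour described above is easy to demonstrate numerically; a minimal sketch with hypothetical limits, σ and target follows:

```python
from math import sqrt

def c_pm(mu, sigma, usl, lsl, target):
    # Taguchi index: specification width over 6x the root mean square
    # deviation from the target T
    tau = sqrt(sigma ** 2 + (mu - target) ** 2)
    return (usl - lsl) / (6.0 * tau)

# On target, C_pm reduces to C_p
print(round(c_pm(mu=10, sigma=0.5, usl=12, lsl=8, target=10), 3))    # 1.333
# A mean shift inflates the denominator and pulls C_pm down,
# even though sigma is unchanged
print(round(c_pm(mu=10.5, sigma=0.5, usl=12, lsl=8, target=10), 3))  # 0.943
```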

Capability Index C* pm
This index is a generalisation of the C pm index. It is used when the target value does not lie at the midpoint of the specification interval or when only one specification limit is known [55]. It is calculated according to the formula. If the mean value of the observed quality characteristic µ is equal to the target mean T, which also lies at the centre of the specification interval, the capability indices C pk , C pm and C* pm are equal. The disadvantage of the C pm and C* pm indices is that if their calculated values indicate process incapability, they do not provide information about its causes.

Capability Index C pmk
Pearn et al. [59] proposed calculating this index by combining the numerator of the C pk index with the denominator of the C pm index via the formula C pmk = min{USL − µ, µ − LSL}/(3 • √(σ² + (µ − T)²)). This third-generation capability index has the advantage of higher sensitivity to variability around the target mean T. It compares the distance of the mean value of the quality characteristic of interest to the closer specification limit with half the variability around the target value. In the case of two-sided symmetric specifications, the relationship between the capability indices given in [40] holds. The C pmk index can also be used to calculate the percentage of nonconforming items via the formula, where Φ(.) is the standard normal cumulative distribution function. If the process mean is centred at the midpoint between the upper and lower specification limits (µ = m), then Formula (22) reduces to the simple form %NC = 2 • Φ(−3 • C pmk ), and C a = 1.
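The C pmk construction can be sketched directly from the definition above (hypothetical limits, σ and target, chosen so the values can be compared with the earlier C pk and C pm examples):

```python
from math import sqrt

def c_pmk(mu, sigma, usl, lsl, target):
    # Third-generation index: C_pk numerator over the C_pm denominator
    tau = sqrt(sigma ** 2 + (mu - target) ** 2)
    return min(usl - mu, mu - lsl) / (3.0 * tau)

# With mu = T = m, the indices coincide: C_pk = C_pm = C_pmk
print(round(c_pmk(mu=10, sigma=0.5, usl=12, lsl=8, target=10), 3))    # 1.333
# Off target, C_pmk is the most stringent of the three
print(round(c_pmk(mu=10.5, sigma=0.5, usl=12, lsl=8, target=10), 3))  # 0.707
```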

Confidence Intervals for Capability Indices
Since the parameters of the underlying population µ and σ are usually not available when calculating capability indices for quality characteristics coming from a normal distribution, they are replaced by their sample estimates µ̂ (see Formula (11)) and σ̂ short-term (see Formulas (4)-(7)). These estimates are then used to calculate point estimates of the indices Ĉp , Ĉpk , Ĉpm and Ĉpmk . Therefore, several authors, for example [19,40,60,61], recommend evaluating the capability of the investigated process based on the lower bound of the confidence interval for the estimated capability indices.
Nagata and Nagahata [62] proposed a calculation of the two-sided confidence interval for the C pk index, where z α/2 is the upper α/2 quantile of the standard normal distribution. Marcucci and Beazley [63] suggested a very easily calculated lower confidence bound for the C pm index, where χ² n,1−α is the α percentage point of the Pearson chi-squared distribution with n degrees of freedom [40].
Chen and Hsu [64] derived a 100(1 − α)% confidence interval for the C pmk index, where σ̂ pmk is calculated via the formula with δ = (x̄ − T)/s. Another way of dealing with the unknown parameters µ and σ is to use control charts to monitor the stability of the process under study. Chatterjee and Chakraborty [65] described this procedure for the C pkL and C pkU indices, i.e., for the cases of one-sided tolerance limits. Ahmad and Cheng [66] developed X̄ and R fuzzy control charts based on the fuzzy process capability indices C p and C pk , using triangular fuzzy numbers for the unknown population parameters µ and σ.
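The analytic intervals cited above are not reproduced in this extract. As an assumption-light alternative, a percentile-bootstrap lower confidence bound for Ĉpk can be sketched as follows; this is a generic illustration, not the Nagata-Nagahata or Chen-Hsu construction, and the data are simulated:

```python
import random
from statistics import mean, stdev

def c_pk(data, usl, lsl):
    # Point estimate of C_pk from a sample
    mu, s = mean(data), stdev(data)
    return min(usl - mu, mu - lsl) / (3.0 * s)

def bootstrap_lower_bound(data, usl, lsl, alpha=0.05, reps=2000, seed=1):
    # Percentile-bootstrap lower confidence bound for C_pk:
    # resample with replacement, re-estimate, take the alpha quantile
    rng = random.Random(seed)
    n = len(data)
    estimates = sorted(
        c_pk([rng.choice(data) for _ in range(n)], usl, lsl)
        for _ in range(reps)
    )
    return estimates[int(alpha * reps)]

rng = random.Random(0)
sample = [rng.gauss(10, 0.5) for _ in range(100)]
point = c_pk(sample, usl=12, lsl=8)
lower = bootstrap_lower_bound(sample, usl=12, lsl=8)
print(round(point, 2), round(lower, 2))  # lower bound sits below the estimate
```

Judging capability by the lower bound rather than the point estimate guards against declaring a process capable on the strength of sampling luck alone.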

Summary of the Development of the Capability Indices
In recent years, many authors have presented newly proposed capability indices, which they have used to assess the capability of processes in different industries. Summary articles provide an overview of univariate and multivariate indices.
The article by Palmer and Tsui [67] compares the response of various capability indices to changing process conditions. It identifies their helpful characteristics and areas of application, and gives recommendations for selecting indices in the context of the current level of process performance. The authors group the indices according to the loss functions used in their interpretation and discuss the characteristics of the various indices.
In their paper, Kotz and Johnson [68] developed a review of process capability indices published between 1992 and 2000 in approximately 170 publications.
The prerequisites for applying standard capability indices are not met in many processes. Anis [61] provides an overview of capability indices for normally distributed quality characteristics and maps capability indices accounting for the effects of non-normality, asymmetry or auto-correlated data. Pearn et al. [69], Kashif et al. [70], Safdar et al. [71], Chen et al. [72], Aytaçoglu and Genç [73] and Erfanian et al. [74] proposed methodologies to address deviations from normality.
Spiring et al., in their study [75], reviewed articles on process capability assessment from 1990 to 2002. In many cases, asymmetric tolerances are used for quality characteristics. How to evaluate process capability in such cases is described by, e.g., Pearn and Wu [78] and Grau [79].
Kaya and Çolak [80] reviewed studies that use fuzzy capability indices. These indices use a fuzzy mean, a fuzzy variance and upper and lower specification limits expressed by linguistic variables, which makes them more sensitive and flexible than traditional capability indices.
Regarding univariate process capability indices, Yum mapped 1080 journal papers and books for the period 2010-2021 [81], and he also reviewed the literature on multivariate process capability indices [82].
Similarly, de-Felipe and Benedito [83] discussed univariate and multivariate capability indices in their article. These authors created three classes of characteristics, which they used to break down the indices published from 1986 to 2016. In the first class, they placed characteristics related to information about the input process data; in the second class, they included characteristics related to the approach used to compute the indices; and in the third class, they placed characteristics related to the information provided by the indices. The authors used four criteria (complexity of computation, globality of the index, relation to the proportion of nonconforming parts and robustness of the index) to identify the strengths and weaknesses of each multivariate index.

Results and Discussion
The first subsection focuses on analysing the measurement systems used, to show that the observed variability is mainly due to process variability and only to a minimal extent to the variability of the measurement systems used to measure the observed quality characteristics. The assumptions for using the selected capability indices are then validated, followed by the calculation and evaluation of the capability indices.

Measurement System Analysis
The results of measuring process products are usually accompanied by deviations, which can be caused by differences between the individual parts and by imperfections in the measurement system used. A measurement system is generally considered acceptable if the percentage of repeatability and reproducibility (%R&R) is less than 10%. The system may be acceptable in the 10% to 30% interval; above 30%, it is considered unacceptable [40,84]. Considering %R&R = 26.61%, it can be concluded that the evaluated measurement system may be acceptable.
The final step of the measurement system analysis is determining the number of distinct categories the measurement system can reliably distinguish. The number of distinct categories (ndc) for this study equals 5.05. An ndc value greater than 5.00 means the measurement system is adequate according to the AIAG (Automotive Industry Action Group, Southfield, MI, USA) rules [84].
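The two acceptance checks above can be reproduced numerically. The standard deviations below are hypothetical, chosen so that the ndc value matches the study's 5.05; the AIAG relations %R&R = 100 • σ_GRR/σ_total and ndc = 1.41 • σ_parts/σ_GRR are assumed:

```python
from math import sqrt

def pct_grr(sigma_grr, sigma_total):
    # %R&R: share of the total observed variation taken up by the
    # measurement system, expressed on standard deviations
    return 100.0 * sigma_grr / sigma_total

def ndc(sigma_parts, sigma_grr):
    # Number of distinct categories the gauge can resolve
    # (AIAG rule: ndc >= 5 is considered adequate)
    return 1.41 * sigma_parts / sigma_grr

# Hypothetical standard deviations for the gauge and the parts
sigma_grr, sigma_parts = 0.266, 0.953
sigma_total = sqrt(sigma_grr ** 2 + sigma_parts ** 2)
print(round(pct_grr(sigma_grr, sigma_total), 1))  # in the 10-30% grey zone
print(round(ndc(sigma_parts, sigma_grr), 2))      # 5.05 -> adequate
```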

Data Collection and Review of Measured Datasets
Measurements of eight randomly selected crown caps were taken twenty-five times at set regular intervals. The data for the three parameters studied were recorded chronologically in a table (Table A1). From these, further calculations were possible to evaluate the capability of the manufacturing process.
Table 1 contains the basic statistical measures of location, variation and shape for the three processed datasets. The dot plots and Tukey's box-and-whisker plots [85] for these datasets are in Figure 4.

Tukey's box-and-whisker plots show no outside points, and the whiskers have approximately similar lengths on the left and right sides. The negative skewness values calculated for the diameter and height datasets (−0.088; −0.205) indicate slightly left-skewed distributions; conversely, the positive skewness value calculated for the weight dataset (0.350) indicates a slightly right-skewed distribution. All three datasets have negative kurtosis values (−0.609; −0.308; −0.420), indicating slightly platykurtic distributions.

Verification of Prerequisites for the Selection of Appropriate Capability Indices
Process capability was evaluated using capability indices under the assumption that the values of the observed quality characteristics come from a normal distribution and were obtained from a process in a state of statistical control. Verifying these assumptions was, therefore, the first step.

Verification of the Normality
The estimated cumulative probability in the probability plots was calculated by the median rank method using the formula, where n equals the number of observations, and i equals the rank order of each observation, i = 1 for the smallest value and i = n for the largest. The pairs of values {x(i); CDF i } plotted in the probability plots (Figure 5) fall along a straight blue line.

The plots visually repeat the same observations' values (red dots). Most points lie within the 95% confidence interval (blue curves) and would pass the fat pencil test. Therefore, the hypothesised normal distribution is appropriate for all datasets.
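The plotting positions used in the probability plots above can be computed with Benard's median rank approximation, a widely used form of the median rank formula (the paper's own Formula (28) is not reproduced in this extract):

```python
def median_ranks(n):
    # Benard's approximation of the median rank plotting position:
    # CDF_i = (i - 0.3) / (n + 0.4), for i = 1..n
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

print([round(r, 3) for r in median_ranks(5)])
# [0.13, 0.315, 0.5, 0.685, 0.87]
```

Pairing these positions with the sorted observations x(i) gives the points of the normal probability plot.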
The null hypothesis (H0), "data follow a normal distribution", and the alternative hypothesis (H1), "data do not follow a normal distribution", were tested via the Ryan-Joiner test, the Jarque-Bera test and the Anderson-Darling test. Their use led to different conclusions.
The Ryan-Joiner test [33] uses a correlation coefficient R p to measure the correlation between the measurement data x i , i = 1, 2, ..., n, and the normal scores of the measurement data b i , i = 1, 2, ..., n. If the correlation coefficient calculated via Formula (29) is near 1, the data fall close to the normal probability plot. The null hypothesis is rejected if the p-value is less than the chosen significance level α.
The Jarque-Bera test [34] is a goodness-of-fit test that determines whether the data's skewness (S) and kurtosis (K) conform to a normal distribution. The test statistic (JB) can be calculated via Formula (30). The null hypothesis cannot be rejected if the p-value is greater than the chosen significance level α, i.e., if the test statistic is less than the critical value from a chi-squared distribution with two degrees of freedom.
The Anderson-Darling test is a goodness-of-fit test and is widely regarded as one of the best statistics of this type available, even for relatively small sample sizes. It is similar to the Kolmogorov-Smirnov test. The Anderson-Darling statistic, calculated via Formula (31), measures the area between the fitted line (based on the chosen distribution) and the nonparametric step function (based on the plot points). It is a squared distance weighted more heavily in the distribution's tails, where Φ(.) is the cumulative distribution function of the normal distribution.
The results of the three normality tests are presented in Table 2. The Ryan-Joiner test p-values (<0.005, 0.012, <0.005) are smaller than the chosen significance level α = 0.05, so the null hypothesis was rejected for all datasets. The Jarque-Bera tests give the opposite result, i.e., the calculated JB statistics are less than χ²(2), so the null hypothesis was not rejected for any dataset. These results are consistent with the probability plots in Figure 5. The Anderson-Darling test results confirm the results of the Jarque-Bera tests. This test is sensitive to outliers and to the occurrence of tied values. Although no outliers exist in the examined datasets (see Figure 4), tied values do occur (see Figure 5). If the Anderson-Darling test is significant and the Ryan-Joiner or Jarque-Bera test is not, Jarošová and Noskievičová [11] recommend a decision in favour of the null hypothesis.

Data Independence Verification
An autocorrelation function was used to verify the independence of the data in the examined datasets [19]. The values of the autocorrelation function, ACF(k), express the strength of the relationship between a value x t and the lagged values x t−k via the formula. Using the general rule for the number of lags, k = n/4; in this study, k = 200/4 = 50 lags, as shown in Figure 6. The autocorrelation function plots for the three quality characteristics, with 5% significance limits (red dotted curves) for the autocorrelations (blue columns), show that the observed data are not auto-correlated, i.e., they are independent.
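The sample autocorrelation used above can be sketched directly (the trending series below is an invented example; the formula is the usual lag-k covariance over the sample variance):

```python
from math import sqrt

def acf(x, k):
    # Sample autocorrelation at lag k: lag-k covariance over the variance
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[t] - m) * (x[t - k] - m) for t in range(k, n))
    return cov / var

# A strongly trending series shows high lag-1 autocorrelation ...
print(round(acf(list(range(10)), 1), 2))   # 0.7
# ... while the approximate 5% significance limit in an ACF plot
# is +/- 1.96 / sqrt(n)
print(round(1.96 / sqrt(200), 3))          # 0.139 for the study's n = 200
```

Autocorrelations within the ±1.96/√n band, as in Figure 6, are consistent with independent data.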

Verification of the Process Stability
One way to verify that the monitored process is in a state of statistical control is with control charts. Eight measurements in twenty-five subgroups were performed for each of the three quality characteristics under study. Shewhart mean control charts (see Formula (33)) and standard deviation control charts (see Formula (34)) were used.
UCL is the upper control limit; CL is the central line; LCL is the lower control limit; x̄̄ is the grand average of the subgroups; s̄ is the average standard deviation; and A 3 is a constant for the construction of the control chart that depends on the subgroup size [7]. For n = 8, A 3 = 1.099. B 3 and B 4 are constants for the construction of the control chart that depend on the subgroup size [7]. For n = 8, B 3 = 0.185 and B 4 = 1.815. The calculated control limits are shown in Figure 7, which contains control charts created with the statistical package Minitab 15.
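With the constants quoted above for n = 8, the control limits of Formulas (33) and (34) can be computed directly; the grand average and average standard deviation below are hypothetical:

```python
def xbar_s_limits(xbarbar, sbar, a3, b3, b4):
    # Shewhart Xbar chart: CL +/- A3 * sbar (Formula (33));
    # s chart: B3 * sbar .. B4 * sbar (Formula (34))
    xbar_chart = (xbarbar - a3 * sbar, xbarbar, xbarbar + a3 * sbar)
    s_chart = (b3 * sbar, sbar, b4 * sbar)
    return xbar_chart, s_chart

# Constants for subgroup size n = 8, as given in the text;
# the grand average and average s are invented for the example
(xl, xc, xu), (sl, sc, su) = xbar_s_limits(
    xbarbar=10.0, sbar=0.12, a3=1.099, b3=0.185, b4=1.815
)
print(round(xl, 4), round(xu, 4))   # 9.8681 10.1319
print(round(sl, 4), round(su, 4))   # 0.0222 0.2178
```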
Vardeman and Jobe [85] described several patterns that can occur in control charts, such as changes in level, mixtures, trends, freaks, and grouping or bunching. However, none of the control charts created showed these patterns. Eight tests for assignable causes were performed on the Xbar charts according to ISO 7870-2. These tests are known as the Western Electric [86] or AT&T rules.
The occurrence of these patterns suggests an assignable cause of variation that should be investigated. On the other hand, it is relevant to note [7] that some of these causes may appear as a consequence of the nature of the process itself, and the process may not be one that produces inconsistent outputs. Therefore, these pattern tests are used only as guidelines, not as strictly specified rules. ISO 7870-2 draws attention to the different use of these tests in Phases I and II of statistical control. While in Phase I the goal is to quickly uncover all assignable causes and bring the process from a state out of statistical control to a state of statistical control, in Phase II the goal is to maintain the process in a state of statistical control. A minimal probability of a type I error is then required, and the standard recommends avoiding the joint use of some tests. Test number 1 is the fundamental rule of the Shewhart control chart and acts as an omnibus test. If there is a relatively small shift or trend in the process mean, a complementary rule, such as Test number 5, is applicable. Neither test was positive for the Xbar and s control charts.
The generated control charts' conclusions about the process's stability were further supported by homoscedasticity and constant mean tests for the 25 subgroups of each dataset.
Various tests can be used to check the homogeneity of variances, for example, Cochran's [87], Hartley's [88] and Bartlett's [89] tests, which are based on within-group variances, or Levene's [90] and Brown-Forsythe's [91] tests, which use the absolute deviations of the measured values from the group mean or median. These tests were used to test the null hypothesis, "the variances among the k samples are equal", against the alternative hypothesis, "at least one group has a variance that is not equal to the rest". Bartlett's and Levene's tests were used in this study.
Bartlett's test is appropriate if the data come from a normal distribution. The test statistic B is calculated using the formula, where s² h is the variance of sample h, n h is the number of observations in sample h (the group sizes need not be equal), and c is calculated as given. Levene's test, on the other hand, can be used for data from an arbitrary continuous distribution. Its test statistic L uses the averages of the absolute deviations z h = |x i,h − x̄ h |, for h = 1, 2, ..., k, and the overall average z̄. Table 3 shows the results of the two homogeneity of variance tests. All p-values are greater than the chosen significance level of 0.05. The null hypothesis was not rejected in any case; it can therefore be concluded that the variances among the 25 samples do not differ statistically significantly in any of the three investigated datasets. The next step of the study was to test the null hypothesis, "the differences between the means of the k samples are not statistically significant", against the alternative hypothesis, "at least one mean is statistically significantly different from the others". Two tests were used: the parametric one-way analysis of variance [92] and the nonparametric Kruskal-Wallis test [93].
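The homogeneity-of-variance step above can be sketched in stdlib Python. Bartlett's statistic B is computed below from the standard definition (the subgroup data are invented, and 5.99 is the 5% chi-squared critical value for k − 1 = 2 degrees of freedom):

```python
from math import log

def bartlett_statistic(groups):
    # Bartlett's B: compare with chi-squared, k - 1 degrees of freedom;
    # assumes the data come from normal distributions
    k = len(groups)
    n = [len(g) for g in groups]
    big_n = sum(n)
    means = [sum(g) / len(g) for g in groups]
    var = [sum((v - mu) ** 2 for v in g) / (len(g) - 1)
           for g, mu in zip(groups, means)]
    sp2 = sum((nh - 1) * vh for nh, vh in zip(n, var)) / (big_n - k)
    c = 1.0 + (sum(1.0 / (nh - 1) for nh in n)
               - 1.0 / (big_n - k)) / (3.0 * (k - 1))
    return ((big_n - k) * log(sp2)
            - sum((nh - 1) * log(vh) for nh, vh in zip(n, var))) / c

# Three hypothetical subgroups with similar spread -> small B
g1 = [10.0, 10.1, 9.9, 10.05, 9.95]
g2 = [10.2, 10.05, 9.9, 10.0, 10.1]
g3 = [9.95, 10.0, 10.1, 9.85, 10.05]
b = bartlett_statistic([g1, g2, g3])
print(round(b, 3), b < 5.99)   # well below the critical value
```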
The statistic F of the one-way analysis of variance is calculated as the ratio of the mean square between subgroups to the error mean square; that is, it compares the variability between subgroups with the variability within subgroups. In the formula, x hi represents the ith observation in the hth sample, x̄ h is the average of the hth sample, and x̄ is the overall average of the data from the k samples. The test statistic H of the Kruskal-Wallis test uses the rank sums R h of the k compared samples. The results of these tests are shown in Table 4. The null hypothesis was not rejected in any of the tests conducted. At the significance level of 0.05, no sample mean from the 25 samples is statistically significantly different from the others in any of the three investigated datasets. The implemented tests confirmed that the selected quality characteristics come from a normal distribution and that the observed process is stable. Therefore, it was possible to continue the evaluation of the process capability using the capability indices C p , C a , C pk , C pm and C pmk .
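The F statistic described above can be sketched from its definition; the small groups below are invented purely to show the two extreme cases (identical means versus clearly separated means):

```python
def one_way_f(groups):
    # F = between-group mean square / within-group (error) mean square
    k = len(groups)
    all_vals = [v for g in groups for v in g]
    big_n = len(all_vals)
    grand = sum(all_vals) / big_n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (big_n - k))

a = [10.0, 10.1, 9.9, 10.0]
b = [10.0, 9.9, 10.1, 10.0]
c = [10.3, 10.4, 10.2, 10.3]

print(round(one_way_f([a, b]), 3))   # 0.0  -> identical group means
print(round(one_way_f([a, c]), 3))   # 27.0 -> a clearly shifted mean
```

A large F (relative to the F distribution with k − 1 and N − k degrees of freedom) would indicate that at least one subgroup mean differs, i.e., an unstable process level.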

Calculation of the Capability Indices and Comparison with the Required Value
Table 5 contains the calculated dataset means and standard deviations (within variation and overall variation) on its left side; on its right side are given the target means, tolerances and lower and upper specification limits, which are necessary for calculating the capability indices. Table 6 contains the calculated indices for the crown cap parameters. On the left side of Table 6 are the capability indices representing the potential capability of the process, i.e., these indices show whether the process would be capable if it had no shifts and drifts in the subgroup means. On the right side of the table are the performance indices that represent the overall capability of the process; when calculating these indices, σ_overall is used to cover the variation of the whole dataset. The variation between measurements within subgroups is due to common causes, whereas variation between subgroups may be due to special causes.
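As a minimal sketch of the index definitions used here (with Diameter's reported mean and a σ back-solved from C_p = 1.78; the exact standard deviation in Table 5 may differ slightly, so this is an illustration rather than the paper's computation):

```python
import math

def capability_indices(mean, sigma, lsl, usl, target):
    """Univariate capability indices Cp, Ca, Cpk, Cpm, Cpmk."""
    d = (usl - lsl) / 2.0                       # half-width of the specification interval
    cp = (usl - lsl) / (6.0 * sigma)            # potential capability
    ca = 1.0 - abs(mean - target) / d           # accuracy (centring) index
    cpl = (mean - lsl) / (3.0 * sigma)          # one-sided index toward LSL
    cpu = (usl - mean) / (3.0 * sigma)          # one-sided index toward USL
    cpk = min(cpl, cpu)
    tau = math.sqrt(sigma**2 + (mean - target)**2)  # Taguchi-style spread about target
    cpm = (usl - lsl) / (6.0 * tau)
    cpmk = min(usl - mean, mean - lsl) / (3.0 * tau)
    return dict(Cp=cp, Ca=ca, CpkL=cpl, CpkU=cpu, Cpk=cpk, Cpm=cpm, Cpmk=cpmk)

# Diameter dataset: mean and limits as reported in the text; sigma is an assumption
idx = capability_indices(mean=32.045, sigma=0.0375, lsl=31.90, usl=32.30, target=32.10)
print({k: round(v, 2) for k, v in idx.items()})
```

With these inputs the sketch reproduces the reported Diameter values C_p ≈ 1.78, C_pkL ≈ 1.29, C_pkU ≈ 2.27, C_pm ≈ 1.00 and C_pmk ≈ 0.73.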
In the next part, the values of the calculated indices are analysed for the three monitored quality characteristics and compared with the required value of 1.33. The graphs in Figure 8 show the three capability histograms with normal curves based on the overall and within standard deviations. Table 7 provides a summary of the estimates of the incidence of nonconforming products below the lower specification limit (LSL) and above the upper specification limit (USL), as well as the total number of nonconforming products in parts per million (ppm), for the three crown cap parameters. The table contains data on observed, expected within and expected overall process performance. The observed performance NC <LSL is the number of parts out of one million whose measurements are less than the lower specification limit; the observed performance NC >USL is the number of parts out of one million whose measurements are greater than the upper specification limit; NC Total is the number of parts outside the specification interval. Formulas (40) were used to calculate the expected numbers of nonconforming parts (NC):

NC_{<LSL} = 1,000,000 · Φ((LSL − x̄)/σ̂),  NC_{>USL} = 1,000,000 · (1 − Φ((USL − x̄)/σ̂)). (40)
Φ(.) is the standard normal cumulative distribution function. For the calculation of the number of nonconforming parts for the expected within performance, σ̂ = σ̂_within is used, and for the expected overall performance, σ̂ = σ̂_overall.

Process Capability Assessment for the Quality Characteristic Diameter
The value of the index C_p = 1.78 for the quality characteristic Diameter is higher than 1.33. The degree of the process's ability to keep this quality characteristic within the specification limits is very high. The capability ratio (1/C_p) · 100 shows that only 56.09% of the specification interval is used. The index C_a = 0.73 shows that the process mean lies between the target value and the lower specification limit. Figure 8a shows that the process lies inside the specification interval <31.90, 32.30> mm. The centring of the process could be better because the actual mean value of 32.045 mm is less than the target mean of 32.10 mm. This difference is captured by the C_pk index, whose value of 1.29 is less than the desired value of 1.33 for the Diameter quality characteristic. Consistent with Oakland [52], the process can be considered suitable for standard products; according to Pearn and Kotz [48], it can be described as a "capable" process that requires some control. The shift from the target mean to the left is also visible in the significantly different values of the indices C_pkL = 1.29 and C_pkU = 2.27. The difference between the actual and target mean is better expressed by the index C_pm = 1 and best by the index C_pmk = 0.73; their values did not reach the required level of 1.33. The results show that although the process's potential ability to satisfy the specification limits is high, its centring to the left of the target mean reduces its capability. Figure 8a shows that the difference between the within and overall variability, expressed by the red and black dashed curves, is very small, i.e., the process is under control but needs to be improved by centring on the target. Table 7 "Observed
performance" shows 0.00 ppm nonconforming parts, i.e., all measured parts lie inside the specification interval. The number of parts whose Diameter is expected to be less than the lower specification limit is 52.77 ppm by the calculated "Expected within performance" and 66.36 ppm by the "Expected overall performance". No parts are expected to be greater than the upper specification limit.
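The Formula (40)-style ppm estimate can be sketched as follows, using the same assumed Diameter parameters as above (so the result only approximates the 52.77 ppm in Table 7, which is based on the exact σ̂_within):

```python
from scipy.stats import norm

def expected_ppm(mean, sigma, lsl, usl):
    """Expected nonconforming parts per million under the fitted normal model."""
    below = norm.cdf((lsl - mean) / sigma) * 1_000_000   # NC < LSL
    above = norm.sf((usl - mean) / sigma) * 1_000_000    # NC > USL (sf = 1 - cdf)
    return below, above, below + above

# Diameter: mean and limits as reported; sigma is an assumed value
below, above, total = expected_ppm(mean=32.045, sigma=0.0375, lsl=31.90, usl=32.30)
print(f"NC<LSL ≈ {below:.1f} ppm, NC>USL ≈ {above:.2e} ppm, total ≈ {total:.1f} ppm")
```

As in Table 7, essentially all of the expected nonconformities fall below the LSL; the upper tail contributes a negligible amount.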

Process Capability Assessment for the Quality Characteristic Height
The value of the index C_p = 1.58 for the Height quality characteristic is greater than 1.33. The degree of the process's ability to keep this quality characteristic within the specification limits is very high. The capability ratio shows that only 63.41% of the specification interval is used. The index C_a = 0.89 shows that the process mean of 5.984 mm lies between the target value of 6.00 mm and the lower specification limit of 5.85 mm. The difference between the actual mean value of 5.984 mm and the target mean of 6.00 mm is slight; the process lies within the specification interval <5.85, 6.15> mm. The value of C_pk = 1.41 exceeds the target value of 1.33. Similarly to the quality characteristic Diameter, the process can be considered suitable for standard products [52], a "satisfactory" process that requires some control [48]. The slight shift from the target value to the left is visible in the difference between the indices C_pkL = 1.41 and C_pkU = 1.75. The C_pm and C_pmk indices captured the difference between the actual and target mean better than the C_pk index: the index C_pm = 1.39 exceeded the desired value of 1.33, while the index C_pmk = 1.23 approached it but did not reach it. Figure 8b shows almost no difference between the within and overall variability curves. "Observed performance" shows 0.00 ppm nonconforming parts, i.e., all measured parts lie inside the specification interval. The number of parts whose Height is expected to be less than the lower specification limit is only 12.22 ppm by the "Expected within performance" and 16.17 ppm by the "Expected overall performance"; only 0.08 or 0.12 ppm, respectively, of nonconforming parts are expected to exceed the upper specification limit. Although the results of the capability analysis for the Height quality characteristic are the best of the three evaluated characteristics, there is still room for improvement, especially by better centring the process on the target mean.

Process Capability Assessment for the Quality Characteristic Weight
The value of the index C_p = 1.74 for the quality characteristic Weight is greater than 1.33. The capability ratio shows that only 57.45% of the specification interval <0.150, 0.180> g is used. For this quality characteristic as well, Figure 8c shows the difference between the actual mean value of 0.163 g and the target mean value of 0.165 g. Based on C_pk = 1.49, which exceeds the target value of 1.33, the process can be evaluated as a "satisfactory" process that requires some control and is suitable for standard products. The different values of C_pkL = 1.49 and C_pkU = 1.99 show that the actual mean value lies to the left of the target value of 0.165 g. Also in this case, the index C_pm = 1.39 exceeded the desired value of 1.33, while the index C_pmk = 1.19 did not reach it, because this index is more sensitive to poor process centring. Figure 8c shows that the difference between the within and overall variability, expressed by the red and black dashed curves, is small but the largest among the three datasets. On the other hand, the number of nonconforming parts whose Weight is less than the lower specification limit is only 4.44 ppm by the expected within performance and 7.35 ppm by the expected overall performance, the smallest values among the three datasets. Moreover, 0.00 ppm of parts have Weights greater than the upper specification limit, as shown in the fifth and eighth columns of Table 7.
The assessment of the crown cap manufacturing process's capability demonstrated that the process can produce products whose selected quality characteristics lie within the specification limits. It also showed that there are differences between the actual process means and the target means. Improvement activities are therefore necessary to centre the process better so that the critical quality criteria, the required mean value and the variability around it, are fulfilled.

Conclusions
This study highlights the importance of a systematic approach to capability assessment, particularly through capability indices, and emphasises the necessity of adhering to a well-defined sequence of steps. The data used in this study pertain to the crown cap manufacturing process, providing a practical context for our findings.
For the correct application and interpretation of the capability indices, it was essential to verify the input assumptions, including process stability, normality and data independence. The contribution of this study is the data analysis, which applies multiple tests based on different approaches to verify the same assumptions. Precisely because not all tests were uniform in their conclusions, it was confirmed that it is essential to pay attention to the choice of the appropriate statistical test given the nature of the data being processed. It is equally important to confront objective test results with appropriate visual tools, i.e., different types of graphs. The present study confirmed that traditional capability indices can also benefit the capability assessment of current manufacturing processes.
Further research could aim at formulating guidelines for selecting appropriate statistical tests to verify the assumptions for applying capability indices, depending on the nature of the quality characteristics being evaluated. Certain groups of production processes have similar representative data. However, for this purpose, a large amount of data representing the quality characteristics of many processes from different production areas must be collected, which can be considered a limitation for further research.
As de-Felipe et al. [83] stated, most companies use univariate process capability indices to evaluate process capability for selected quality characteristics. This procedure can lead to incorrect interpretation because it does not consider the simultaneous influence of the individual characteristics on the properties of the multidimensional process and the resulting products. Multivariate process capability indices can overcome this deficiency and express the capability of a process assessed on several quality characteristics with a single index. Some multivariate indices consider the correlation between quality characteristics, while others do not and are derived from univariate capability indices. Further research could verify the presence of correlation between the quality characteristics investigated and then explore the possibility of using appropriate multivariate capability indices for assessing the capability of the crown cap manufacturing process.
Using capability indices, it is possible to objectively assess how well a process meets the customer's expectations. They make it possible to increase the accuracy and efficiency of production processes and to prevent errors and low-quality production. These are concrete, practical benefits of their use in managing production and non-production processes. Their future use can be seen in their integration with modern Industry 4.0 technologies, whose primary trend is the digitalisation and automation of industrial production.

Figure 1 .
Figure 1.The crown cap quality inspection demonstration.

Yum and Kim [76] compiled a list of approximately 530 journal articles and books on process capability assessment published between 2000 and 2009. The development between 2002 and 2006 was described by Wu et al. in their paper [77].

Figure 4 .
Figure 4. (a) Dot plot for the Diameter dataset. (b) Dot plot for the Height dataset. (c) Dot plot for the Weight dataset. (d) Tukey's box-and-whisker plots for the Diameter dataset. (e) Tukey's box-and-whisker plots for the Height dataset. (f) Tukey's box-and-whisker plots for the Weight dataset.


Figure 5 .
Figure 5. (a) Probability plot for the Diameter dataset.(b) Probability plot for the Height dataset.(c) Probability plot for the Weight dataset.



Using the general rule for the number of lags, k = n/4; in this study, k = 200/4 = 50 lags, as shown in Figure 6.
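The autocorrelation check can be sketched in a few lines of Python; the series below is simulated white noise standing in for the 200 measurements (an assumption), and the band is the usual ±1.96/√n approximation for an uncorrelated series:

```python
import numpy as np

def autocorr(x, nlags):
    """Sample autocorrelation function r_k for lags 0..nlags."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    denom = np.dot(xc, xc)
    return np.array([np.dot(xc[:len(x) - k], xc[k:]) / denom for k in range(nlags + 1)])

# Simulated stand-in for the 200 measured values
rng = np.random.default_rng(3)
series = rng.normal(32.045, 0.0375, size=200)

r = autocorr(series, nlags=50)          # k = n/4 = 200/4 = 50 lags
band = 1.96 / np.sqrt(len(series))      # approximate 95% white-noise band
print(int(np.sum(np.abs(r[1:]) > band)), "of 50 lags outside the band")
```

For an independent series, only a few lags are expected to fall outside the band by chance, which is the visual criterion read off the ACF plots in Figure 6.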

Figure 6 .
Figure 6.Auto-correlation function: (a) for the Diameter dataset; (b) for the Height dataset; (c) for the Weight dataset.
The constants for constructing the control charts depend on the subgroup size. The control limits are shown in Figure 7, which contains control charts created with the statistical package Minitab 15.
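The limits Minitab draws in Figure 7 can be reproduced from the subgroup-size-dependent constants. A sketch assuming subgroups of size n = 8 (25 subgroups × 8 = 200 observations) and hypothetical grand averages:

```python
import math

def xbar_s_limits(xbar_bar, s_bar, n):
    """Shewhart X-bar and S chart limits from the c4 constant for subgroup size n."""
    c4 = math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)
    a3 = 3.0 / (c4 * math.sqrt(n))                           # X-bar chart factor A3
    b3 = max(0.0, 1.0 - 3.0 * math.sqrt(1.0 - c4**2) / c4)   # S chart lower factor B3
    b4 = 1.0 + 3.0 * math.sqrt(1.0 - c4**2) / c4             # S chart upper factor B4
    return {"xbar": (xbar_bar - a3 * s_bar, xbar_bar + a3 * s_bar),
            "s": (b3 * s_bar, b4 * s_bar)}

# Hypothetical grand mean and average subgroup standard deviation for Diameter
limits = xbar_s_limits(xbar_bar=32.045, s_bar=0.0375, n=8)
print(limits)
```

For n = 8 the computed factors agree with the tabulated constants A3 = 1.099, B3 = 0.185 and B4 = 1.815.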


Figure 7 .
Figure 7. Shewhart control charts: (a) for the sample mean of the Diameter dataset; (b) for the sample mean of the Height dataset; (c) for the sample mean of the Weight dataset; (d) for the sample standard deviation of the Diameter dataset; (e) for the sample standard deviation of the Height dataset; (f) for the sample standard deviation of the Weight dataset.


Figure 8 .
Figure 8.The capability histogram overlaid with a normal curve with overall and within standard deviations: (a) for the Diameter dataset; (b) for the Height dataset; (c) for the Weight dataset.

• …the process is unable to comply with the prescribed values;
• If C_pk ≥ 1.25, the process is fit for normal products;
• If C_pk ≥ 1.45, the newly established manufacturing process, or an established manufacturing process for safety-related products, is capable;
• If C_pk ≥ 1.67, the newly established manufacturing process for safety-related products is well capable; nonconforming output may occur, but the chances are excellent that it will be detected;

Table 1 .
Numerical summary of the datasets.

Table 2 .
The results of the normality tests of three datasets (chosen significance level α = 0.05).

Table 3 .
The results of the homogeneity of variance tests for three datasets (chosen significance level α = 0.05).

Table 4 .
The results of the constant means tests for three datasets (chosen significance level α = 0.05).

Table 5 .
Calculated characteristics and specification limits for the three datasets (chosen significance level α = 0.05).

Table 6 contains the calculated indices for the crown cap parameters. The table is vertically divided into two parts; the left part shows the values of the capability indices.

Table 6 .
Summary of the calculated capability indices for three crown cap parameters.

Table 7 .
Summary of the estimate of the incidence of nonconforming parts in ppm (parts per million) for three crown cap quality characteristics.