1. Introduction
Quality control in electronic assembly is both a statistical problem and a cost-management problem. Incoming parts vary across suppliers and lots, assembly operations introduce additional defects, and escaped failures generate exchange, logistics, and reputation-related losses after sale. Inspection is valuable only when the expected loss it avoids is larger than the cost of performing it. This trade-off has been emphasized in cost-of-quality research and in recent studies on inspection-technology investment for manufacturing systems [1,2].
Acceptance sampling provides a statistical basis for deciding whether an incoming lot should be accepted, rejected, or subjected to stricter checking when the true defective rate is unknown. In a standard attributes plan, producer’s risk, consumer’s risk, the operating characteristic curve, the acceptable quality level, and the lot tolerance percent defective jointly describe how the plan protects the supplier and the customer [3,4]. Recent studies have further improved lot disposition rules through adaptive mechanisms, adjustable two-plan systems, process-loss-aware sentencing, variables inspection, and skip-lot sampling [5,6,7,8,9,10,11,12]. Dynamic sampling and Bayesian lot updating provide related ways to revise lot-risk evidence as production data accumulate [5,13]. These methods are useful because they turn limited sample evidence into a lot-level decision. For an assembler, however, that decision does not settle the rest of the quality control problem. A risky lot still raises downstream questions about whether to inspect incoming parts, inspect finished products, absorb exchange losses, or recover parts through disassembly.
Inspection allocation and production control research deals with these economic choices after materials enter production. Mixed-integer models can allocate inspection stations, inspection technologies, and sampling rates across a manufacturing process [14]. Joint production, maintenance, and quality control models examine how inspection interacts with preventive maintenance, buffer allocation, and imperfect detection [15,16,17]. Other cost-based quality control models evaluate whether additional inspection is justified by lower internal and external failure costs, especially under zero-defect or component-level inspection settings [2,18,19]. These models connect quality actions with cost consequences, but many of them treat defect rates as predetermined inputs or focus on only one part of the production process. In batch assembly, this leaves a gap between finite-sample evidence and the cost calculation used to select practical inspection policies.
Disassembly and recovery decisions add another cost dimension to quality control. When a defective product is detected, the manufacturer must compare disposal, replacement, and part recovery costs with the value of recovered components. Studies on disassembly and remanufacturing show that recovery decisions depend on labor, handling, environmental, and economic factors [20,21]. Multi-stage and multi-component quality models further show that the preferred inspection action can differ across components, semi-finished products, and final assemblies because defect risk and inspection cost are not evenly distributed across the process [22]. Existing sampling methods can support lot acceptance or rejection, but they do not fully connect sampling evidence with downstream inspection, disassembly, replacement, and warranty-related cost decisions in electronic assembly. The present study addresses this applied decision gap by evaluating lot evidence, component inspection, finished product inspection, exchange loss, and disassembly cost within one cost-based framework.
The contribution of this study is not the development of a new acceptance sampling theory or a new optimization algorithm. Instead, the study integrates established one-sided sampling, expected-cost evaluation, and multi-stage inspection decision modeling into a tractable decision-support framework for electronic assembly quality control. First, it connects a one-sided sampling design with a binary expected-cost model for a two-component assembly process. Second, it carries that inspection logic into a multi-stage, multi-component setting, allowing stage interactions to be evaluated under the same cost structure. Third, it uses confidence-interval correction and electrical-test data to test how far policy recommendations change when defect rates are estimated from finite samples.
In this study, the sampling rule is used as a practical lot-risk screening step rather than as a complete industrial acceptance sampling standard. The main modeling focus is the downstream decision problem, where sampling-based defect estimates affect component inspection, finished-product inspection, exchange loss, and disassembly decisions. This positioning resolves the difference between the simple one-sided normal approximation used for sample-size planning and the broader cost-based inspection problem addressed in the paper.
The rest of the paper is arranged as follows. Section 2 describes the problem setting, mathematical formulation, and empirical data processing procedure. Section 3 presents the baseline and calibrated results. Section 4 discusses the numerical findings, their relation to prior work, and the main limitations. Section 5 closes with the conclusions.
2. Materials and Methods
2.1. Problem Setting
The baseline case is an enterprise that buys two components and assembles them into an electronic product. A defective component makes the final product defective, and assembly can also introduce failures even when both components are acceptable. Defective finished products can either be discarded or disassembled to recover components, with disassembly incurring an additional handling cost. Four decisions are linked in the model. Baseline scenario parameters are taken from the 2024 Contemporary Undergraduate Mathematical Contest in Modeling and used as standardized case settings [23]. The electrical-test records discussed later are used only for empirical calibration of product-level defect estimates and lot-to-lot variation.
First, component acceptance is treated as a statistical decision. The supplier claims that the defective rate of each component does not exceed a nominal value, here taken as 10%. The manufacturer uses sampling inspection to check this claim and seeks the smallest sample size that still meets the required confidence level. At the 95% confidence level, a lot is rejected when the sample indicates that the defective rate exceeds the nominal value. At the 90% confidence level, a lot is accepted when the sample supports the claim that the nominal rate has not been exceeded.
Second, production decisions are optimized once component and finished product defect rates are specified. The enterprise must decide whether to inspect each component, whether to inspect finished products, whether to disassemble defective units detected during inspection, and how to treat defective products returned by customers. Uninspected components go directly to assembly; components found defective are discarded. Uninspected finished products enter the market directly, whereas inspected finished products are screened before shipment. Customer returns are replaced unconditionally, so exchange loss includes logistics and reputation-related cost. The six single-process scenarios used in this study are listed in Table 1.
Third, the same logic is extended to a multi-stage, multi-component setting. The illustrative process contains several stages and eight input components. Each stage may output a semi-finished product or the final product, each with its own defect rate and cost parameters. The enterprise therefore faces inspection and handling decisions at both component and stage level. Because the number of binary policy variables grows rapidly with process size, exhaustive enumeration is no longer a useful primary description, and a general optimization formulation is preferable [14,15,17]. The component data are listed in Table 2, and the semi-finished and final-product parameters are reported in Table 3.
Fourth, sampling uncertainty is brought back into the optimization model. In production, defect rates are estimated from finite samples rather than known exactly. The single-process and multi-stage problems are therefore re-solved with sampled estimates and confidence intervals to test whether the preferred inspection policy remains stable under plausible parameter variation.
2.2. Model Assumptions and Scope
The baseline formulation assumes independent component defects, stable scenario-level defective rates, and perfect inspection. Supplier batch effects, process drift, correlated defects, inspection misclassification, and repeated rework cycles are not explicitly modeled. Misclassification costs associated with false acceptance or false rejection are therefore not optimized in the baseline objective. These assumptions make the decision model tractable, but they also define the scope of the results.
In the present framework, disassembly is modeled as a cost-related decision that affects the expected downstream cost after a defective product is identified. The model does not attempt to represent a full remanufacturing or inventory-return system. Recoverable component value, scrap value, re-inspection after disassembly, quality degradation during reassembly, and inventory return flows are therefore treated as possible extensions rather than as explicit state variables in the baseline model.
2.3. Model Formulation
2.3.1. Sampling Inspection Model and Confidence Interval Analysis
The sampling inspection model starts from the nominal defective rate $p_0$. Let $\hat{p}$ be the observed defective rate in a sample of size $n$. For large samples, the test statistic $Z$ can be approximated by

$$Z = \frac{\hat{p} - p_0}{\sqrt{p_0 (1 - p_0)/n}}.$$

To make the sample-size rule explicit, a minimum detectable deviation $\delta$ from the nominal defective rate is specified. The required sample size for a one-sided test is approximated by

$$n \geq \frac{z_{\alpha}^{2}\, p_0 (1 - p_0)}{\delta^{2}}.$$

If the enterprise also needs to control the detection power against an actionable alternative $p_1 = p_0 + \delta$, the design can be refined to

$$n \geq \frac{\left( z_{\alpha}\sqrt{p_0 (1 - p_0)} + z_{\beta}\sqrt{p_1 (1 - p_1)} \right)^{2}}{(p_1 - p_0)^{2}},$$

where $1 - \beta$ denotes the target test power. This power-adjusted version is useful when the probability of missing an actionable defect increase must be set explicitly. The calculations that follow use the simpler expression to keep the scenario comparisons easy to trace.
Under the 95% confidence rule, a batch is rejected if the test indicates a defective rate above the nominal value. The null hypothesis is $H_0\colon p \leq p_0$, and the alternative is $H_1\colon p > p_0$. With $p_0 = 0.10$, $\delta = 0.03$, and significance level $\alpha = 0.05$, the rejection region is $Z > z_{0.05} = 1.645$ [3]. The calculation gives $n = 271$.

For the 90% criterion, $\alpha = 0.10$, so the corresponding critical value is smaller ($z_{0.10} = 1.282$) and the required sample size decreases to $n = 165$. Therefore, with $p_0 = 10\%$ and a minimum detectable deviation of three percentage points, the enterprise needs to inspect 271 units under the 95% rejection criterion, but only 165 units under the 90% acceptance-oriented criterion. Recent acceptance sampling studies show how related risk targets can be built into adjustable criteria, process-loss-aware sentencing, variables inspection, adaptive resampling, and skip-lot designs [6,7,8,9,10,11,12].
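As a sanity check on these figures, the two sample sizes can be reproduced with a short script. The helpers below are an illustrative sketch of the normal-approximation rules stated above, not code from the study.

```python
from math import ceil, sqrt
from statistics import NormalDist

def one_sided_sample_size(p0: float, delta: float, confidence: float) -> int:
    """Minimum n for a one-sided test of p <= p0 with detectable deviation delta."""
    z = NormalDist().inv_cdf(confidence)  # one-sided critical value z_alpha
    return ceil(z ** 2 * p0 * (1 - p0) / delta ** 2)

def power_adjusted_sample_size(p0: float, p1: float,
                               confidence: float, power: float) -> int:
    """Refined n that also controls detection power against the alternative p1 > p0."""
    za = NormalDist().inv_cdf(confidence)
    zb = NormalDist().inv_cdf(power)
    num = (za * sqrt(p0 * (1 - p0)) + zb * sqrt(p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

print(one_sided_sample_size(0.10, 0.03, 0.95))  # 271 (95% rejection criterion)
print(one_sided_sample_size(0.10, 0.03, 0.90))  # 165 (90% acceptance criterion)
```

The power-adjusted variant returns a larger sample size for the same deviation, which is why the text retains the simpler rule for scenario comparison.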
2.3.2. Sampling-Risk Interpretation and Operating-Characteristic Analysis
The preceding sample-size rule can be interpreted in the language of acceptance sampling. Let the acceptable quality level be $p_{\mathrm{AQL}} = p_0$, and let the lot tolerance percent defective be $p_{\mathrm{LTPD}} = p_0 + \delta$. Producer’s risk is the probability of rejecting a lot when the true defective rate is at the acceptable level. Consumer’s risk is the probability of accepting a lot when the true defective rate has deteriorated to the lot-tolerance level. Test power is the probability of rejecting the lot at that deteriorated level [3,4].
For a count of defectives $X$, the operating characteristic gives the probability that a lot is accepted at any true defective rate $p$. If $c$ is the acceptance threshold implied by the sampling rule, then

$$P_a(p) = \Pr(X \leq c) = \sum_{k=0}^{c} \binom{n}{k} p^{k} (1 - p)^{n - k}.$$

For the 95% one-sided rule used above, the normal threshold implies $c = \lfloor n p_0 + z_{0.05}\sqrt{n p_0 (1 - p_0)} \rfloor = 35$, so the lot is rejected when $X > 35$ for $n = 271$. For the 90% rule, the corresponding threshold is $c = 21$, so the lot is rejected when $X > 21$ for $n = 165$. The value $\delta = 0.03$ is used as a scenario-based engineering threshold. For the benchmark setting $p_0 = 0.10$, it corresponds to an increase from a 10% to a 13% defective rate. This three-percentage-point deterioration is large enough to influence downstream inspection and exchange-loss decisions in the cost model. It is therefore used as an actionable detection margin rather than as a universal industrial tolerance.
The probabilities in Table 4 are exact binomial tail probabilities computed from the integer acceptance thresholds, while the sample sizes and thresholds are derived from the one-sided normal approximation. The table shows that the simple confidence-level rule controls producer’s risk near the nominal rate, but it gives only moderate power at $p_{\mathrm{LTPD}} = p_0 + \delta = 13\%$. A full industrial acceptance sampling plan would set producer’s risk, consumer’s risk, test power, AQL, LTPD, and misclassification costs jointly. In this paper, the sampling rule is therefore used as a screening input for downstream expected-cost optimization, not as a substitute for a complete acceptance sampling standard.
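The operating-characteristic calculation can be sketched in a few lines. The threshold function below derives the integer acceptance number from the one-sided normal rule, and the exact binomial tail then gives the acceptance probability; the parameter values are the benchmark settings used above, and the rounding convention (floor) is an assumption of this sketch.

```python
from math import comb, floor, sqrt
from statistics import NormalDist

def acceptance_threshold(n: int, p0: float, confidence: float) -> int:
    """Integer acceptance number c implied by the one-sided normal rule."""
    z = NormalDist().inv_cdf(confidence)
    return floor(n * p0 + z * sqrt(n * p0 * (1 - p0)))

def oc_probability(n: int, c: int, p: float) -> float:
    """Exact binomial acceptance probability P(X <= c) at true defective rate p."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

n95 = 271
c95 = acceptance_threshold(n95, 0.10, 0.95)
print(c95)                                       # integer acceptance number
print(round(oc_probability(n95, c95, 0.10), 3))  # high acceptance near the nominal rate
print(round(oc_probability(n95, c95, 0.13), 3))  # only moderate rejection power at p0 + delta
```

Evaluating the same function over a grid of defective rates traces the full operating characteristic curve of the plan.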
2.3.3. Single-Process Inspection and Handling Optimization
The single-process model includes the purchase and inspection of components, assembly, inspection of finished products, treatment of detected defects, and handling of customer returns. With the selling price fixed, profit maximization reduces to minimizing expected cost. The cost terms follow the cost-of-quality view: inspection costs are weighed against failure losses, and disassembly is represented as a decision rather than treated as an after-the-fact shop-floor response [1,2,20]. The five binary choices create $2^5 = 32$ candidate plans. This small case can be enumerated, but the 0–1 formulation is kept because the same structure applies to larger instances where the policy count grows combinatorially [14,15,16].
Let $x_1$ and $x_2$ denote whether Component 1 and Component 2 are inspected, respectively. Let $y$ denote whether finished products are inspected, $z$ denote whether detected defective finished products are disassembled, and $r$ denote whether returned defective products are disassembled. A value of 1 means that the corresponding action is taken. The total cost to be minimized is

$$\min_{x_1, x_2, y, z, r} C = N \Big[ c_1^{\mathrm{buy}} + c_2^{\mathrm{buy}} + c^{\mathrm{asm}} + x_1 c_1^{\mathrm{ins}} + x_2 c_2^{\mathrm{ins}} + y\, c^{\mathrm{test}} + y\, p_F\, z \left( c^{\mathrm{dis}} - v \right) + (1 - y)\, p_F \left( L + r \left( c^{\mathrm{dis}} - v \right) \right) \Big],$$

where $N$ is the production quantity, $c_i^{\mathrm{buy}}$ and $c_i^{\mathrm{ins}}$ are the purchase and inspection costs of component $i$, $c^{\mathrm{asm}}$ is the assembly cost, $c^{\mathrm{test}}$ is the finished-product inspection cost, $c^{\mathrm{dis}}$ is the disassembly cost, $v$ is the expected value of the recovered components, and $L$ is the exchange loss per defective product that reaches the customer. The defect propagation and expected cost terms are written as

$$\tilde p_i = (1 - x_i)\, p_i, \qquad p_F = 1 - (1 - \tilde p_1)(1 - \tilde p_2)(1 - p_f),$$

subject to

$$x_1, x_2, y, z, r \in \{0, 1\},$$

where $p_i$ is the defect rate of component $i$ and $p_f$ is the assembly-induced defect rate. To expose the economic threshold behind each binary decision, define the downstream penalty generated by one defective finished product under a given inspection–handling policy as

$$\phi(y, z, r) = y\, z \left( c^{\mathrm{dis}} - v \right) + (1 - y) \left[ L + r \left( c^{\mathrm{dis}} - v \right) \right].$$

The objective can then be written more compactly as

$$\min C = N \left[ C_0 + \sum_{i=1}^{2} x_i c_i^{\mathrm{ins}} + y\, c^{\mathrm{test}} + p_F(x_1, x_2)\, \phi(y, z, r) \right],$$

with $C_0$ collecting the purchase and assembly costs that do not depend on the policy. For the two-component case, let $\bar p_{-1} = (1 - \tilde p_2)(1 - p_f)$ and $\bar p_{-2} = (1 - \tilde p_1)(1 - p_f)$ denote the pass probabilities of the remainder of the process. The discrete marginal effect of inspecting component $i$ is

$$\Delta_i C = N \left[ c_i^{\mathrm{ins}} - p_i\, \bar p_{-i}\, \phi(y, z, r) \right].$$

Component $i$ should therefore be inspected whenever

$$c_i^{\mathrm{ins}} < p_i\, \bar p_{-i}\, \phi(y, z, r).$$

The marginal effect of finished product inspection is

$$\Delta_y C = N \left[ c^{\mathrm{test}} + p_F \left( \phi(1, z, r) - \phi(0, z, r) \right) \right].$$

Finished product inspection is preferred when

$$c^{\mathrm{test}} < p_F \left[ L + (r - z) \left( c^{\mathrm{dis}} - v \right) \right].$$
These threshold inequalities give the model a clear economic interpretation. A switch from component-only inspection to joint component–product inspection occurs when the expected exchange loss avoided by finished product testing exceeds the added testing burden. In this sense, the model does more than return a binary vector; it also shows why a local inspection action is or is not justified by downstream loss exposure.
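The threshold logic lends itself to direct enumeration. The sketch below compares all 32 policies for a two-component process using the downstream-penalty idea described above; every cost value is invented for illustration and does not correspond to the Table 1 scenario parameters.

```python
from itertools import product

# Illustrative parameters only -- not the scenario values of Table 1.
p = [0.10, 0.10]          # component defect rates p1, p2
pf = 0.10                 # assembly-induced defect rate
c_ins = [2.0, 3.0]        # component inspection costs
c_test = 3.0              # finished-product inspection cost
c_dis, v = 5.0, 4.0       # disassembly cost and recovered-component value
L = 30.0                  # exchange loss per defective unit shipped

def expected_cost(x1, x2, y, z, r):
    """Per-unit expected policy cost (policy-independent purchase/assembly terms omitted)."""
    pt = [(1 - x1) * p[0], (1 - x2) * p[1]]          # residual component defect rates
    pF = 1 - (1 - pt[0]) * (1 - pt[1]) * (1 - pf)    # finished-product defect rate
    phi = y * z * (c_dis - v) + (1 - y) * (L + r * (c_dis - v))
    return x1 * c_ins[0] + x2 * c_ins[1] + y * c_test + pF * phi

policies = list(product([0, 1], repeat=5))           # all 2^5 = 32 candidate plans
best = min(policies, key=lambda w: expected_cost(*w))
print(best, round(expected_cost(*best), 3))
```

With these invented numbers, the enumeration reproduces the qualitative behavior discussed in the text: an action is selected only when the downstream loss it avoids exceeds its own cost.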
With the production scale fixed, the model is solved in Python 3.11 (Python Software Foundation, Wilmington, DE, USA) using Pyomo 6.7 (Sandia National Laboratories, Albuquerque, NM, USA) with CBC 2.10 branch-and-bound (COIN-OR Foundation, Birmingham, AL, USA); OR-Tools CP-SAT 9.8 (Google LLC, Mountain View, CA, USA) is used as a consistency check. For Scenarios 1, 2, 5, and 6, the optimal decision vector is $(x_1, x_2, y, z, r) = (0, 1, 0, 0, 0)$, meaning that only Component 2 is inspected. The corresponding minimum costs are 32,600, 33,200, 31,000, and 32,500 yuan. For Scenarios 3 and 4, the optimal vector becomes $(0, 1, 1, 0, 0)$, so finished product inspection is added when exchange loss is sufficiently large. The corresponding minimum costs are 34,700 and 31,600 yuan. Enumeration of all 32 candidate plans is used only as a small-instance check; the binary formulation is retained because it extends directly to the larger multi-stage problem.
2.3.4. Inspection and Disassembly Optimization in a Multi-Stage, Multi-Component Environment
The multi-stage extension adds intermediate processing stages to the assembly system. Each stage receives one or more components and produces a semi-finished product that may itself be inspected before it moves downstream. Recent studies have shown that stage interaction and inspection imperfection can materially affect the preferred process configuration [16,17,22]. Here, the single-process inspection logic is extended to a stage-coupled cost model so that component inspection, stage inspection, disassembly, and final-product loss can be evaluated within one objective function.
In the multi-stage setting, defect risk propagates from components to semi-finished products and then to the final product. If a component or semi-finished product is not inspected, its defective probability is carried into the downstream assembly stage. If inspection is selected, detected defective units are removed from the production flow, which reduces the effective defect risk entering the next stage under the baseline perfect-inspection assumption. Inspection therefore does not repair defective units; it changes the risk composition of the units that continue downstream. Disassembly is considered when a defective downstream unit is identified, and it is evaluated as an economic handling option.
Let $x_i$ denote whether component $i$ is inspected before it enters its assigned stage, let $y_j$ denote whether the output of stage $j$ is inspected, and let $z_j$ denote whether a detected defective output at stage $j$ is disassembled. A value of 1 means that the corresponding action is taken. For stage $j$, let $S_j$ be the set of parts entering the stage, let $q_j$ be the process-induced defect probability, and let $a_j$, $b_j$, and $d_j$ be the assembly, inspection, and disassembly costs.
The residual defect rate of component $i$ after optional incoming inspection is

$$\tilde p_i = (1 - x_i)\, p_i,$$

and the defect probability created at stage $j$ is

$$P_j = 1 - (1 - q_j) \prod_{i \in S_j} (1 - \tilde p_i) \prod_{k \in U_j} Q_k,$$

where $U_j$ is the set of upstream stages feeding stage $j$. The quantity $P_j$ is the pre-inspection defect probability at stage $j$. For intermediate stages, the effective defect probability passed to the next stage is

$$\tilde P_j = (1 - y_j)\, P_j.$$

Introducing the qualified-yield recursion,

$$Q_j = 1 - \tilde P_j = 1 - (1 - y_j)\, P_j,$$

where $Q_j$ is the probability that the stage-$j$ output is qualified after any selected screening at that stage. This recursion makes the role of early-stage inspection explicit: lowering $\tilde p_i$ or setting $y_j = 1$ improves the qualified yield carried into every downstream stage. The final product defect probability $P_m$ is evaluated before final product screening, while the exchange loss term below is multiplied by $(1 - y_m)$ because inspected defective final products are not shipped.
A compact expected cost representation is

$$\min C = N \left[ \sum_{i} \left( c_i^{\mathrm{buy}} + x_i c_i^{\mathrm{ins}} \right) + \sum_{j} \left( a_j + y_j b_j + y_j P_j z_j \left( d_j - v_j \right) \right) + (1 - y_m)\, P_m\, L \right],$$

subject to $x_i, y_j, z_j \in \{0, 1\}$, where $v_j$ denotes the expected value recovered by disassembling one defective stage-$j$ output and $L$ is the exchange loss per defective final product shipped. The general formulation contains $n + 2m$ binary decisions for $n$ components and $m$ stages, so exhaustive enumeration would require evaluating $2^{n + 2m}$ candidate policies. That count is acceptable only for very small instances, which is why the integer-programming representation is retained as the scalable description of the problem.
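The qualified-yield recursion can be sketched numerically. The two-stage structure and parameter values below are hypothetical, chosen only to show how yield propagates downstream; they are not the Table 2 and Table 3 settings.

```python
# Hypothetical two-stage structure: stage 1 assembles components 1-2,
# stage 2 assembles component 3 with the stage-1 output. Values are illustrative.
p = {1: 0.10, 2: 0.10, 3: 0.10}   # component defect rates
q = {1: 0.10, 2: 0.10}            # process-induced defect probabilities q_j
parts = {1: [1, 2], 2: [3]}       # component sets S_j
upstream = {1: [], 2: [1]}        # upstream stage sets U_j

def stage_quality(x, y):
    """Return pre-inspection defect probabilities P_j and qualified yields Q_j."""
    P, Q = {}, {}
    for j in sorted(parts):
        pass_prob = 1 - q[j]
        for i in parts[j]:
            pass_prob *= 1 - (1 - x[i]) * p[i]   # residual component defect rates
        for k in upstream[j]:
            pass_prob *= Q[k]                    # qualified yield carried downstream
        P[j] = 1 - pass_prob
        Q[j] = 1 - (1 - y[j]) * P[j]             # screening removes detected defectives
    return P, Q

P, Q = stage_quality(x={1: 0, 2: 0, 3: 0}, y={1: 0, 2: 0})
print({j: round(P[j], 4) for j in P})
```

Setting any $y_j$ to 1 drives that stage's qualified yield to one under the baseline perfect-inspection assumption, which is exactly the mechanism the recursion is meant to expose.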
With the production quantity fixed, the optimal strategy for the two-stage, eight-component setting is to forgo component inspection, semi-finished-product inspection, final product inspection, and disassembly. The resulting minimum cost is 67,600 yuan.
2.3.5. Sampling-Based Parameter Correction and Stability Screening
Let $D_i$, $D_j$, and $D_F$ denote the numbers of defective components, semi-finished products, and finished products observed in sampling, and let $n_i$, $n_j$, and $n_F$ denote the corresponding sample sizes. The defective rate estimators are

$$\hat p_i = \frac{D_i}{n_i}, \qquad \hat p_j = \frac{D_j}{n_j}, \qquad \hat p_F = \frac{D_F}{n_F}.$$
The approximate 95% confidence interval for a defective rate is

$$\hat p \pm z_{0.025} \sqrt{\frac{\hat p (1 - \hat p)}{n}}, \qquad z_{0.025} = 1.96.$$

This Wald-type interval is used as a transparent correction layer rather than as a claim of a new estimator. Its role is to prevent sampled defect rates from being interpreted with false precision when the inspection policy depends on them. Recent work on binomial-proportion inference has shown that local coverage behavior can be improved by examining interval properties more carefully [24].
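The interval can be computed with a small helper; this is a sketch of the standard Wald formula, not the study's code.

```python
from math import sqrt
from statistics import NormalDist

def wald_interval(p_hat: float, n: int, confidence: float = 0.95):
    """Wald interval p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n), clipped to [0, 1]."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

lo, hi = wald_interval(0.10, 1000)
print(round(lo, 4), round(hi, 4))  # -> 0.0814 0.1186
```

For example, a 10% estimate from 1000 samples yields roughly an 8.1% to 11.9% band, which is the kind of width that separates routine lots from those warranting closer inspection.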
To transfer interval uncertainty into the optimization model, define the uncertainty set

$$\mathcal{U} = \prod_{k} \left[ \hat p_k - z_{0.025}\sqrt{\frac{\hat p_k (1 - \hat p_k)}{n_k}},\; \hat p_k + z_{0.025}\sqrt{\frac{\hat p_k (1 - \hat p_k)}{n_k}} \right],$$

where each coordinate corresponds to the confidence interval of one component, semi-finished product, or finished-product defect rate. A conservative policy can then be obtained from the screening problem

$$\min_{\mathbf{x}, \mathbf{y}, \mathbf{z}} \; \max_{\mathbf{p} \in \mathcal{U}} \; C(\mathbf{x}, \mathbf{y}, \mathbf{z}; \mathbf{p}).$$

Because the total cost function is nondecreasing in each defect rate, the inner maximizer is attained at the upper confidence bound of each coordinate, so the conservative screening problem reduces to $\min_{\mathbf{x}, \mathbf{y}, \mathbf{z}} C(\mathbf{x}, \mathbf{y}, \mathbf{z}; \mathbf{p}^{\mathrm{ub}})$. In the numerical analysis, this is complemented by a three-point sweep over the lower bound, midpoint, and upper bound so that the structural stability of the preferred policy can be observed directly.
For lot-by-lot robust updating, a Beta-Binomial posterior is also used, consistent with recent Bayesian acceptance sampling formulations [13]:

$$p \mid D, n \sim \mathrm{Beta}\!\left( \alpha_0 + D,\; \beta_0 + n - D \right),$$

with posterior mean

$$\mathbb{E}\left[ p \mid D, n \right] = \frac{\alpha_0 + D}{\alpha_0 + \beta_0 + n}.$$
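A minimal sketch of the conjugate update, assuming a Beta(a0, b0) prior on the lot defective rate:

```python
def beta_posterior_mean(defects: int, n: int, a0: float = 1.0, b0: float = 1.0) -> float:
    """Posterior mean of the defective rate under a Beta(a0, b0) prior
    after observing `defects` defectives in `n` sampled units."""
    return (a0 + defects) / (a0 + b0 + n)

# A uniform Beta(1, 1) prior updated with 10 defectives in 100 samples:
print(round(beta_posterior_mean(10, 100), 4))  # 11 / 102
```

As lots accumulate, the posterior from one lot can serve as the prior for the next, which is what makes the update suitable for lot-by-lot revision of risk evidence.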
Table 5 reports the corrected defect rates, minimum costs, and optimal decisions for the six single-process scenarios. In five cases, interval-based correction leaves the preferred decision unchanged. Scenario 4 is the only case in which the policy switches: after the rates are adjusted to 18%, 20%, and 22%, finished product inspection is no longer selected and the optimum becomes $(x_1, x_2, y, z, r) = (0, 1, 0, 0, 0)$. Across all six scenarios, Component 2 inspection is retained, Component 1 inspection and both disassembly decisions remain unselected, and the corrected costs range from 31,100 to 34,670 yuan.
For the multi-stage setting, the defect rates of Components 1 to 8, semi-finished Products 1 to 3, and the final product are all initialized at 10%. With sample size 1000, the corresponding 95% confidence interval is $[8.14\%, 11.86\%]$. Using the midpoint as the corrected input leaves the optimal policy unchanged: no component inspection, no semi-finished product inspection, no final product inspection, and no disassembly, with minimum cost 67,600 yuan.
2.3.6. Empirical Electrical Test Data Processing
Empirical calibration is based on the file Electrical_Test_Report.csv. The file is encoded in UTF-16 and contains repeated header rows embedded in the data body. These rows are removed by filtering records whose first field equals the literal header value “Time”. After cleaning, the dataset contains 80,000 test records from 866 production lots spanning 4 October 2019 to 20 December 2019. Before analysis, the status variables and the retained measurement variables are converted to numeric form. Data preprocessing, estimation, and optimization are implemented in Python 3.11 with pandas 2.2, numpy 1.26, scipy.stats 1.12, pyomo 6.7, and ortools 9.8.
Product quality was read from the Result field. Records whose Result value indicated a pass were treated as qualified, while the remaining fail-type records were treated as defective. The empirical defective rate is therefore

$$\hat p_{\mathrm{emp}} = \frac{D}{N_r},$$

where $N_r$ is the number of cleaned records and $D$ is the number of defective final products. For lot $\ell$, the lot-level defective rate is

$$\hat p_\ell = \frac{D_\ell}{n_\ell},$$

where $n_\ell$ and $D_\ell$ denote the record count and defective count of lot $\ell$.
Lot summaries are computed for both the full set of lots and the subset with at least 100 observations. This treatment reduces the influence of very small lots, for which defect-rate estimates are typically unstable, and provides a more reliable view of recurring production risk. The continuous electrical variables retained in the dataset are not analyzed further in the present study and are reserved for future work on process diagnosis.
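The cleaning and lot-summary steps can be sketched with pandas. The column names Time, Lot, and Result and the synthetic records below are assumptions for illustration; reading the real file would additionally require `encoding="utf-16"` and its actual column names.

```python
import io
import pandas as pd

# Synthetic stand-in for Electrical_Test_Report.csv; real column names may differ.
raw = io.StringIO(
    "Time,Lot,Result\n"
    "t1,A,Pass\n"
    "t2,A,Fail\n"
    "Time,Lot,Result\n"        # repeated header row embedded in the data body
    "t3,B,Pass\n"
    "t4,B,Pass\n"
)
df = pd.read_csv(raw)
df = df[df["Time"] != "Time"]               # drop embedded header repeats
df["defective"] = (df["Result"] != "Pass").astype(int)

overall_rate = df["defective"].mean()       # empirical defective rate
lot_rates = df.groupby("Lot")["defective"].agg(["size", "mean"])
stable = lot_rates[lot_rates["size"] >= 2]  # keep only sufficiently large lots
print(round(overall_rate, 4), len(stable))
```

In the study the stability cutoff is 100 observations per lot; the threshold of 2 here is only scaled down to fit the toy data.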
3. Results
The sampling analysis first determines the inspection effort required before the production policy is optimized. For a nominal defective rate of 10% and a minimum detectable deviation of three percentage points, the 95% one-sided criterion requires 271 samples, whereas the 90% criterion requires 165 samples. This gap is operationally meaningful because it directly affects labor demand and test capacity before costs related to purchasing, assembly, exchange loss, or disassembly are considered.
Figure 1 shows how the required sample size changes with both the confidence level and the minimum detectable deviation. For a fixed deviation $\delta$, the sample requirement increases nonlinearly as confidence rises, which means that a limited increase in assurance can still generate a substantial increase in inspection workload. For a fixed confidence level, the curve for the smaller deviation stays above that for the larger one, indicating that smaller deviations from the nominal defective rate are more difficult to identify and therefore require more samples. In this sense, the sampling equation can be interpreted not only as a statistical design rule, but also as a practical planning tool for labor allocation, testing capacity, and cycle-time control.
The electrical test data place these decision rules in a production context. After removing 15 repeated header rows, the cleaned dataset contains 80,000 records from 866 lots. Among them, 78,932 records are classified as pass and 1068 as fail. The empirical observed defective rate is therefore 1.335%, or about 1.3%, with a 95% confidence interval of $[1.26\%, 1.41\%]$. This observed rate is substantially lower than the nominal 10% adopted in the baseline scenarios. It should not replace the stylized inputs one-for-one, but it is a useful reference for deciding when broad inspection is likely to pay for itself.
Table 6 summarizes the cleaned dataset.
Lot-level estimates show why a single average defect rate is not sufficient for operational use. Across all 866 lots, 510 have no observed final product defect and the median lot-level defective rate is 0%. The distribution is right-skewed because most lots have low or zero observed defects, while a smaller upper-tail group has much higher rates. In the full set of lots, the 90th percentile is 4.38% and the 95th percentile reaches 6.67%. Because very small lots can produce extreme rates, a second analysis is carried out for lots with at least 100 observations. In that subset of 202 lots, the median rate is 0.83%, the 95th percentile is 5.36%, and the maximum observed rate is 8.77%. Twelve stable lots exceed 5%, although none exceed 10%.
Table 7 therefore indicates a low–average process with a visible upper tail of batch-specific risk. These upper-tail lots provide the main empirical basis for tighter inspection, rather than a uniform increase in inspection across all lots.
Figure 2 turns the lot distribution into a decision chart by placing central tendency and risk thresholds on the same scale. The mean is low, but the upper tail reaches a warning band near 5%. A process average can therefore hide lot-level escalation risk. A single inspection intensity is unlikely to fit all lots: routine lots can be handled with lighter checks, while upper-tail lots may warrant expanded sampling or added downstream product inspection.
Figure 3 summarizes the calibration. Panel A places the nominal 10% scenario rate next to the empirical mean and the upper tail of the stable lot distribution. Panel B displays the stable lot distribution and marks the empirical mean and a 5% risk threshold. The contrast makes clear how far the stylized baseline rates can stand from the observed production data.
For the single-process model, all 32 decision vectors generated by the 5 binary variables are evaluated under each scenario. This full enumeration confirms that the preferred action is selective rather than broad. In Scenarios 1, 2, 5, and 6, the lowest-cost policy is to inspect Component 2 only; Component 1, finished products, and both disassembly actions are left unused. The corresponding minimum costs are 32,600, 33,200, 31,000, and 32,500 yuan. In Scenarios 3 and 4, the exchange loss is higher, and finished product inspection becomes economical. The optimum then becomes $(x_1, x_2, y, z, r) = (0, 1, 1, 0, 0)$, with minimum costs of 34,700 and 31,600 yuan. The policy change is small in combinatorial terms but economically meaningful because finished product inspection is justified only when the expected downstream loss avoided is large enough to cover the extra testing cost.
Figure 4 summarizes the optimal binary decisions across the six scenarios. In most low-cost settings, neither extensive inspection nor disassembly is selected. Inspection is introduced only when it can reduce expected downstream losses enough to offset its own cost. The repeated selection of Component 2 inspection suggests that the economic benefit of inspection is closely tied to where it is placed within the assembly process.
Figure 5 presents the corresponding cost composition in yuan. Under moderate-risk conditions, the sampling-based correction mainly redistributes cost across categories and, in most cases, does not alter the optimal policy. When exchange loss becomes large, however, the cost associated with downstream failure rises rapidly, making additional inspection economically justified. The results show that policy adjustments occur when the avoidable downstream loss becomes greater than the extra inspection cost required to control it.
For the extended two-stage, eight-component model, the baseline optimum is obtained without inspection or disassembly, with a minimum total cost of 67,600 yuan. The empirical calibration supports the same conclusion at the average process level. With an average observed defective rate of about 1.3%, broad inspection is difficult to justify under the baseline assumptions unless exchange loss, safety requirements, or customer penalties increase substantially. At the same time, the right-skewed lot distribution suggests that a uniform no-inspection policy may be too rigid. A batch-sensitive strategy, in which inspection is strengthened only for higher-risk upper-tail lots, would better reflect the observed variation in lot quality.
Figure 6 illustrates this result from the perspective of process structure. Defect risk propagates from components to semi-finished products and then to the final product, whereas the value of inspection is concentrated at only a limited number of decision points. This pattern helps explain why selective intervention is often more economical than intensive inspection throughout the entire process. Once the nodes with the greatest impact on downstream quality are identified, extending inspection to low-impact branches contributes little additional risk reduction relative to its cost.
After the interval-based correction was introduced, the single-process and multi-stage models were solved once more. For nominal defect rates of 10%, 20%, and 5% at sample size 1000, the corresponding 95% confidence intervals are approximately $[8.1\%, 11.9\%]$, $[17.5\%, 22.5\%]$, and $[3.7\%, 6.4\%]$, respectively. As shown in Table 5, the main qualitative trend remains unchanged. Component 2 continues to be the inspection point selected most often, whereas inspecting Component 1 and choosing disassembly are still not cost-effective under the corrected ranges. The main adjustment appears in Scenario 4, where finished product inspection is no longer selected after the correction is applied. From a practical viewpoint, these intervals are useful because they separate routine lots from those that may require closer inspection.
4. Discussion
The scenario results should be interpreted as conditional decisions under specific cost and quality settings, rather than as a general argument against inspection. Finished product inspection becomes worthwhile only when the reduction in exchange loss is large enough to offset the additional inspection expense. Component 2 is selected more frequently because, under the parameter settings considered here, its inspection cost is relatively low while its influence on downstream failure risk is comparatively large. Disassembly does not appear in the baseline solutions because the associated handling cost is not sufficiently compensated by the recoverable component value. For manufacturing decisions, these cost thresholds are more informative than the binary decision vectors alone.
The empirical calibration also changes how the baseline scenarios should be read. In the electrical test dataset, the observed defective rate of the final product is 1.335%, and most lots contain no detected defect at all. This means that a uniform inspection policy derived only from the nominal 10% scenario would overestimate the average production risk for the dataset examined here. At the same time, the lot distribution is right-skewed, and the upper tail among otherwise stable lots still reaches levels at which additional inspection may be justified. In practice, this suggests a batch-sensitive policy, with lighter inspection for routine lots and tighter control when sampled defect rates move toward the upper tail.
The results can also be interpreted through the Pareto–Lorenz rule. Although the present study does not perform a formal Pareto decomposition, the numerical results show a similar concentration pattern. A limited number of factors account for most of the change in the optimal policy. In the single-process scenarios, the inspection of Component 2 and the finished product inspection decision under high exchange loss settings are the main drivers of the cost reduction. In the empirical data, most lots have very low or zero observed defects, whereas a small upper-tail group contributes disproportionately to operational risk. This pattern is consistent with the Pareto–Lorenz view that quality control effort should be concentrated on the few lots, components, or cost drivers that create the largest share of expected loss, rather than distributed uniformly across all inspection points.
The proposed framework combines three related parts, each serving a different purpose. The sampling part provides lot-level evidence. The optimization part determines whether inspection, acceptance of exchange loss, or disassembly is economically justified under that evidence. The multi-stage extension then examines whether the same decision logic remains valid when interactions between stages are taken into account. Empirical calibration adds a final step by comparing the nominal scenario settings with observed production data. Taken together, these parts offer a more direct basis for shop-floor decision-making than either sampling rules alone or deterministic cost optimization applied in isolation.
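The optimization part described above can be sketched as a minimal expected-cost comparison over candidate inspection policies. All parameter values, detection rates, and function names below are illustrative assumptions for exposition, not the calibrated model from the study:

```python
# Minimal sketch of cost-based action selection: given a lot-level
# defective-rate estimate from sampling, enumerate inspection policies
# and pick the one with the lowest expected per-unit cost.
# All numeric defaults are illustrative assumptions.

def expected_cost(p_defect, inspect_component, inspect_product,
                  c_comp=0.5, c_prod=2.0, loss_exchange=40.0,
                  catch_comp=0.6, catch_prod=0.95):
    """Expected per-unit cost of one inspection policy.

    p_defect   : lot defective rate estimated from sampling
    catch_comp : fraction of defects removed by component inspection
    catch_prod : fraction of remaining defects caught at product test
    """
    cost = 0.0
    escape = p_defect
    if inspect_component:
        cost += c_comp
        escape *= (1.0 - catch_comp)
    if inspect_product:
        cost += c_prod
        escape *= (1.0 - catch_prod)
    # Defects that escape all checks incur the downstream exchange loss.
    return cost + escape * loss_exchange

def best_policy(p_defect):
    """Enumerate the four inspect/skip combinations, return the cheapest."""
    policies = [(ci, cp) for ci in (False, True) for cp in (False, True)]
    return min(policies, key=lambda pol: expected_cost(p_defect, *pol))

# At a low defective rate, skipping inspection can be cheapest;
# at a high rate, component inspection becomes worthwhile.
print(best_policy(0.013), best_policy(0.10))
```

Under these illustrative parameters, a lot near the empirically observed ~1.3% rate selects no inspection, while a lot at the nominal 10% rate selects component inspection, mirroring the batch-sensitive logic discussed above.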
Compared with the existing literature, the contribution of this study is more focused in scope. Recent acceptance sampling studies have mainly aimed to refine lot disposition rules [5,9,11]. Research on inspection allocation and manufacturing control has concentrated on optimizing different segments of the production system [14,15,16,17]. Other studies have examined broader production interactions, including stage-coupled inspection and disassembly-oriented settings [20,21,22]. In contrast, the present work focuses on linking sampling evidence to downstream action selection within a single cost-based decision framework for electronic assembly.
Several limitations should also be noted. First, the baseline model assumes independent defects, stable defective rates within each scenario, and perfect inspection. It does not explicitly model correlated defects, supplier batch effects, process drift, inspection misclassification, or repeated rework cycles. Second, disassembly is represented as an economic handling decision, not as a full remanufacturing system. Recoverable value, scrap value, re-inspection after disassembly, quality degradation during reassembly, and inventory return flows are therefore outside the baseline formulation. Third, the empirical calibration treats the final Result code as the product-level outcome, without explicitly modeling how intermediate electrical measurements propagate into final failure. Future studies may combine the present framework with Bayesian updating or sequential control-chart methods [13], and may also examine larger assembly systems that require decomposition strategies, heuristic solution methods, or rolling horizon control.
5. Conclusions
This study should be interpreted as an integrated decision support framework rather than as a new acceptance sampling theory or a new optimization algorithm. The one-sided sampling rule provides lot-risk evidence under finite samples, and the expected cost model translates that evidence into decisions on component inspection, finished product inspection, exchange loss exposure, and disassembly. Across the reported single-process scenarios, inspection is chosen only when the downstream loss it can prevent exceeds the added inspection cost.
The reported results support different policies under low and high exchange loss settings. In Scenarios 1, 2, 5, and 6, the lowest-cost policy is to inspect Component 2 only. In Scenarios 3 and 4, where exchange loss is higher, finished product inspection is added under the original parameter setting. After interval-based correction, Component 2 remains the most stable inspection point, while the corrected Scenario 4 result reverts to component-only inspection. These findings indicate that product inspection should be added only when the expected reduction in exchange loss is large enough to justify its cost.
The empirical calibration further shows that nominal scenario rates should not be applied mechanically. The electrical test data indicate an observed defective rate of about 1.3%, which is much lower than the 10% benchmark used in the stylized scenarios. Under such average conditions, routine blanket inspection of finished products is unlikely to be economical. Even so, the right-skewed lot distribution shows that tighter inspection may still be justified when a sampled lot falls into the upper tail. The practical implication is that inspection effort should be targeted toward higher-risk lots, components, and cost drivers rather than applied uniformly across the whole process.
The findings remain conditional on the baseline assumptions of independent defects, stable defective rates, perfect inspection, and simplified disassembly. These assumptions limit direct industrial application when supplier batch effects, process drift, inspection errors, correlated defects, repeated rework cycles, recoverable component value, scrap value, re-inspection after disassembly, quality degradation during reassembly, or inventory return flows are important. Future work should add these factors and compare the resulting policies with dynamic sampling, Bayesian lot updating, and larger inspection allocation models.