Article
Peer-Review Record

Computer Vision-Enabled Construction Waste Sorting: A Sensitivity Analysis

Appl. Sci. 2025, 15(19), 10550; https://doi.org/10.3390/app151910550
by Xinru Liu, Zeinab Farshadfar and Siavash H. Khajavi *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4:
Submission received: 26 August 2025 / Revised: 25 September 2025 / Accepted: 26 September 2025 / Published: 29 September 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors
  • Although the source of the data is indicated, the specific data collection methods and sample sizes are not clearly described. It is recommended to provide more detailed information about how the original data were collected.

 

  • The throughput of the CS and CVAS methods differs significantly. Is it reasonable to apply a direct scaling factor of 2.3 in the actual calculations without further justification or sensitivity testing?

 

  • While maintenance costs are monetized and compared, the analysis does not account for the impact of machine failure rates or system downtime on overall cost. These factors could significantly affect the economic viability of each method.

 

  • In Section 3.1.1, the relationship between workforce size and throughput is assumed to be linear, and parameter values are adjusted proportionally. Is this assumption valid? Could there be nonlinear effects or economies/diseconomies of scale?

 

  • Table 4 lists two different types of machinery for CVAS. What are their functional differences in practice? Could these differences influence other parameters such as labor wages or training costs?

 

  • Taking China as an example, although annual wages are low, the cost difference between CS and CVAS shown in Figure 5 is not substantial. Moreover, under low discount rates, CVAS becomes more competitive. Could a multi-parameter analysis be added to better support decision-making between CS and CVAS in such contexts?

 

  • Figures 7 and 8 only show a single trend line for CVAS. Does this imply that the two different machine types have identical cost sensitivity slopes? If not, separate curves should be presented to reflect their distinct behaviors.

Author Response

Comment 1: Although the source of the data is indicated, the specific data collection methods and sample sizes are not clearly described. It is recommended to provide more detailed information about how the original data were collected.

Response 1: We thank the reviewer for this observation. The original case data were collected and validated in a prior empirical study (Farshadfar et al., 2025). Data collection combined both primary and secondary sources. Primary data included eight semi-structured interviews (60–90 minutes each) with managers, scientists, and professionals from the Recycling Facility (Finland) and ZenRobotics, complemented by two facility site visits (90 minutes each).

Secondary data were obtained from 25 industry-related documents, company reports, and four YouTube videos (77 minutes total). This mixed-methods approach provided a robust empirical foundation, and the resulting process maps and cost model assumptions were cross-validated with interviewees and industry benchmarks. To address the reviewer’s concern, we have revised the manuscript to include a clearer description of these data collection methods and the case study scope. Revised in Section 2.2.

Comment 2: The throughput of the CS and CVAS methods differs significantly. Is it reasonable to apply a direct scaling factor of 2.3 in the actual calculations without further justification or sensitivity testing?

Response 2: We appreciate this important observation. In our study, the scaling factor of 2.3 was applied to normalize the cost comparison between conventional sorting (CS, 70 tons/hour) and computer vision-enabled automated sorting (CVAS, 30 tons/hour). This ratio directly reflects the operational throughput difference reported by the case company and has also been documented in prior research on the same facility (Farshadfar et al., 2025).

Furthermore, our sensitivity analysis already explores key cost drivers, including machinery, wages, training, and discount rates, which indirectly capture the implications of scaling variations on competitiveness. Nevertheless, we acknowledge that throughput variability could influence financial outcomes. To clarify this point, we have revised the manuscript to explicitly note that the 2.3 factor is case-specific and may vary across facilities and waste streams, and that future work should further investigate scaling dynamics in different operational contexts. Revised in Section 2.2 & Section 4.1.

Comment 3: While maintenance costs are monetized and compared, the analysis does not account for the impact of machine failure rates or system downtime on overall cost. These factors could significantly affect the economic viability of each method.

Response 3: We thank the reviewer for highlighting this important aspect. We fully agree that machine failure rates and unplanned downtime can have a substantial impact on system-level economic performance. In the present study, maintenance costs were included in the cost model as an aggregated expense item, which does not explicitly separate failure-related downtime effects.

Due to data limitations, detailed reliability statistics (e.g., mean time between failures, repair times, or downtime penalties) were not available from the case facility. Nevertheless, we have revised the manuscript to acknowledge this limitation explicitly in Section 4.1 and to note that failure rates and downtime effects are likely to be critical factors in long-term competitiveness.

Comment 4: In Section 3.1.1, the relationship between workforce size and throughput is assumed to be linear, and parameter values are adjusted proportionally. Is this assumption valid? Could there be nonlinear effects or economies/diseconomies of scale?

Response 4: We appreciate the reviewer’s insightful observation. Within each scenario, the cost model presented in Section 3 is applied at a fixed automation level, so workforce and throughput scale proportionally. The two scenarios, however, represent different levels of automation, which is the main source of the expected non-linear relationship between workforce and throughput; this non-linearity does not appear when each scenario is analyzed in isolation without changes in the automation level. We have revised Section 3.1.1 to explain this.

Comment 5: Table 4 lists two different types of machinery for CVAS. What are their functional differences in practice? Could these differences influence other parameters such as labor wages or training costs?

Response 5: We appreciate this valuable comment. In the revised manuscript, we have added an explanation that the two CVAS machine types serve distinct functions: the optical/NIR sensor units perform material detection and classification, while the robotic sorting arms execute the pick-and-place of identified fractions. Revised in Section 3.2.1.

At the case facility, the number of each machine type required is fixed for a given throughput level, which means that their functional differences do not lead to additional variation in labor wages or training costs beyond what is already captured. Specifically, the higher labor and training parameters for CVAS (compared with CS) already reflect the need for more skilled technical staff to operate and maintain the integrated system as a whole.

Comment 6: Taking China as an example, although annual wages are low, the cost difference between CS and CVAS shown in Figure 5 is not substantial. Moreover, under low discount rates, CVAS becomes more competitive. Could a multi-parameter analysis be added to better support decision-making between CS and CVAS in such contexts?

Response 6: We thank the reviewer for raising this point. We would like to clarify that our sensitivity analysis follows the conventional one-factor-at-a-time approach, where each parameter is varied individually while holding others constant. This allows the influence of each cost driver to be transparently observed.

We fully agree that multi-parameter scenarios (e.g., simultaneously varying wages and discount rates) could provide further insights into decision-making. Such approaches are closer to scenario analysis, and require additional data and assumptions beyond the scope of the present study.
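For transparency, the one-factor-at-a-time procedure described in this response can be sketched in a few lines. Everything here is illustrative: the cost function, parameter names, and baseline values are placeholders, not the manuscript’s actual model or figures.

```python
# One-factor-at-a-time (OFAT) sensitivity sketch.
# All parameter names and values are illustrative placeholders,
# not the actual figures from the manuscript.

def total_cost(wage, machinery, maintenance_rate, discount_rate, years=7):
    """Toy discounted-cost model: capex at t = 0, then annual labor
    plus maintenance (a fraction of machinery cost) for each year."""
    annual = wage + machinery * maintenance_rate
    npv = machinery  # capex at t = 0, undiscounted
    for t in range(1, years + 1):
        npv += annual / (1 + discount_rate) ** t
    return npv

baseline = dict(wage=50_000, machinery=1_000_000,
                maintenance_rate=0.05, discount_rate=0.045)

def ofat(param, values):
    """Vary one parameter across a range; hold the rest at baseline."""
    results = []
    for v in values:
        p = dict(baseline, **{param: v})
        results.append((v, total_cost(**p)))
    return results

for rate, cost in ofat("discount_rate", [0.01, 0.045, 0.10, 0.20]):
    print(f"discount_rate={rate:.3f} -> NPV cost = {cost:,.0f}")
```

The same `ofat` call can be repeated for each cost driver (wages, machinery, maintenance rate), which is exactly the structure of a one-factor-at-a-time analysis: each run isolates the influence of a single parameter.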

Comment 7: Figures 7 and 8 only show a single trend line for CVAS. Does this imply that the two different machine types have identical cost sensitivity slopes? If not, separate curves should be presented to reflect their distinct behaviors.

Response 7: We thank the reviewer for this comment. To clarify, we did not assume that the two CVAS machine types have identical cost sensitivity slopes. Rather, as indicated in Section 3.2.1 and in the figure legends, we applied the average equipment price to represent the overall CVAS system configuration. This approach reflects the fact that both types of machines are used together as a single integrated line and are not considered as alternatives.

Accordingly, Figures 7 and 8 present one aggregated CVAS curve based on the averaged machine cost. To avoid any misunderstanding, we have further clarified this point in the revised Results text and in the figure captions.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper divides total CS costs by 2.3 (the throughput ratio 70/30) to “ensure a fair comparison.” This is a strong—and likely inappropriate—assumption because many cost components (labor, facility, capex, maintenance) are not strictly linear in throughput, and because time/shift structures differ across systems.

This single step can materially bias the comparison in favor of CVAS or CS depending on scale effects, fixed vs variable cost splits, and operational scheduling.

Reframe all results per ton processed (or per ton per year) instead of dividing total costs ex post. Model (i) fixed vs variable cost components, (ii) shift counts, (iii) utilization/downtime, and (iv) scaling rules (e.g., multiple robots in parallel vs longer hours). Provide both levelized cost per ton and total NPV to support apples-to-apples comparisons.

The baseline “discount rate” of 4.5% appears derived from macro policy rates, and the sensitivity spans extreme values up to 80%.

The discussion later compares countries using policy/central-bank rates rather than project-specific WACC or sector risk-adjusted rates.

Project evaluation should use WACC (or at least a reasoned real discount rate) reflecting industry risk, capital structure, and inflation. An 80% rate is not decision-relevant and distracts from credible ranges.

Table 1 lists facility costs of €5.0m (CVAS) vs €11.5m (CS) with no justification/source

Provide source, rationale, and a breakdown (land, building, conveyance, power distribution, ventilation, safety, permitting) for both cases. Run a targeted sensitivity on facility cost and utilization.

Use sector-specific wage data (waste management / recycling technicians vs general averages)

Separate tech maintenance salaries for CVAS (often higher) from operators.

Decompose maintenance into (i) planned vendor service, (ii) consumables/spares, (iii) unplanned downtime, and (iv) software subscription/support; provide vendor-style OPEX schedules and include downtime penalties

The model is purely cost-side. It ignores revenue effects: (i) improved purity increases commodity prices; (ii) reduced contamination lowers reject/landfill fees; (iii) higher pick rates enable more product categories.

Abstract: Add concrete quantitative results (e.g., wage break-even, machine cost threshold, headline per-ton differentials) rather than general statements

In Equation 1, define whether capex is at t=0 (then the discount exponent differs), and whether Cf,k,t includes financing/lease

Abbreviations: NPV appears twice in the list; remove duplication

Check consistency of journal styles and duplicate links (some URLs appear twice)

When plotting or citing wages/interest rates by country, always indicate year and currency basis (nominal EUR with conversion rate specified)

The Introduction sets context well but repeats some points (EU waste share appears twice).

The Methods should include a figure mapping process stages to cost buckets (labor, machinery, facility) for CS and CVAS, with counts of machines and worker roles per stage

Share the datasets and code used to compute NPVs and figure values

 

Author Response

No.

Comments

Replies

1

The paper divides total CS costs by 2.3 (the throughput ratio 70/30) to “ensure a fair comparison.” This is a strong—and likely inappropriate—assumption because many cost components (labor, facility, capex, maintenance) are not strictly linear in throughput, and because time/shift structures differ across systems.

This single step can materially bias the comparison in favor of CVAS or CS depending on scale effects, fixed vs variable cost splits, and operational scheduling.

Reframe all results per ton processed (or per ton per year) instead of dividing total costs ex post. Model (i) fixed vs variable cost components, (ii) shift counts, (iii) utilization/downtime, and (iv) scaling rules (e.g., multiple robots in parallel vs longer hours). Provide both levelized cost per ton and total NPV to support apples-to-apples comparisons.

Thank you for this important point. Our goal was to place CS (70 t/h) and CVAS (30 t/h) on a comparable per-throughput basis. In process-industry comparisons, this is commonly handled via cost-to-capacity scaling, where an exponent n < 1 captures non-linear scale effects rather than strict proportionality (Peters & Timmerhaus, 1991; Ulrich & Vasudevan, 2004). Related capacity-based models in resource/material processing (e.g., USGS prefeasibility cost models) and full-cost accounting practices in solid waste normalize costs per ton to compare facilities with unequal processing rates. Building on this, we now (i) clarify that the 2.3 factor was a case-specific normalization, and (ii) add a sensitivity formulation using an exponent n ∈ [0.6, 1] to reflect plausible non-linear scaling across fixed and variable cost components. The lower bound (0.6) reflects the classical “six-tenths rule” commonly used in chemical and process engineering to capture economies of scale (Peters & Timmerhaus, 1991; Ulrich & Vasudevan, 2004), while the upper bound (1) represents the linear baseline assumption of strictly proportional costs. This range spans plausible scale effects in cost components, from strongly non-linear to linear. We also note that alternative shift structures could change effective utilization and therefore per-ton costs. The revised manuscript incorporates these changes and a robustness statement in Section 2.2.
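The cost-to-capacity normalization described above can be sketched as follows. The CS total cost used here is an illustrative placeholder, not a figure from the paper; n = 1 reproduces the linear division by 70/30 ≈ 2.3, while n = 0.6 applies the six-tenths rule.

```python
def scale_cost(cost_ref, cap_ref, cap_target, n):
    """Cost-to-capacity rule: C2 = C1 * (Q2 / Q1) ** n.
    n = 1 is strictly proportional scaling; n = 0.6 is the classical
    'six-tenths rule' capturing economies of scale."""
    return cost_ref * (cap_target / cap_ref) ** n

# Normalize an illustrative CS total cost (70 t/h) down to the
# CVAS capacity (30 t/h) under different scaling exponents:
cs_total = 10_000_000  # placeholder value, not from the manuscript
for n in (1.0, 0.8, 0.6):
    print(f"n={n}: normalized CS cost = {scale_cost(cs_total, 70, 30, n):,.0f}")
```

Because the capacity ratio is below one, smaller exponents yield a larger normalized cost, i.e., the six-tenths rule attributes a smaller cost advantage to downscaling than strict proportionality does.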

2

The baseline “discount rate” of 4.5% appears derived from macro policy rates, and the sensitivity spans extreme values up to 80%.

 

The discussion later compares countries using policy/central-bank rates rather than project-specific WACC or sector risk-adjusted rates

 

Project evaluation should use WACC (or at least a reasoned real discount rate) reflecting industry risk, capital structure, and inflation. An 80% rate is not decision-relevant and distracts from credible ranges.

We thank the reviewer for this important comment. The baseline of 4.5% was chosen to reflect prevailing EU policy rates and to enable transparent cross-country comparison. We agree that WACC or sector-specific real discount rates would be more appropriate for project-level evaluation, but such data were not available for the case company. This limitation is now clarified in the revised manuscript.

 

Regarding the sensitivity range, we retained high values such as 80% as stress-test scenarios. While not typical for project finance in developed economies, some emerging contexts do report extremely high rates (e.g., Turkey reached around 46% in 2024). We have added this explanation to Section 4 to emphasize that upper-bound values illustrate robustness under extreme financial conditions rather than decision-relevant benchmarks.

3

Table 1 lists facility costs of €5.0m (CVAS) vs €11.5m (CS) with no justification/source

 

Provide source, rationale, and a breakdown (land, building, conveyance, power distribution, ventilation, safety, permitting) for both cases. Run a targeted sensitivity on facility cost and utilization.

We thank the reviewer for this helpful observation. We would like to clarify that the facility costs in Table 1 (€5.0m for CVAS and €11.5m for CS) are based on empirical data collected and validated in a prior case study (Farshadfar et al., 2025). We have added a description of the data source in Section 2.2; this dataset was developed from eight semi-structured interviews with facility managers and ZenRobotics experts, two site visits, and a review of 25 industry documents, company reports, and four technical videos. All process maps and cost assumptions, including facility costs, were validated with interviewees and benchmarked against industry sources, ensuring robustness of the dataset.

 

With respect to breakdown, detailed confidential facility cost data (e.g., land, building, conveyance, power distribution, ventilation, safety, permitting) were not disclosed by the case company. However, we have now clarified in the Methods that such elements are included within the reported facility costs.

 

4

Use sector-specific wage data (waste management / recycling technicians vs general averages)

 

Separate tech maintenance salaries for CVAS (often higher) from operators.

 

Decompose maintenance into (i) planned vendor service, (ii) consumables/spares, (iii) unplanned downtime, and (iv) software subscription/support; provide vendor-style OPEX schedules and include downtime penalties

We thank the reviewer for these valuable suggestions. We fully agree that using sector-specific wage data, distinguishing between operator and technical maintenance roles, and decomposing maintenance costs into detailed categories would improve the accuracy of the model.

 

However, such disaggregated data were not available from the case company, and industry-wide vendor OPEX schedules are not publicly accessible. To address this limitation, we adopted proxy measures that capture the main effects at an aggregate level:

 

Labor costs were parameterized using national wage statistics validated with the case company, with CVAS assigned a higher overall wage level (€60k/year vs. €50k/year for CS) to reflect the higher skill requirements of automated systems.

 

Maintenance costs were modeled as 5% of machinery acquisition cost per year, a value validated with interviewees as representative of long-term practice. This aggregate parameter implicitly includes planned servicing, consumables, and typical part replacement.

 

We have clarified these points in the revised Discussion that future research should incorporate more detailed wage structures and vendor-specific OPEX breakdowns, including downtime and software support, once such data become available.

5

The model is purely cost-side. It ignores revenue effects: (i) improved purity increases commodity prices; (ii) reduced contamination lowers reject/landfill fees; (iii) higher pick rates enable more product categories.

We appreciate this valuable observation. Our study intentionally focused on the cost side in order to isolate and analyze the sensitivity of major cost drivers under different economic conditions. Revenue-side factors such as improved commodity prices, reduced landfill fees, and expanded product categories are indeed highly relevant, but such data are strongly dependent on market conditions and were not systematically available from the case company.

 

We agree that incorporating revenue effects would provide a more comprehensive assessment of competitiveness, and we expect that doing so would likely strengthen the case for CVAS, since higher purity and pick rates directly translate into additional value. We have now acknowledged this limitation in Section 4 and suggested that future research extend the model to include revenue-side impacts.

6

Abstract: Add concrete quantitative results (e.g., wage break-even, machine cost threshold, headline per-ton differentials) rather than general statements

We thank the reviewer for this constructive suggestion. We have updated the Abstract to include concrete quantitative results in addition to the general statements.

7

In Equation 1, define whether capex is at t=0 (then the discount exponent differs), and whether Cf,k,t includes financing/lease

Capital expenditures (CapEx) may occur at t = 0 as a lump sum or be distributed across multiple years (0 ≤ t ≤ T). In our model, such investments are discounted year by year according to when they occur. Operating costs Cf,k,t are then added from the first year of operation onward. Financing and leasing costs are included, as are all facility-related costs.
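The capex-timing convention described in this response can be sketched as follows. The schedule, opex level, and 4.5% rate are illustrative placeholders rather than the paper’s actual inputs; the point is only that year-0 costs carry a discount exponent of zero and are therefore undiscounted.

```python
def npv_cost(capex_schedule, opex, rate, horizon):
    """Discount each cost in the year it occurs.
    capex_schedule: {year: amount}; year 0 is effectively undiscounted
    since (1 + rate) ** 0 == 1.
    opex: constant annual operating cost from year 1 onward."""
    npv = sum(c / (1 + rate) ** t for t, c in capex_schedule.items())
    npv += sum(opex / (1 + rate) ** t for t in range(1, horizon + 1))
    return npv

# Lump-sum capex at t = 0 vs the same amount split over years 0 and 1:
lump = npv_cost({0: 2_000_000}, opex=300_000, rate=0.045, horizon=7)
split = npv_cost({0: 1_000_000, 1: 1_000_000}, opex=300_000, rate=0.045, horizon=7)
assert split < lump  # deferring part of the capex lowers its present value
```

This makes the distinction in the comment concrete: whether capex sits at t = 0 or is spread over several years changes the discount exponents applied to it, and hence the total NPV.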

8

Abbreviations: NPV appears twice in the list; remove duplication.

 

Check consistency of journal styles and duplicate links (some URLs appear twice).

 

When plotting or citing wages/interest rates by country, always indicate year and currency basis (nominal EUR with conversion rate specified).

We thank the reviewer for these useful editorial observations. In the revised manuscript we have:

 

Removed the duplicate NPV abbreviation.

 

Checked all references for consistency with journal style and removed duplicate links.

 

Added clarification on the year and currency basis for all wage and interest rate data. Specifically, we now indicate that values are reported in nominal EUR, with conversion rates and reference years specified in Section 3.2.1.

9

The Introduction sets context well but repeats some points (EU waste share appears twice).

 

 

We thank the reviewer for this observation. We fully agree that conciseness is important. In the revised Introduction, we have carefully reviewed the two mentions of the EU waste share. We decided to keep both because they serve different purposes in the narrative: the first establishes the overall scale of waste generation in Europe, while the second highlights the specific contribution of construction and demolition (C&D) waste, which accounts for over one-third of the total. This repetition is intentional to emphasize the centrality of C&D waste in the policy and industry context, but we have slightly rephrased the second mention to minimize the impression of repetition.

10

The Methods should include a figure mapping process stages to cost buckets (labor, machinery, facility) for CS and CVAS, with counts of machines and worker roles per stage.

We appreciate the reviewer’s suggestion to add a schematic figure mapping process stages to cost buckets. While such a figure would certainly be informative, we would like to note that the necessary details are already provided in Table 4, where the cost components are decomposed into labor, machinery, and facility categories, with counts of machines and worker roles specified for both CS and CVAS. To avoid redundancy and maintain conciseness, we have opted to retain the tabular presentation, which provides the same information in a structured and traceable format. We believe this ensures clarity while keeping the Methodology section focused and consistent with the journal’s style.

11

Share the datasets and code used to compute NPVs and figure values.

We thank the reviewer for highlighting the importance of transparency and reproducibility. All empirical data used in this study are already fully reported in the manuscript (see Tables 1 and 3–6), including facility costs, machinery prices, labor wages, training, and maintenance parameters. The NPV and sensitivity analyses were computed directly from these published values using Equation 1 provided in Section 3. Therefore, no additional proprietary datasets were used.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The overall structure of the paper is reasonable, with clearly organized sections that give the work a strong logical flow and allow readers to follow the authors’ line of thought smoothly. The paper provides a solid exposition of the research background, methodological application, and presentation of results, meeting the general standards of an academic publication. Overall, it is highly readable. I recommend acceptance without further revisions.

Author Response

We sincerely thank the reviewer for the positive evaluation and recommendation for acceptance. We appreciate the recognition of our efforts to ensure clarity, logical flow, and readability throughout the manuscript.

Reviewer 4 Report

Comments and Suggestions for Authors

A Conclusion is missing. Please consider extracting a paragraph from the Discussion and transferring it to a Conclusion.

The article “Computer Vision-Enabled Construction Waste Sorting: A Sensitivity Analysis” is well-written, well-organized and easy to understand. The manuscript is clear and relevant to the field of computer vision-enabled robotic sorting. The article contains four well-developed sections: 1) Introduction, 2) Materials and Methods, 3) Results, and 4) Discussion, presented in a well-structured manner. A Conclusion is missing; the manuscript ends with the Discussion. The reviewer suggests that a Conclusion be assembled from paragraphs extracted from the Discussion.

The authors do a good job of synthesizing the literature: the cited references are mostly recent publications (91% are within the last 5 years), and the reference list includes two self-citations, which is not an excessive number. The review is clear, comprehensive and relevant to: 1) construction and demolition (C&D) waste; 2) waste management by conventional sorting (CS) methods; 3) computer vision (CV)-enabled robotics; and 4) Artificial Intelligence (AI), in particular machine learning (ML), for automated waste sorting. The gap in knowledge is identified as the lack of integration between technical performance and cost competitiveness.

The case study in this research is the implementation of the ZenRobotics CV-enabled automated sorting (CVAS) system in a Finnish construction waste recycling center. The methodology clearly explains how the previously published mathematical cost model is extended by incorporating the time value of money. Data were gathered through primary and secondary sources to compare cost components of the CVAS and CS methods over a 7-year time span. In Materials and Methods the authors set out two questions, and the sensitivity analysis is appropriate to answer them. The manuscript is scientifically sound, and its results are reproducible based on the details given in the methods section.

The sensitivity analysis is performed to examine the impact of variations in key cost components on the total cost. The NPV framework is introduced into the model to answer how varying discount rates affect cost competitiveness, along with the influence of specific costs. The results of the sensitivity analysis are presented through comparative data, graphical representations, and implications for competitiveness of specific cost components (number of employees, wages, training costs, machinery costs, maintenance costs, discount rate). The authors’ results are convincing through the comparison of the cost trajectories of CVAS and CS.

The Discussion emphasizes results showing that: 1) the computer vision-enabled system achieves competitiveness over conventional sorting when headcount, wages, and training costs increase, and 2) conventional sorting retains cost advantages in scenarios characterized by higher machinery and maintenance costs or extremely elevated discount rates.

There are two main questions addressed by the research: 1) what is the impact of varying discount rates on the cost competitiveness of CV-enabled robotic systems in construction waste recycling versus conventional sorting? and 2) what is the impact of varying other costs related to headcount, wages, personnel training, machinery, and maintenance operations on the competitiveness of the CV-enabled waste recycling system in construction? This original study addresses the identified gap by extending cost models to capture how wages, training, maintenance, and discount rates influence competitiveness. The economic and operational feasibility of deploying AI and robotics is explored in this article, adding CVAS to the area of construction waste management.

The nine figures are clear and easy to interpret and understand. The seven tables present data appropriately and consistently throughout the manuscript.

Author Response

No.

Comments

Replies

1

A Conclusion is missing. Please consider extracting a paragraph from the Discussion and transferring it to a Conclusion.

We thank the reviewer for this important observation. In the revised manuscript, we have added a separate Conclusion section (Section 5).

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The abstract could better emphasize the novelty. It mostly reiterates known benefits (labor cost reduction, efficiency) rather than underscoring what this study uniquely contributes—i.e., the integration of NPV into sensitivity modeling.

  • The literature review is broad but insufficiently critical. Sources are summarized rather than interrogated. For instance, studies showing >90% accuracy in waste recognition (e.g., Ku et al., Lin et al.) are reported without asking whether such accuracy translates into operational feasibility or ROI.

  • The "gap in literature" section is stronger, but it should have been integrated earlier into the review to provide a sharper critique rather than descriptive build-up.

  • There is limited engagement with counter-evidence. For example, studies that document failures or neutral results of automation adoption are acknowledged only briefly. A deeper critique of such findings would elevate the contribution.

  • Heavy reliance on secondary data (reports, YouTube videos) raises concerns about reliability, despite triangulation. While validated with interviews, more transparency is needed on how assumptions (e.g., 5% maintenance cost, 2.3 throughput scaling factor) were derived and tested.

  • Sensitivity analysis design is conventional (varying parameters across ranges) but lacks justification for why specific ranges (e.g., discount rate up to 80%) were chosen beyond a “stress test” claim. A probabilistic Monte Carlo simulation would provide stronger evidence.

  • The throughput normalization factor (2.3) is case-specific and may undermine generalizability; this limitation is acknowledged but underplays the potential distortion.

  • Figures are numerous but occasionally redundant (e.g., Figures 3–4 both show labor effects). Consolidation could improve clarity.

  • Results are over-interpreted at times. For instance, global wage comparisons (China, India, US) are presented, but these generalizations may oversimplify realities of regulatory and infrastructural differences.

  • Statistical rigor is limited: the analysis is deterministic. No confidence intervals or error margins are reported, which weakens claims of robustness.

  • Discussion is descriptive and lacks theoretical anchoring. No reference is made to established innovation adoption theories (e.g., Technology Acceptance Model, Diffusion of Innovations) that could frame the cost–benefit trade-offs.

  • The international contextualization is appreciated, but the leap from Finland to global wage dynamics is under-theorized and risks overstretching the findings.

Author Response

Comment 1: The abstract could better emphasize the novelty. It mostly reiterates known benefits (labor cost reduction, efficiency) rather than underscoring what this study uniquely contributes—i.e., the integration of NPV into sensitivity modeling.

Response: Thank you for this important point. The novelty of this work arises from the use of a pioneering real-world case study and the improvements offered on a comprehensive comparative cost model for CVAS and CS, and furthermore clarifying the impact of key cost variables on solution (CVAS or CS) selection. Revised in Abstract.

Comment 2: The literature review is broad but insufficiently critical. Sources are summarized rather than interrogated. For instance, studies showing >90% accuracy in waste recognition (e.g., Ku et al., Lin et al.) are reported without asking whether such accuracy translates into operational feasibility or ROI.

Response: We thank the reviewer for this constructive comment. We agree that the literature review benefits from a more critical perspective. Improved in Section 2.3.

Comment 3: The "gap in literature" section is stronger, but it should have been integrated earlier into the review to provide a sharper critique rather than descriptive build-up.

Response: We thank the reviewer for this valuable suggestion. In the revised manuscript, we have repositioned the literature gap to appear earlier within the review section. Improved in Section 2.3.

Comment 4: There is limited engagement with counter-evidence. For example, studies that document failures or neutral results of automation adoption are acknowledged only briefly. A deeper critique of such findings would elevate the contribution.

Response: We appreciate this insightful comment. We fully agree that counter-evidence and neutral findings are crucial to present a balanced view. While our original review briefly acknowledged such studies, the main contribution of this paper is precisely to identify and analyze the conditions under which automation may not yield advantages.

Comment 5: Heavy reliance on secondary data (reports, YouTube videos) raises concerns about reliability, despite triangulation. While validated with interviews, more transparency is needed on how assumptions (e.g., 5% maintenance cost, 2.3 throughput scaling factor) were derived and tested.

Response: We thank the reviewer for this valuable suggestion. As this is a pioneering case of deploying computer vision robotics, access to a wide range of data sources was not possible. However, we applied a meticulous data collection method, including transcription of interviews, analysis of images and videos from site visits, and triangulation, to ensure high data quality.

Comment 6: Sensitivity analysis design is conventional (varying parameters across ranges) but lacks justification for why specific ranges (e.g., discount rate up to 80%) were chosen beyond a "stress test" claim. A probabilistic Monte Carlo simulation would provide stronger evidence.

Response: We thank the reviewer for this comment. As noted in our previous revision, we have already clarified the rationale for using high values such as 80% as stress-test scenarios, with reference to empirical cases (e.g., Turkey reaching ~46% in 2024). This explanation has been added to Section 4 to emphasize that such upper-bound values are intended to illustrate robustness under extreme financial conditions rather than to serve as decision-relevant benchmarks.
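The stress-test logic described in this response can be sketched as a simple NPV sweep over discount rates. All cost figures below are hypothetical placeholders for illustration, not values taken from the study.

```python
# Illustrative NPV stress test over discount rates.
# Cost figures are hypothetical placeholders, not values from the study.

def npv_of_costs(rate, initial_cost, annual_cost, years):
    """Present value of an upfront cost plus a recurring annual cost."""
    return initial_cost + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

INITIAL = 2_000_000  # assumed upfront machinery investment (EUR)
ANNUAL = 300_000     # assumed recurring annual cost (EUR)
YEARS = 10

# 0.46 mirrors the Turkey 2024 example; 0.80 is the stress-test upper bound.
for rate in (0.05, 0.10, 0.20, 0.46, 0.80):
    total = npv_of_costs(rate, INITIAL, ANNUAL, YEARS)
    print(f"discount rate {rate:>4.0%}: NPV of costs = {total:,.0f} EUR")
```

Because only costs are discounted here, a higher rate shrinks the present value of future expenses, which illustrates why extreme rates compress the gap between a capital-heavy and a labor-heavy option.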

Comment 7: The throughput normalization factor (2.3) is case-specific and may undermine generalizability; this limitation is acknowledged but underplays the potential distortion.

Response: We appreciate this important observation. In our study, the scaling factor of 2.3 was applied to normalize the cost comparison between conventional sorting (CS, 70 tons/hour) and computer vision-enabled automated sorting (CVAS, 30 tons/hour). This ratio directly reflects the operational throughput difference reported by the case company and has also been documented in prior research on the same facility (Farshadfar et al., 2025).

Furthermore, our sensitivity analysis already explores key cost drivers, including machinery, wages, training, and discount rates, which indirectly capture the implications of scaling variations on competitiveness. Nevertheless, we acknowledge that throughput variability could influence financial outcomes. To clarify this point, the manuscript explicitly notes that the 2.3 factor is case-specific and may vary across facilities and waste streams, and that future work should further investigate scaling dynamics in different operational contexts in Section 2.2 & Section 4.1.
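The normalization this response describes amounts to scaling CVAS costs so that both systems are compared at equal throughput. A minimal sketch, in which the throughput rates come from the study but the annual cost figure is a hypothetical placeholder:

```python
# Throughput normalization between CS (70 t/h) and CVAS (30 t/h).
# The annual cost figure is a hypothetical placeholder.
CS_THROUGHPUT = 70    # tons/hour, conventional sorting
CVAS_THROUGHPUT = 30  # tons/hour, computer vision-enabled automated sorting

scaling_factor = round(CS_THROUGHPUT / CVAS_THROUGHPUT, 1)  # 2.3, as used in the study
print(f"scaling factor: {scaling_factor}")

cvas_annual_cost = 1_000_000  # hypothetical annual cost of one CVAS line (EUR)
# Scaling CVAS cost up by the factor compares both systems at equal throughput.
normalized_cvas_cost = cvas_annual_cost * scaling_factor
print(f"throughput-normalized CVAS cost: {normalized_cvas_cost:,.0f} EUR")
```

The sketch makes the reviewer's concern concrete: any error in the throughput ratio propagates linearly into the normalized CVAS cost.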

Comment 8: Figures are numerous but occasionally redundant (e.g., Figures 3–4 both show labor effects). Consolidation could improve clarity.

Response: We thank the reviewer for this observation. We carefully revisited Figures 3 and 4 and would like to clarify that they serve different analytical purposes. Figure 3 illustrates how varying workforce size directly affects the total cost trajectory, while Figure 4 provides a cost breakdown under the lowest and lower workforce scenarios, highlighting the relative contribution of each cost component. Presenting them separately allows readers to distinguish between overall cost dynamics and detailed cost composition.

More generally, the relatively large number of figures reflects the comprehensive nature of our sensitivity analysis, which examined multiple parameters (wages, workforce size, training, machinery, maintenance, and discount rate). We believe that retaining these figures separately provides clarity and transparency for each cost driver.

Comment 9: Results are over-interpreted at times. For instance, global wage comparisons (China, India, US) are presented, but these generalizations may oversimplify realities of regulatory and infrastructural differences.

Response: We thank the reviewer for this important observation. We agree that wage differences alone cannot fully capture the complexities of regulatory, infrastructural, and institutional contexts across countries. Our intention in presenting the global wage comparisons (e.g., China, India, US) was to illustrate the sensitivity of the model to labor-related parameters rather than to offer comprehensive country-level assessments. In the revised manuscript, we have clarified this point in the Section 4.1.2 by explicitly noting that these comparisons should be interpreted as illustrative sensitivity tests, and not as complete representations of cross-country feasibility.

Comment 10: Statistical rigor is limited: the analysis is deterministic. No confidence intervals or error margins are reported, which weakens claims of robustness.

Response: We thank the reviewer for this valuable comment. We acknowledge that the analysis is deterministic and does not include statistical confidence intervals or error margins. This choice reflects the case-study design and the limited availability of probabilistic data distributions for key parameters. Our aim was to provide a transparent sensitivity framework rather than statistical inference. In the revised manuscript, we have added a note in the Discussion to explicitly acknowledge this limitation and to highlight that future research should incorporate probabilistic approaches (e.g., Monte Carlo simulation or bootstrapping) to quantify uncertainty ranges and strengthen claims of robustness.
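The probabilistic extension proposed in this exchange could look like the following sketch. All parameter distributions and cost figures are assumptions chosen for illustration, not values derived from the study's data.

```python
# Sketch of the Monte Carlo extension suggested above.
# All parameter distributions and cost figures are illustrative assumptions.
import random

def npv_of_costs(rate, initial, annual, years):
    """Present value of an upfront cost plus a recurring annual cost."""
    return initial + sum(annual / (1 + rate) ** t for t in range(1, years + 1))

random.seed(42)  # reproducible draws
CAPEX = 2_000_000  # hypothetical machinery investment (EUR)

samples = []
for _ in range(10_000):
    rate = random.triangular(0.03, 0.15, 0.08)            # discount rate (low, high, mode)
    maint_share = random.triangular(0.03, 0.08, 0.05)     # maintenance as share of capex
    wages = random.triangular(200_000, 400_000, 300_000)  # annual labor cost (EUR)
    samples.append(npv_of_costs(rate, CAPEX, wages + maint_share * CAPEX, years=10))

samples.sort()
low, high = samples[500], samples[9500]  # approximate 5th and 95th percentiles
print(f"90% interval for NPV of costs: {low:,.0f} to {high:,.0f} EUR")
```

Reporting such percentile intervals instead of single deterministic values would directly address the reviewer's point about missing error margins.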

Comment 11: Discussion is descriptive and lacks theoretical anchoring. No reference is made to established innovation adoption theories (e.g., Technology Acceptance Model, Diffusion of Innovations) that could frame the cost–benefit trade-offs.

Response: Thank you for this important point. We have revised the Discussion section accordingly.

Comment 12: The international contextualization is appreciated, but the leap from Finland to global wage dynamics is under-theorized and risks overstretching the findings.

Response: We thank the reviewer for recognizing the value of the international contextualization. Our intention was not to make generalized claims, but rather to illustrate how labor cost variation influences cost competitiveness. To avoid over-interpretation, we have clarified in the Discussion that the international wage comparisons should be understood as illustrative sensitivity tests rather than as generalizable cross-country conclusions. Revised in Section 4.1.2

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

Reviewer has no suggestions for authors.

Author Response

We sincerely thank the reviewer for the positive evaluation and are pleased that the manuscript was found satisfactory without further suggestions.
