Article

Forecasting Return Quantity, Timing and Condition in Remanufacturing with Machine Learning: A Mixed-Methods Approach

1 Fraunhofer Institute for Manufacturing Engineering and Automation IPA, 95447 Bayreuth, Germany
2 Chair Manufacturing and Remanufacturing Technology, University of Bayreuth, 95447 Bayreuth, Germany
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(14), 6367; https://doi.org/10.3390/su17146367
Submission received: 16 May 2025 / Revised: 11 June 2025 / Accepted: 30 June 2025 / Published: 11 July 2025

Abstract

Remanufacturing plays a key role in the circular economy by reducing material consumption and extending product life cycles. However, a major challenge in remanufacturing is accurately forecasting the availability of cores, particularly regarding their quantity, timing, and condition. Although machine learning (ML) offers promising approaches for addressing this challenge, there is limited clarity on which influencing factors are most critical and which ML approaches are best suited to remanufacturing-specific forecasting tasks. This study addresses this gap through a mixed-method approach combining expert interviews with two systematic literature reviews. The interviews with professionals from remanufacturing companies identified key influencing factors affecting product returns, which were structured into an adapted Ishikawa diagram. In parallel, the literature reviews analyzed 125 peer-reviewed publications on ML-based forecasting in related domains—specifically, spare parts logistics and manufacturing quality prediction. The review categorized data sources into real-world, simulated, and benchmark datasets and examined commonly applied ML models, including traditional methods and deep learning architectures. The findings highlight transferable methodologies and critical gaps, particularly a lack of remanufacturing-specific datasets and integrated models. This study contributes a structured overview of ML forecasting in remanufacturing and outlines future research directions for enhancing predictive accuracy and practical applicability.

1. Introduction

The growing demand for raw materials, driven by rising production volumes and shorter product life cycles, has reinforced the urgency of transitioning toward a circular economy. In response, the European Union has prioritized reducing the use of primary materials by introducing the Clean Industrial Deal, which sets the target that, by 2030, 24% of the raw materials used must be circular materials [1], while also promoting value retention strategies such as reuse, repair, and remanufacturing at the end of product life [2]. Among these, remanufacturing, a key process of the circular economy, is an industrial process that returns a used product, hereafter referred to as core, to “at least its original performance, with a warranty that is equivalent or better than that of the newly manufactured product” [3]. Research has shown that remanufacturing can reduce greenhouse gas emissions by 79–99%, energy usage by up to 55%, and material consumption by up to 50% compared to conventional new production [4,5,6,7,8]. In addition to its environmental benefits, remanufacturing also has notable economic advantages, as remanufactured products are typically sold at lower prices than newly manufactured equivalents [9].
The remanufacturing process typically consists of five steps: (1) disassembly, (2) cleaning, (3) inspection and sorting, (4) repair or replacement, and (5) reassembly, each of which is supported by quality control measures [6]. Therefore, the efficient execution of these steps depends mainly on accurate operational and tactical planning, particularly regarding the availability and condition of incoming cores. However, return flows are influenced by numerous interdependent factors, including customer usage behavior; product life cycle dynamics; business models; the core acquisition strategy, hereafter referred to as take-back strategy; return logistics; and market conditions [9]. These factors collectively contribute to significant variability in the quantity, timing, and condition of returned cores, creating a high degree of uncertainty in forecasting processes [10].
Traditional forecasting techniques, such as linear regression analysis (R), moving average (MA), and exponential smoothing (ES), are typically based on deterministic or rule-based models and often struggle to capture these uncertainties and the nonlinear, heterogeneous patterns present in real-world remanufacturing systems [11,12]. For example, these techniques may fail to account for sudden spikes in returns due to product recalls, unusual usage patterns across customer segments, or variable core quality influenced by environmental conditions. As a result, these methods frequently lead to inaccurate predictions of core availability, causing disruptions and inefficiencies in production planning and execution [13,14].
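As a minimal, illustrative sketch (not taken from the cited studies), the following Python snippet applies simple exponential smoothing to a hypothetical monthly series of core returns; the smoothed forecast lags behind a recall-driven spike and stays inflated afterwards, illustrating the limitation described above.

```python
import numpy as np

def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: the forecast for the next period is a
    weighted blend of the latest observation and the previous smoothed level."""
    level = series[0]
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return np.array(forecasts)

# Hypothetical monthly core returns with a recall-driven spike in month 9
returns = np.array([42, 38, 45, 40, 44, 39, 41, 43, 120, 47, 44, 46], dtype=float)
print(exponential_smoothing(returns).round(1))
# The smoothed forecast reacts only after the spike and overestimates the following months.
```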
In contrast, ML offers promising capabilities by enabling data-driven predictions based on large and diverse datasets. ML models, e.g., decision trees (DTs), support vector machines (SVMs), or artificial neural networks (ANNs), are capable of capturing nonlinear relationships, managing missing or noisy data, and learning from complex historical patterns, which makes them particularly suitable for forecasting return quantity, timing, and condition [14,15]. However, despite growing academic interest in this area, there is still limited understanding of which influencing factors are most critical and which ML models are best suited for forecasting in the specific context of remanufacturing.
Previous research has explored various forecasting methods in related contexts. Early studies, such as those by Goh and Varaprasad [16] and Kelle and Silver [17], proposed statistical models to estimate returns of reusable containers. While foundational, these works do not capture the complexities inherent in remanufacturable products. Guide [11] highlighted this distinction, emphasizing the added uncertainty in return quantity and timing. A subsequent study by de Brito and van der Laan [12] examined the cost implications of different forecasting models under imperfect data, showing that more data does not always yield more accurate results.
Hybrid approaches have been developed to overcome data limitations. Marx-Gomez et al. [18] applied a combination of simulation, fuzzy logic, and neuro-fuzzy systems to predict returns of photocopiers. Clottey et al. [19] and Geda and Kwong [20] advanced time-lag-based forecasting using distributed lag models, with the latter incorporating Bayesian inference for parameter estimation. Additionally, methods like ARIMA and Holt–Winters have been used for seasonal forecasting [21], while neural networks have been applied to predict the return quantity of heavy machinery [15].
While forecasting the condition of returned cores has received comparatively less attention than quantity and timing in the remanufacturing literature, several studies have addressed this challenge, often through core quality assessment or grading frameworks. For instance, Ponte et al. [22] and Stamer and Sauer [23] explore methods for classifying the condition of returned products to support remanufacturing decisions. These approaches typically involve categorizing cores into predefined quality levels (e.g., usable, repairable, or scrap) based on visual inspection, historical usage data, and sensor-based condition indicators.
More strategic, long-term forecasting approaches have also emerged in recent years, particularly in the context of electric vehicle (EV) batteries. Liang et al. [24] present a forecasting model that estimates the quantity and quality of returned EV batteries by integrating product sales, customer behavior, usage intensity, and expected service life. These variables are modeled as separate probability distributions and combined using a convolution-based approach to capture their interdependencies.
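To make the convolution idea tangible, the sketch below combines a monthly sales series with a discrete return-delay distribution; it is not the model of Liang et al. [24], and the sales figures, service-life distribution, and return rate are purely hypothetical assumptions.

```python
import numpy as np

# Hypothetical monthly sales of one product generation
sales = np.array([100, 120, 150, 130, 110, 90, 80, 60, 40, 20], dtype=float)

# Hypothetical service-life distribution: probability that a sold unit
# comes back k months after purchase (probabilities sum to 1)
life_pmf = np.array([0.00, 0.00, 0.05, 0.10, 0.20, 0.25, 0.20, 0.12, 0.08])

return_rate = 0.6  # assumed share of sold units that are ever returned

# Expected returns per month = return rate times the convolution of the
# sales series with the return-delay distribution
expected_returns = return_rate * np.convolve(sales, life_pmf)
print(expected_returns.round(1))
```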
Similarly, Huster et al. [25] examine long-term return forecasting for remanufacturing capacity planning. Their study focuses on how different forecasting assumptions, such as variations in service time, return rates, and quality levels, affect planning outcomes. They highlight the need for scenario-based modeling to handle the inherent uncertainties in core returns and stress the value of quality-related forecasting in aligning remanufacturing capacity with actual return flows.
Despite the relevance of these studies, their isolated focus makes it difficult to generalize findings across remanufacturing contexts. It is still unclear which influencing factors are decisive and which ML models are particularly suitable for use in remanufacturing. Against this background, the following research questions (RQs) arise, which are addressed in this study:
  • RQ1: Which data characteristics are commonly used to train machine learning models for forecasting the availability of returned cores in remanufacturing? This question identifies the types of input variables (features) commonly selected to develop predictive models. Understanding the nature and relevance of these features is essential for designing accurate and practically implementable models in real-world remanufacturing environments.
  • RQ2: Which data sources are commonly used for training machine learning models for forecasting the availability of returned cores in remanufacturing? This includes exploring the origins of the datasets used to train and evaluate models. Clarifying the advantages and limitations of each data source helps assess their transferability to remanufacturing applications, where data availability is often limited or inconsistent.
  • RQ3: Which supervised machine learning models are commonly applied to forecast return quantity, timing, and condition of cores? The aim is to analyze the types of supervised learning problems addressed in the literature and assess the prevalence and suitability of different ML algorithms.
Based on these research questions, this study aims to derive key insights and highlight open challenges associated with forecasting in remanufacturing. Moreover, an outlook on future research directions is provided to support the broader adoption of ML-based forecasting approaches in academia and industry.
The remainder of the study is structured as follows. Section 2 describes the methodology, including expert interviews used to identify relevant influencing factors and systematic literature reviews undertaken to evaluate applicable ML models. Section 3 presents the empirical and literature-based results of the three research questions. Section 4 discusses these findings in the context of remanufacturing challenges and opportunities. Section 5 concludes the study and provides suggestions for future research.

2. Materials and Methods

This study applies a mixed-method approach, combining an empirical–exploratory component with a systematic literature review. Semi-structured expert interviews were conducted with professionals from remanufacturing companies to gain practical insights into the influencing factors affecting the predictability of return quantity, timing, and condition. In addition, a systematic literature review was carried out to assess the current research on machine learning methods used for forecasting in remanufacturing and adjacent domains.

2.1. Expert Interviews

As a first step, expert interviews were used to gather domain-specific insights on factors influencing core returns, directly addressing RQ1. A qualitative empirical approach was adopted using semi-structured expert interviews to investigate these practical influencing factors. This method is well-suited for accessing context-specific, experience-based knowledge and is commonly used in exploratory research [26,27]. The semi-structured format enables consistency across interviews while also allowing for flexible, open-ended exploration of case-specific insights [28]. A stimulus-based format was employed to enhance validity: predefined influencing factors were presented to the participants for assessment and commentary.
These influencing factors were identified beforehand through a brainstorming session aimed at compiling a broad range of potentially relevant variables. This approach is instrumental in expert settings where time is limited but depth of response is essential [29]. Participants were also encouraged to suggest additional factors not covered in the predefined list.
Five interviews were conducted with professionals responsible for evaluating and processing returned cores, as summarized in Table 1. Participants were selected via purposive sampling to ensure relevant domain expertise. Data saturation was observed after the fourth interview, as recurring themes consistently emerged. The fifth interview confirmed the saturation of key insights, indicating that additional interviews were unlikely to yield substantially new information. All interviews were conducted online using video conferencing tools. Audio recordings were not made in order to create an open and informal environment. Instead, a third-party observer documented detailed, real-time notes that were structured in a result-oriented format. While this approach supported open dialogue, it also introduced potential limitations, such as observer bias or incomplete data capture. To mitigate these risks, the interviewees subsequently reviewed and confirmed the interview notes, ensuring accuracy and data quality.

2.2. Literature Review

A structured literature review was conducted to systematically assess the application of machine learning models for forecasting return quantity, timing, and condition in remanufacturing. Various models for literature review processes exist in the academic domain, such as those proposed by Bandara et al., Levy & Ellis, Li & Wu, and vom Brocke et al. [30,31,32,33]. While these models may vary in detail, they typically follow a three-phase structure: planning, conducting, and reporting [34]. The literature review conducted in this study is based on a generic process model proposed by Brendel et al., which synthesizes various established review models [35]. This model refines the traditional three-phase structure by dividing it into six systematic steps: goal definition, definition of review scope, search, analysis, synthesis, and discussion. The latter model is used for the literature review in this study and visualized in Figure 1.
Following the model proposed by Brendel et al. [35], two separate literature reviews were conducted to identify relevant machine learning models applied in core forecasting in remanufacturing. One review focused specifically on models used to forecast return quantity and timing, while the other addressed models related to the prediction of core condition.
As part of the preparation, the aim of both literature reviews is first defined, the research area is narrowed down, and the research question is precisely formulated [35]. Both reviews aimed to identify and critically evaluate peer-reviewed publications applying supervised machine learning techniques to forecasting in remanufacturing or related domains. The analysis concentrates on the types of ML models employed, the nature of the data sources, and the potential transferability of existing approaches to remanufacturing contexts.
Given the limited number of direct applications of ML in remanufacturing, a transfer strategy was adopted:
  • For return quantity and timing, the scope was extended to include publications on spare parts demand forecasting due to structural similarities such as uncertain return flows, failure rates, and non-regular demand patterns.
  • For condition prediction, the review focused on studies in manufacturing that apply ML to quality, defect, and fault prediction, based on the conceptual overlap with assessing the condition of returned parts.
This approach enabled a broader, yet targeted, exploration of machine learning applications with potential relevance to remanufacturing.
Following the classification by Banker and Kauffman, both literature reviews focus on the research directions of decision support and design science [37], aligning them with RQ2 and RQ3 defined in the Introduction.

2.2.1. Forecasting Return Quantity and Timing

The search was conducted in February 2025 across four major academic databases: IEEE Xplore, ScienceDirect, Scopus, and Wiley Online Library. Boolean operators were used to construct precise title-based queries. Due to the scarcity of literature directly focused on remanufacturing, the search included publications on spare parts demand forecasting, given the similarities in logistical uncertainty and forecasting requirements. The exact query is illustrated in Figure 2.
During the literature search, the previously identified journals and databases were systematically searched for relevant publications [35]. The initial search yielded a dataset of 194 publications. These publications were then subjected to a formal and content-based analysis, in which unsuitable results were removed using defined exclusion criteria. During the formal screening, 54 duplicates were eliminated and 37 publications were excluded based on predefined criteria (e.g., language, time horizon, and scientific nature). The remaining 103 entries underwent a content-based relevance assessment with regard to RQ2 and RQ3, following the approach by Booth et al. [38], involving a three-stage screening of titles, abstracts, and full texts. This process led to the exclusion of 31 publications: five during the title screening, 19 during the abstract review, and seven after full-text evaluation, resulting in a final dataset of 72 studies. The selection process is documented in Figure 3, based on the “Preferred Reporting Items for Systematic reviews and Meta-Analyses” (PRISMA) statement [39].
In the synthesis phase, the selected publications were analyzed in relation to the types of ML models employed, the data set source, and the context of their application. A qualitative content analysis was conducted to identify recurring modeling strategies and trends relevant to forecasting in remanufacturing. These findings are presented in detail in Section 3.

2.2.2. Forecasting the Condition of Cores

A second literature review was conducted to investigate supervised learning methods for predicting the condition of cores, focusing on approaches analogous to remanufacturing. This review followed the same six-step structure as the previous one, ensuring methodological consistency.
As before, the review targeted peer-reviewed articles published after 2015 in English or German. The databases and search logic remained unchanged. However, due to the lack of remanufacturing-specific studies on condition prediction, the scope was extended to include publications on quality, defect, and fault prediction in manufacturing. This methodological transfer assumes that analogous challenges in production environments can inform condition prediction upon return. The specific search query is depicted in Figure 4.
The search yielded 250 publications. Following a similar screening approach, 74 duplicates and 42 formally non-compliant publications were excluded. The remaining 134 entries were assessed in three steps—titles, abstracts, and full texts—leading to the exclusion of 81 studies. In total, 53 publications were included in the analysis. The selection process is visualized in Figure 5 using the PRISMA statement [39].
In the synthesis and discussion phases, these publications were analyzed to identify the types of machine learning models used, the learning tasks (e.g., classification or regression), and the context of their application. The results form the basis for a structured evaluation of condition forecasting methods in Section 3.

3. Results

This chapter presents the findings from the expert interviews and systematic literature reviews conducted in this study. The objective is to identify key influencing factors affecting the return quantity, timing, and condition of cores in remanufacturing, and to evaluate which machine learning models are most commonly applied for predictive tasks in this domain.

3.1. Identification of Influencing Factors of Return Quantity, Timing, and Condition of Cores

To systematically identify and categorize the factors influencing return quantity, timing, and condition in remanufacturing, insights from expert interviews were consolidated using a cause-and-effect diagram, commonly known as an Ishikawa or fishbone diagram. This visualization method, originally developed by Kaoru Ishikawa, is particularly well-suited for revealing complex interdependencies between variables [40].
The Ishikawa diagram groups the influencing factors into five adapted categories: People, Material, Environment, Returns, and Product, which are derived from the traditional 5-M method (Man, Machine, Material, Method, and Mother Nature) [41]. This structure was modified to better reflect the specific context of remanufacturing, as shown in Figure 6. The resulting categorization, particularly the distribution of attributes across categories, helps to identify underrepresented areas and supports a more structured and comprehensive brainstorming of additional influencing factors.
The People category includes all human-related influences, encompassing both customers and users on the one hand, and production and logistics personnel on the other. The customer’s behavior directly affects the return timing and condition of a product through several steps: the purchase decision, the intensity and duration of product use, and ultimately, the decision to return it [24]. Improper use, delayed return, or failure to return the part altogether can drastically impact the predictability of returns. In addition, production employees also play a critical role. Errors during manufacturing, handling, or transport can negatively influence a product’s lifespan and return condition. Interview participants repeatedly highlighted the importance of staff training and procedural adherence in minimizing unplanned wear or damage prior to the return of cores.
The Material category refers to the quality, durability, and sourcing of the input materials used in the original product. Poor material quality or variability can reduce a part’s reliability, leading to premature failure, affecting the return condition, and potentially triggering a customer-initiated return. Thus, while material aspects are primarily associated with product condition, they can also indirectly influence return timing and volume through their impact on failure rates.
The Environment category encompasses external factors, including climatic conditions, market dynamics, and broader macroeconomic trends. Experts emphasized that environmental conditions (e.g., temperature, humidity, geographic location) significantly affect product degradation and customer usage behavior. From a market perspective, fluctuations in sales of new products directly impact the availability and return rate of corresponding cores. As pointed out by multiple interviewees, there is often a causal correlation between the timing of new product sales and the timing of returns [42,43].
The Returns category captures the design and structure of the take-back system used by a company. Experts identified a wide range of take-back models in practice, including [44,45]:
  • Ownership-based take-back;
  • Contractual take-back agreements (e.g., service contracts, contract repairs);
  • 1:1 returns;
  • Returns tied to discounts on remanufactured goods;
  • Purchase-based returns;
  • Voluntary returns.
Each of these systems entails different levels of predictability. According to Agrawal and Singh, companies that maintain ownership of the product, e.g., by offering product–service systems, or that have contractual obligations for returns, are generally more capable of accurately forecasting return timing and quantity [46]. Interview insights strongly supported this, particularly from remanufacturers with integrated service contracts.
The Product category includes the inherent characteristics, such as design complexity, application area, and expected service life. These features determine the wear profile, directly influencing the return condition. Components with longer life cycles or modular design tend to be returned in more predictable intervals and often in better condition. As several experts noted, a product failure reflects degradation in condition and is frequently the trigger for the return itself, making it both a consequence and a cause.
A central insight from the expert interviews is that these categories do not operate in isolation. Instead, they form a network of interacting variables. For example, poor material quality (Material) might lead to failure (Product), but the impact on return timing may depend on usage intensity (People) and environmental exposure (Environment). Similarly, while a take-back model (Returns) determines whether and when a product is returned, the condition upon return is shaped by all preceding factors. This interdependency underlines the complexity of forecasting in remanufacturing and highlights the necessity for data-driven approaches to capture such multi-factorial dynamics.

3.2. Machine Learning Methods for Forecasting Return Quantity and Timing of Cores

This section presents the literature review results concerning ML methods used to forecast the quantity and timing of returned cores. Due to the limited number of studies focusing directly on remanufacturing, the review was extended to include spare parts demand forecasting, given the structural similarities between the two domains. These include uncertain return flows, component life cycles, failure rates, and variable demand patterns, all of which are difficult to predict. The review emphasizes data sources and the applied machine learning models, as these elements are relatively domain-agnostic and thus transferable across industries.

3.2.1. Data Set Sources

In the context of this literature review, three primary sources of data were identified, from which the analyzed publications obtained their datasets for model training:
  • Simulated data generated through computational models;
  • Freely available benchmark datasets, often created in the context of competitions or for comparative research purposes;
  • Real-world data.
Table 2 provides an overview of the reviewed publications according to their primary data source. While only a small proportion of studies relied on simulation data (11%) or benchmark data (7%), most publications utilized real-world data for training and validating their models.
The following provides a detailed discussion of each data source category used for model development, training, and validation.
Simulation: Eight publications [20,24,47,48,49,50,51,52] employed synthetic data generated through simulations of manufacturing or remanufacturing processes. Simulation was especially prevalent in studies addressing remanufacturing, where access to operational data is often restricted due to confidentiality or the complexity of reverse logistics [20,24,48,49,50]. Several studies used Monte Carlo simulations to replicate return flows by sampling from statistical distributions that represent uncertainty in variables such as return timing, product lifespans, failure rates, and external disruptions [20,48,49]. In addition to validation, simulation environments often served as controlled testbeds for tuning model hyperparameters and exploring different configurations [20,49]. Jung et al. explicitly cited data scarcity as a motivation for using simulation, emphasizing its role as a practical workaround in the early stages of model development [47].
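As an illustration of this simulation strategy, the sketch below generates synthetic monthly return counts by Monte Carlo sampling; the lognormal service-life distribution, the return probability, and all parameter values are illustrative assumptions rather than values reported in the cited studies.

```python
import numpy as np

def simulate_returns(n_units=1000, horizon=60, n_runs=500, seed=42):
    """Monte Carlo simulation of monthly core returns: each sold unit receives a
    sampled service life (lognormal, in months) and a Bernoulli draw deciding
    whether it is returned at all; monthly return counts are aggregated per run."""
    rng = np.random.default_rng(seed)
    monthly = np.zeros((n_runs, horizon))
    for run in range(n_runs):
        lifetimes = rng.lognormal(mean=3.0, sigma=0.4, size=n_units)  # median ~20 months (assumed)
        returned = rng.random(n_units) < 0.65                          # assumed return probability
        months = np.clip(np.round(lifetimes).astype(int), 0, horizon - 1)
        for m in months[returned]:
            monthly[run, m] += 1
    return monthly

runs = simulate_returns()
print("mean returns, first 12 months:", runs.mean(axis=0)[:12].round(1))
print("90% interval in month 24:", np.percentile(runs[:, 24], [5, 95]))
```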
Benchmark: Five studies utilized publicly available datasets as a training basis for ML models [53,54,55,56,57]. These benchmark datasets supported comparability across studies and reproducibility of results. Notably, three studies [53,54,55] used maintenance data from the Army DELIIS system, which includes detailed records of spare parts failures and maintenance histories for military vehicles such as tanks and aircraft. These datasets enabled the application of various ML techniques, including text mining, time series analysis, and neural networks. Chandriah et al. [56] employed Norway’s public vehicle registration data to forecast spare parts demand using deep learning, while Pawar and Tiple [57] adapted a nontraditional dataset, the Vietnam War Bombing Operations dataset, for predictive modeling, highlighting creative strategies for compensating for data scarcity.
Real-world data: A total of 59 of the 72 reviewed publications [15,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115] used empirical datasets from actual operations, making this the dominant data source (82%). These datasets often include historical time series tracking usage, failures, maintenance events, inventory levels, and sales. The data were typically sourced from sectors such as military logistics, automotive aftermarket, or heavy machinery industries, where the need for accurate return forecasts is especially pronounced, reflecting a broad applicability across different domains. While real-world data increases model relevance and external validity, it also presents practical challenges, including missing values, inconsistent measurements, and noise—issues that must be addressed during preprocessing and model training.

3.2.2. Machine Learning Methods

After assessing the data sources, the review turned to the machine learning models applied to forecast return quantity and timing. The analysis considered both the types of models employed and the experimental strategies used to evaluate their effectiveness.
Overall, 34% of the publications (24 out of 71) evaluated only a single model or different variants of the same model, such as alternative architectures or hyperparameter settings. The remaining 66% (47 publications) compared multiple models in their experiments. In the following, the term “prime model” refers to the best-performing model or the primary focus of a publication, while “baseline model” denotes those used for performance comparison. Table 3 presents the ten most frequently used ML models across the reviewed publications.
Traditional statistical models: Classical forecasting techniques—including Moving Average (MA), Autoregressive Integrated Moving Average (ARIMA), Exponential Smoothing (ES), Syntetos–Boylan Approximation (SBA), and Croston (CR)—were widely used, primarily as baseline models [65,89,107,109]. These models are valued for their simplicity and interpretability, and they perform well on stable time series data with consistent trends or seasonality. However, their limitations become apparent when faced with highly volatile, nonlinear, or intermittent return patterns, which are common in core returns for remanufacturing. Consequently, they are often used to benchmark more flexible ML-based models.
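For readers unfamiliar with these intermittent-demand baselines, the following sketch implements Croston's method with the optional SBA bias correction on a hypothetical, largely zero-valued return series; parameter choices are illustrative.

```python
import numpy as np

def croston(demand, alpha=0.1, sba=False):
    """Croston's method: smooth the nonzero demand sizes (z) and the intervals
    between them (p) separately; the forecast is z / p. With sba=True the
    Syntetos-Boylan Approximation applies the (1 - alpha/2) bias correction."""
    z = p = None          # smoothed demand size and smoothed inter-demand interval
    q = 1                 # periods since the last nonzero demand
    forecasts = np.full(len(demand), np.nan)
    for t, d in enumerate(demand):
        if z is not None:
            forecasts[t] = (1 - alpha / 2) * z / p if sba else z / p
        if d > 0:
            if z is None:
                z, p = float(d), float(q)
            else:
                z = alpha * d + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return forecasts

# Hypothetical intermittent core returns (mostly zero periods)
demand = np.array([0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4, 0])
print(croston(demand, sba=True).round(2))
```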
Traditional machine learning models: Algorithms such as Regression (R), Support Vector Machines (SVMs), and Random Forests (RFs) were frequently applied due to their ability to capture nonlinear dependencies and handle diverse input features. SVMs and RFs, in particular, demonstrated robustness against overfitting and performed well even with smaller or incomplete datasets, making them especially relevant in contexts where high-quality data is scarce or inconsistent [69,96,113]. These models also support interpretability and can handle categorical and continuous input variables.
Deep learning models: Advanced models such as Artificial Neural Networks (ANNs) and Long Short-Term Memory (LSTM) networks were increasingly employed in studies dealing with larger, more complex datasets [56,66,103]. LSTMs in particular excel at modeling temporal dependencies, making them ideal for time-series forecasting tasks in remanufacturing. However, their high data and computational requirements can present obstacles in industrial settings where training data may be limited or noisy [56,70].
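A minimal PyTorch sketch of such an LSTM-based forecaster is shown below; the architecture, window length, and placeholder training data are assumptions for illustration only and do not reproduce any specific model from the reviewed studies.

```python
import torch
import torch.nn as nn

class ReturnLSTM(nn.Module):
    """Minimal LSTM that maps a window of past monthly return counts
    (plus any exogenous features) to a one-step-ahead forecast."""
    def __init__(self, n_features=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # forecast from the last time step

model = ReturnLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 12, 1)   # 64 sliding windows of 12 months (placeholder data)
y = torch.randn(64, 1)       # next-month return counts (placeholder targets)
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```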

3.3. Machine Learning Methods for Forecasting the Condition of Cores

Building on the return quantity and timing findings, this section examines machine learning methods applied to forecasting the condition of returned cores. Due to a lack of studies specifically addressing condition prediction in remanufacturing, the review draws on adjacent research in quality prediction within manufacturing processes. The underlying rationale is that the core condition upon return can be analogously modeled using quality-related indicators derived from production data.
Within the reviewed literature, two primary methodological approaches were identified: classification and regression, each corresponding to the nature of the predicted outcome. Classification methods, used in 21 of the 53 reviewed studies, aim to predict discrete quality classes (e.g., pass/fail, scrap/non-scrap). Regression methods, applied in 32 studies, focus on forecasting continuous condition metrics such as wear level, dimensional deviations, or surface roughness. The following sections provide a detailed examination of data sources and model types used in both approaches, with a view toward their relevance and transferability to remanufacturing.

3.3.1. Data Set Sources

As with forecasting return quantity and timing, three primary data sources were identified in the reviewed literature on condition prediction: real-world production data, simulated data, and public benchmark datasets. These sources mirror the structure and characteristics discussed in the previous section. Table 4 provides an overview of the reviewed publications classified according to these data source types.
Simulation: Two publications [116,117] used simulation-based data to train and validate models under controlled conditions, particularly when real-world data was scarce. For instance, Alenezi et al. [116] implemented a physics-informed, weakly supervised approach incorporating domain knowledge into model training. Similarly, Wang et al. [117] utilized graph-based modeling to emulate variability in mass-customized production settings. In both cases, simulation facilitated the modeling of complex manufacturing dynamics without sufficient empirical data.
Benchmark: Twelve studies [118,119,120,121,122,123,124,125,126,127,128,129] employed publicly available datasets to benchmark model performance. These datasets often feature high dimensionality and class imbalance, making them ideal for testing the robustness of hybrid or ensemble methods. For example, Bai et al. [126] and Zhou et al. [118] evaluated models under realistic industrial constraints, while others, like Psarommatis et al. [120] and Kao et al. [119], focused on multistage or zero-defect manufacturing processes, using datasets that reflect sequential quality control points and sparse defect labels.
Real-world data: The majority of studies, 39 out of 53, relied on empirical datasets collected from operational manufacturing environments [130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168]. These covered a broad range of applications, including additive manufacturing [133,156,167], semiconductor production [135,142], metal casting [141], and bioprocessing [164]. The datasets often included time-series sensor readings, process parameters, and post-production quality labels.
Although real-world data offers strong external validity and practical relevance, it also introduces challenges such as missing values, inconsistent sampling, and class imbalance. These issues were addressed using advanced preprocessing techniques, transfer learning, or ensemble modeling [123,150,154]. Several studies also explored the integration of edge computing [147,153] and explainable AI methods [144] to enhance real-time applicability and transparency in decision-making.

3.3.2. Machine Learning Methods

The analysis of machine learning methods for condition prediction revealed various techniques, reflecting the complexity and variety of manufacturing quality data. As with return forecasting, approximately one-third of the studies applied a single model or internal model variants, while most conducted comparative analyses of multiple approaches. Table 5 summarizes the ten most commonly used ML models identified across the reviewed publications.
Regression: A total of 32 publications employed regression models, such as linear or multiple regression, to predict continuous quality characteristics. Among traditional approaches, regression models remain a common baseline due to their simplicity and interpretability, appearing in 12 studies [116,120,125,133,134,139,140,150,159,164,167]. While often outperformed by more advanced methods, they are still valuable for low-variance scenarios and for establishing benchmark accuracy levels.
SVM and Random Forest (RF) are the most frequently applied traditional machine learning models in this category. SVM was used in 16 studies [116,121,123,125,126,127,129,131,134,137,141,143,146,159,161,164], demonstrating strong performance in modeling nonlinear relationships, particularly with smaller datasets. RF appeared in 14 studies, offering ensemble-based robustness against overfitting and interpretability through feature importance ranking [116,117,125,128,131,133,136,137,138,139,140,146,159,164].
Deep learning models are also increasingly applied in regression tasks. ANNs were used in 16 publications [121,122,125,126,129,131,134,136,137,141,143,145,152,159,164,167], typically in scenarios with sufficient training data and complex feature interactions. More advanced models like LSTM appeared in six studies [120,133,140,141,145,161], especially where sequential dependencies or time-based process characteristics were central in determining final product quality.
Classification: Of the 53 reviewed publications, 21 studies applied classification techniques to predict discrete quality outcomes, such as defect classes, pass/fail status, or categorical condition levels. SVM emerged as the most frequently used classification method, adopted in 12 studies [118,119,147,148,149,153,155,156,157,158,160,162]. SVM’s ability to construct hyperplanes in high-dimensional spaces makes it particularly effective for separating class boundaries in imbalanced or noisy datasets—a common feature of industrial defect data.
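To make this concrete, the sketch below trains an RBF-kernel SVM classifier with class weighting against imbalance; the feature names, labels, and synthetic data are hypothetical assumptions that only illustrate the type of scrap/non-scrap classification task described here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical core features: operating hours, overhaul count, vibration level, age
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0.8).astype(int)  # 1 = scrap

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# RBF-kernel SVM with class weighting to counter the imbalance typical of defect data
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["non-scrap", "scrap"]))
```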
RF also plays a central role in classification, appearing in ten publications [132,144,151,153,157,158,160,162,163,168]. Its ensemble structure allows it to generalize well, even in highly variable production environments. Other widely used methods include Decision Trees (DTs) [119,144,148,151,153,155,157,160,162,168] and Extreme Gradient Boosting (XGB) [144,153,158,162,163,165], which offer strong performance and scalability, especially in cases with limited or imbalanced labeled data.
Neural networks continue to gain traction in classification tasks as well. Although less prevalent than in regression, ANNs were still used in eight studies for categorical quality prediction [119,132,135,147,148,156,157,160]. These models are particularly favored when quality classification is based on complex sensor signals or multivariate process data.
In summary, the reviewed literature on quality prediction in manufacturing demonstrates a broad spectrum of machine learning techniques, each tailored to the nature of the data and the forecasting objective, whether categorical or continuous. This methodological diversity underscores the field’s maturity and provides valuable insights for transferring these approaches to remanufacturing, where similar challenges arise in predicting the condition of cores. The following discussion will synthesize these findings and reflect on their implications for future research and industrial applications in remanufacturing.

4. Discussion

This study aimed to identify key influencing factors and suitable ML approaches for forecasting return quantity, timing, and condition of cores in remanufacturing. The findings offer practical and academic perspectives from expert interviews and two systematic literature reviews. The following discussion synthesizes these insights, focusing on their transferability to the remanufacturing context and outlining current limitations and future research needs.

4.1. Transferability of Influencing Factors and Data Sources

The expert interviews revealed that the predictability of returns is influenced by a diverse set of interrelated factors, spanning product design, material quality, customer behavior, usage context, and take-back strategies. These factors were systematically categorized using an adapted Ishikawa diagram to reflect their multi-causal nature. Particularly noteworthy is the cascading influence of user behavior and product failure on return timing and condition, highlighting the need for data that captures technical, behavioral, and contextual variables. However, it should be noted that all interviewees came from the transportation sector, including automotive, rail, and bicycle industries, which may introduce a sector-specific bias in the identified influencing factors and their perceived importance.
Once these influencing factors are identified, the next step involves locating and extracting the relevant data from the company’s IT systems. This requires assessing where the data is stored, evaluating whether its granularity is sufficient, and determining how it can be accessed for model development. Not all identified factors are equally relevant, as some may have their influence overestimated or underestimated. Therefore, applying feature selection techniques is essential to isolate the most impactful variables, ensuring that the final set of model inputs contributes meaningfully to forecasting accuracy.
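As one possible way to operationalize this feature selection step, the sketch below ranks hypothetical candidate features (loosely named after the Ishikawa categories) by permutation importance using scikit-learn; the feature names and data are illustrative assumptions, not company data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical candidate features loosely mapped to the Ishikawa categories
features = ["usage_hours", "ambient_humidity", "takeback_contract", "service_events", "product_age"]
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(400, 5)), columns=features)
y = 2.0 * X["usage_hours"] + 0.8 * X["product_age"] + rng.normal(scale=0.5, size=400)

# Fit a forest and rank features by how much shuffling each one degrades the fit
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:18s} {score:.3f}")
```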
The literature reviews confirmed that machine learning models for forecasting return timing, quantity, and condition typically rely on three types of data: real-world operational data, simulated datasets, and publicly available benchmark datasets. This combination mirrors the data landscape in remanufacturing, where proprietary constraints, inconsistent data quality, and limited access to labeled datasets present ongoing challenges. While real-world data offers the highest fidelity for deployment, simulated and benchmark datasets remain crucial for early experimentation, model comparison, and addressing edge cases, particularly in underexplored remanufacturing scenarios.
However, the influencing factors identified in adjacent research domains, such as spare parts demand forecasting or quality and defect prediction in manufacturing, often differ from those relevant in remanufacturing. These domains focus on structured, product-centric variables within stable, linear supply chains. In contrast, remanufacturing involves greater variability and uncertainty, driven by diverse customer usage behaviors, inconsistent core conditions, and complex reverse logistics systems. As such, many of the influencing factors in remanufacturing are context-specific and underrepresented in the reviewed literature. This highlights the importance of conducting expert interviews to capture practice-oriented knowledge that complements and extends the literature findings.
Nonetheless, despite these differences in influencing factors, the underlying structure of machine learning models identified in the literature remains applicable. These model types provide a transferable foundation that can be adapted to the specific characteristics and requirements of remanufacturing use cases.

4.2. Applicability of Machine Learning Models in Remanufacturing

From a methodological perspective, the literature review showed that a broad spectrum of ML models has been applied in both forecasting domains, ranging from classical statistical approaches to advanced neural networks. For remanufacturing, where data availability can be sparse or inconsistent, traditional models like regression, SVMs, and RFs are particularly promising due to their robustness and interpretability.
In contrast, deep learning models such as LSTM and ANN offer advantages in capturing nonlinear dependencies and sequential behavior, which are beneficial in modeling degradation patterns or time-series returns but require extensive and high-quality datasets that are often unavailable in remanufacturing.
An important insight from the review is the alignment between model type and prediction target. For return quantity and timing, time-series forecasting methods and regression models dominate, while condition forecasting employs regression or classification techniques depending on whether the outcome is continuous or categorical. This flexible approach is directly transferable to remanufacturing, where cores may be assessed on graded scales or binary condition outcomes (e.g., non-scrap vs. scrap).
However, it is important to acknowledge the limitations inherent in transferring findings from adjacent domains, such as spare parts demand forecasting and quality or fault prediction in manufacturing. While these fields share structural similarities with remanufacturing, such as irregular demand patterns, failure prediction, and data-driven decision-making, they typically operate in more controlled, linear supply chains with better data availability and standardization. Remanufacturing, by contrast, involves greater uncertainty due to variable product usage histories, inconsistent core condition, and more complex reverse logistics. These differences may affect model performance and generalizability. Therefore, while cross-domain insights provide valuable starting points, their application in remanufacturing requires careful contextual adaptation and further empirical validation.

4.3. Challenges and Research Gaps

Despite methodological advancements, several gaps remain. First, there is a lack of direct application and validation of ML models within actual remanufacturing systems. Most current research is derived from adjacent domains like spare parts logistics and manufacturing, limiting contextual relevance. Second, data heterogeneity, including sensor types, process structures, and quality indicators, poses a significant challenge for model generalization. Third, behavioral and contextual data, repeatedly cited by experts as critical, are underrepresented in existing datasets.
To address these limitations, future research should prioritize the development of remanufacturing-specific datasets, including integrated behavioral, operational, and environmental variables. Moreover, applying advanced learning techniques such as multi-task learning, transfer learning, and hybrid modeling offers a promising approach for capturing the dynamics of remanufacturing. Multi-task learning, for instance, enables simultaneous prediction of multiple return characteristics, e.g., timing, quantity, and condition, by leveraging shared features across tasks to improve overall performance. Transfer learning allows knowledge gained from related domains or pre-trained models to be applied to remanufacturing scenarios, which is particularly beneficial in cases of limited or imbalanced data. Hybrid modeling combines data-driven machine learning with domain knowledge, such as physical laws or expert-defined rules, thus improving model robustness and interpretability.
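A minimal sketch of the multi-task idea is given below: a shared encoder feeds three task-specific heads for quantity, timing, and condition, trained on a joint loss. The architecture and placeholder data are illustrative assumptions, not a validated design.

```python
import torch
import torch.nn as nn

class MultiTaskReturnModel(nn.Module):
    """Shared encoder with three task-specific heads: regression heads for
    return quantity and return delay (timing), and a classification head for
    the condition class of a core."""
    def __init__(self, n_features=10, hidden=64, n_condition_classes=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.quantity_head = nn.Linear(hidden, 1)
        self.timing_head = nn.Linear(hidden, 1)
        self.condition_head = nn.Linear(hidden, n_condition_classes)

    def forward(self, x):
        h = self.shared(x)
        return self.quantity_head(h), self.timing_head(h), self.condition_head(h)

model = MultiTaskReturnModel()
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.randn(32, 10)                                 # placeholder feature batch
y_qty, y_time = torch.randn(32, 1), torch.randn(32, 1)  # placeholder targets
y_cond = torch.randint(0, 3, (32,))
q, t, c = model(x)
loss = mse(q, y_qty) + mse(t, y_time) + ce(c, y_cond)   # joint loss across the three tasks
loss.backward()
```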

5. Conclusions

This work investigates how machine learning can support forecasting return quantity, timing, and condition of used parts in remanufacturing. A mixed-method approach was employed, combining expert interviews with two systematic literature reviews. The expert interviews provided practical insights into the real-world factors affecting product returns. At the same time, the literature reviews explored existing machine learning methods used in related fields—specifically, spare parts logistics and manufacturing quality prediction.
The expert interviews revealed that a wide range of interconnected factors influence the predictability of returns. These include product and material properties, human behavior, environmental conditions, and the structure of the take-back system. The results were synthesized into an adapted Ishikawa diagram to visualize these multi-causal relationships and underline their relevance for forecasting models.
The literature reviews complemented these findings by analyzing 125 publications—72 related to return quantity and timing, and 53 to condition forecasting. While only a few studies explicitly addressed remanufacturing, many contributions from adjacent fields could be transferred. Data sources used for model training were primarily real-world datasets, but simulations and benchmark data also played a significant role, especially where access to proprietary data was limited.
Across both forecasting domains, a wide array of machine learning models was identified. Traditional models such as Regression, SVMs, and RFs were commonly used due to their robustness and interpretability. More complex models like ANNs and LSTMs were favored in data-rich environments, particularly when capturing nonlinear or temporal patterns. In the condition prediction literature, a distinction was made between classification and regression approaches, depending on whether the quality outcome was discrete or continuous.
This work contributes to the growing body of research in remanufacturing by systematically identifying applicable ML models and data requirements and proposing a structured framework for understanding the factors influencing product returns. However, while the work outlines suitable influencing factors and ML models, real-world application in remanufacturing is still challenged by the limited availability of historical data. Additionally, the review results have not yet been tested or benchmarked in industrial contexts. Without empirical validation in operational environments, the reliability and practicality of these findings regarding influencing factors and ML models remain uncertain. Future research should therefore focus on real-world implementation and the development of remanufacturing-specific datasets to strengthen practical applicability.

Author Contributions

Conceptualization, J.G.E. and J.K.; methodology, J.G.E.; results, J.G.E.; writing—original draft preparation, J.G.E., E.A. and R.W.; writing—review and editing, J.K. and F.D.; visualization, J.G.E.; project administration, J.G.E.; funding acquisition, J.G.E., J.K. and F.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Federal Ministry for Economic Affairs and Climate Action, grant number 19S23004C.

Institutional Review Board Statement

Ethical review and approval were waived for this study as the research did not involve human subjects, human material, or tissue. Expert interviews were conducted in full compliance with the General Data Protection Regulation (EU 2016/679) and were completely anonymized. All relevant ethical and legal standards—particularly those of the European Convention on Human Rights, the Council of Europe Convention for the Protection of Individuals regarding Automated Processing of Personal Data, and the GDPR—were strictly followed.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Data regarding the literature reviews are available upon request. Data regarding the expert interviews is not available upon request due to privacy restrictions.

Acknowledgments

The authors would like to thank the experts who participated in this study for their time and input, as well as the anonymous reviewers for their evaluation.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN	Artificial Neural Network
CR	Croston
DT	Decision Tree
ES	Exponential Smoothing
LSTM	Long Short-Term Memory
MA	Moving Average
ML	Machine Learning
R	Regression
RF	Random Forest
RQ	Research Question
SBA	Syntetos–Boylan Approximation
SVM	Support Vector Machine
XGB	Extreme Gradient Boosting

References

  1. European Commission. The Clean Industrial Deal: A Joint Roadmap for Competitiveness and Decarbonisation; European Commission: Brussels, Belgium, 2025. [Google Scholar]
  2. Kammerer, F.; Kappe, T. Nationale Kreislaufwirtschaftsstrategie; Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz: Berlin, Germany, 2024. [Google Scholar]
  3. BS 8887-2:2009; Design for Manufacture, Assembly, Disassembly and End-of-Life Processing (MADE). British Standards Institution: London, UK, 2009.
  4. Grosse Erdmann, J.; Mahr, A.; Derr, P.; Walczak, P.; Koller, J. Comparative Life Cycle Assessment of Conventionally Manufactured and Additive Remanufactured Electric Bicycle Motors. In Proceedings of the Conference on Production Systems and Logistics: CPSL 2023-2; Herberger, D., Hübner, M., Eds.; Publish-Ing: Hannover, Germany, 2023. [Google Scholar]
  5. Köhler, D.C.F. Regenerative Supply Chains: Regenerative Wertschöpfungsketten. Ph.D. Thesis, Universität Bayreuth, Bayreuth, Germany, 2011. [Google Scholar]
  6. Steinhilper, R. Remanufacturing: The Ultimate form of Recycling; Fraunhofer-IRB-Verl: Stuttgart, Germany, 1998; ISBN 978-3816752165. [Google Scholar]
  7. Bobba, S.; Tecchio, P.; Ardente, F.; Mathieux, F.; dos Santos, F.M.; Pekar, F. Analysing the contribution of automotive remanufacturing to the circularity of materials. Procedia CIRP 2020, 90, 67–72. [Google Scholar] [CrossRef]
  8. Nasr, N.Z.; Russell, J.D. Re-Defining Value–The Manufacturing Revolution: Remanufacturing, Refurbishment, Repair and Direct Reuse in the Circular Economy; International Resource Panel: Nairobi, Kenya, 2018. [Google Scholar]
  9. Lange, U. Ressourceneffizienz durch Remanufacturing—Industrielle Aufarbeitung von Altteilen; VDI Zentrum Ressourceneffizienz GmbH: Berlin, Germany, 2017. [Google Scholar]
  10. Kurilova-Palisaitiene, J.; Sundin, E.; Poksinska, B. Remanufacturing challenges and possible lean improvements. J. Clean. Prod. 2018, 172, 3225–3236. [Google Scholar] [CrossRef]
  11. Guide, V.R. Production planning and control for remanufacturing: Industry practice and research needs. J. Oper. Manag. 2000, 18, 467–483. [Google Scholar] [CrossRef]
  12. de Brito, M.P.; van der Laan, E.A. Inventory control with product returns: The impact of imperfect information. Eur. J. Oper. Res. 2009, 194, 85–101. [Google Scholar] [CrossRef]
  13. Wei, S.; Tang, O.; Sundin, E. Core (product) Acquisition Management for remanufacturing: A review. J. Remanuf. 2015, 5, 4. [Google Scholar] [CrossRef]
  14. Maurer, I.; Dertouzos, J.; Goel, U.; Scarinci, E.; Tolstinev, D. Powering the Remanufacturing Renaissance with AI; McKinsey & Company: New York, NY, USA, 2025. [Google Scholar]
  15. Saraswati, D.; Sari, D.K.; Puspitasari, F.; Amalia, F. Forecasting product returns using artificial neural network for remanufacturing processes. AIP Conf. Proc. 2023, 2485, 110005. [Google Scholar] [CrossRef]
  16. Goh, T.N.; Varaprasad, N. A Statistical Methodology for the Analysis of the Life-Cycle of Reusable Containers. IIE Trans. 1986, 18, 42–47. [Google Scholar] [CrossRef]
  17. Kelle, P.; Silver, E.A. Forecasting the returns of reusable containers. J. Oper. Manag. 1989, 8, 17–35. [Google Scholar] [CrossRef]
  18. Marx-Gómez, J.; Rautenstrauch, C.; Nürnberger, A.; Kruse, R. Neuro-fuzzy approach to forecast returns of scrapped products to recycling and remanufacturing. Knowl.-Based Syst. 2002, 15, 119–128. [Google Scholar] [CrossRef]
  19. Clottey, T.; Benton, W.C.; Srivastava, R. Forecasting Product Returns for Remanufacturing Operations. Decis. Sci. 2012, 43, 589–614. [Google Scholar] [CrossRef]
  20. Geda, M.; Kwong, C.K. An MCMC based Bayesian inference approach to parameter estimation of distributed lag models for forecasting used product returns for remanufacturing. J. Remanuf. 2021, 11, 175–194. [Google Scholar] [CrossRef]
  21. Matsumoto, M.; Komatsu, S. Demand forecasting for production planning in remanufacturing. Int. J. Adv. Manuf. Technol. 2015, 79, 161–175. [Google Scholar] [CrossRef]
  22. Ponte, B.; Cannella, S.; Dominguez, R.; Naim, M.M.; Syntetos, A.A. Quality grading of returns and the dynamics of remanufacturing. Int. J. Prod. Econ. 2021, 236, 108129. [Google Scholar] [CrossRef]
  23. Stamer, F.; Sauer, J. Optimizing quality and cost in remanufacturing under uncertainty. Prod. Eng. Res. Devel. 2024, 19, 369–390. [Google Scholar] [CrossRef]
  24. Liang, X.; Jin, X.; Ni, J. Forecasting product returns for remanufacturing systems. J. Remanuf. 2014, 4, 8. [Google Scholar] [CrossRef]
  25. Huster, S.; Rosenberg, S.; Glöser-Chahoud, S.; Schultmann, F. Remanufacturing capacity planning in new markets—Effects of different forecasting assumptions on remanufacturing capacity planning for electric vehicle batteries. J. Remanuf. 2023, 13, 283–304. [Google Scholar] [CrossRef]
  26. Brink, A. Anfertigung Wissenschaftlicher Arbeiten: Ein Prozessorientierter Leitfaden zur Erstellung von Bachelor-, Master- und Diplomarbeiten; Aktualisierte und erweiterte Auflage; Springer Gabler: Wiesbaden, Germany, 2013; ISBN 978-3-658-02510-6. [Google Scholar]
  27. Gläser, J.; Laudel, G. Experteninterviews und Qualitative Inhaltsanalyse als Instrumente Rekonstruierender Untersuchungen; VS Verlag für Sozialwissenschaften: Wiesbaden, Germany, 2009; ISBN 978-3-531-15684-2. [Google Scholar]
  28. Brinkmann, S.; Kvale, S. InterViews: Learning the Craft of Qualitative Research Interviewing, 3rd ed.; SAGE: Los Angeles, CA, USA; London, UK; New Delhi, India; Singapore; Washington, DC, USA, 2015; ISBN 978-1-4522-7572-7. [Google Scholar]
  29. Bogner, A.; Menz, W. The Theory-Generating Expert Interview: Epistemological Interest, Forms of Knowledge, Interaction. In Interviewing Experts; Bogner, A., Littig, B., Menz, W., Kittel, B., Eds.; Palgrave Macmillan: Houndmills, UK, 2009; pp. 43–80. ISBN 978-0-230-22019-5. [Google Scholar]
  30. Bandara, W.; Furtmueller, E.; Gorbacheva, E.; Miskon, S.; Beekhuyzen, J. Achieving Rigor in Literature Reviews: Insights from Qualitative Data Analysis and Tool-Support. CAIS 2015, 37, 8. [Google Scholar] [CrossRef]
  31. Levy, Y.; Ellis, T.J. A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research. Informing Sci. Int. J. Emerg. Transdiscipl. 2006, 9, 181–212. [Google Scholar] [CrossRef]
  32. Li, J.; Wu, Z. Remanufacturing Processes, Planning and Control. In New Frontiers of Multidisciplinary Research in STEAM-H (Science, Technology, Engineering, Agriculture, Mathematics, and Health); Toni, B., Ed.; Springer International Publishing: Cham, Switzerland, 2014; pp. 329–356. ISBN 978-3-319-07754-3. [Google Scholar]
  33. vom Brocke, J.; Simons, A.; Niehaves, B.; Riemer, K.; Plattfaut, R.; Cleven, A. Reconstructing the Giant: On the Importance of Rigour in Documenting the Literature Search Process. In Proceedings of the 17th European Conference on Information Systems (ECIS 2009), Verona, Italy, 8–10 June 2009. [Google Scholar]
  34. Kraus, S.; Breier, M.; Dasí-Rodríguez, S. The art of crafting a systematic literature review in entrepreneurship research. Int. Entrep. Manag. J. 2020, 16, 1023–1042. [Google Scholar] [CrossRef]
  35. Brendel, A.B.; Marrone, M.; Trang, S.T.N.; Lichtenberg, S.; Kolbe, L.M. What to do for a Literature Review?—A Synthesis of Literature Review Practices. In Proceedings of the 26th Americas Conference on Information Systems (AMCIS 2020), Salt Lake City, UT, USA, 10–14 August 2020; pp. 1–11, ISBN 978-1-7336325-4-6. [Google Scholar]
  36. Koller, J.; Häfner, R.; Döpper, F. Decentralized Spare Parts Production for the Aftermarket using Additive Manufacturing—A Literature Review. Procedia CIRP 2022, 107, 894–901. [Google Scholar] [CrossRef]
  37. Banker, R.D.; Kauffman, R.J. 50th Anniversary Article: The Evolution of Research on Information Systems: A Fiftieth-Year Survey of the Literature in Management Science. Manag. Sci. 2004, 50, 281–298. [Google Scholar] [CrossRef]
  38. Booth, A.; Sutton, A.; Papaioannou, D. Systematic Approaches to a Successful Literature Review, 2nd ed.; SAGE: Los Angeles, CA, USA; London, UK; New Delhi, India; Singapore; Washington, DC, USA; Melbourne, Australia, 2016; ISBN 978-1-4739-1245-8. [Google Scholar]
  39. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [PubMed]
  40. Kamiske, G.F.; Brauer, J.-P. Qualitätsmanagement von A bis Z: Wichtige Begriffe des Qualitätsmanagements und ihre Bedeutung; Hanser: München, Germany; Wien, Austria, 2011; ISBN 978-3-446-42581-1. [Google Scholar]
  41. Schmitt, R.; Pfeifer, T. Qualitätsmanagement: Strategien-Methoden-Techniken; Hanser: München, Germany, 2015; ISBN 978-3-446-43432-5. [Google Scholar]
  42. Ma, J.; Kim, H.M. Predictive Model Selection for Forecasting Product Returns. J. Mech. Des. Trans. ASME 2016, 138, 054501. [Google Scholar] [CrossRef]
  43. Cui, H.; Rajagopalan, S.; Ward, A.R. Predicting product return volume using machine learning methods. Eur. J. Oper. Res. 2020, 281, 612–627. [Google Scholar] [CrossRef]
  44. Östlin, J.; Sundin, E.; Björkman, M. Importance of closed-loop supply chain relationships for product remanufacturing. Int. J. Prod. Econ. 2008, 115, 336–348. [Google Scholar] [CrossRef]
  45. Sundin, E.; Sakao, T.; Lind, S.; Kao, C.-C.; Joungerious, B. Map of Remanufacturing Business Model Landscape; European Remanufacturing Network: Aylesbury, UK, 2016. [Google Scholar]
  46. Agrawal, S.; Singh, R.K. Forecasting product returns and reverse logistics performance: Structural equation modelling. MEQ 2020, 31, 1223–1237. [Google Scholar] [CrossRef]
  47. Jung, G.; Park, J.; Kim, Y.; Kim, Y.B. A modified bootstrap method for intermittent demand forecasting for rare spare parts. Int. J. Ind. Eng. Theory Appl. Pract. 2017, 24, 245–254. [Google Scholar]
  48. Tsiliyannis, C.A. Markov chain modeling and forecasting of product returns in remanufacturing based on stock mean-age. Eur. J. Oper. Res. 2018, 271, 474–489. [Google Scholar] [CrossRef]
  49. Geda, M.W.; Kwong, C.K. Forecasting of Used Product Returns for Remanufacturing. In Proceedings of the 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bangkok, Thailand, 16–19 December 2018; pp. 889–893. [Google Scholar]
  50. Zhou, L.; Xie, J.; Gu, X.; Lin, Y.; Ieromonachou, P.; Zhang, X. Forecasting return of used products for remanufacturing using Graphical Evaluation and Review Technique (GERT). Int. J. Prod. Econ. 2016, 181, 315–324. [Google Scholar] [CrossRef]
  51. Zhu, S.; Dekker, R.; van Jaarsveld, W.; Renjie, R.W.; Koning, A.J. An improved method for forecasting spare parts demand using extreme value theory. Eur. J. Oper. Res. 2017, 261, 169–181. [Google Scholar] [CrossRef]
  52. van der Auweraer, S.; Boute, R. Forecasting spare part demand using service maintenance information. Int. J. Prod. Econ. 2019, 213, 138–149. [Google Scholar] [CrossRef]
  53. Kim, J.-D.; Kim, T.-H.; Han, S.W. Demand Forecasting of Spare Parts Using Artificial Intelligence: A Case Study of K-X Tanks. Mathematics 2023, 11, 501. [Google Scholar] [CrossRef]
  54. Choi, B.; Suh, J.H. Forecasting spare parts demand of military aircraft: Comparisons of data mining techniques and managerial features from the case of South Korea. Sustainability 2020, 12, 6045. [Google Scholar] [CrossRef]
  55. Kim, J. Text Mining-based Approach for Forecasting Spare Parts Demand of K-X Tanks. In Proceedings of the 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bangkok, Thailand, 16–19 December 2018; pp. 1652–1656. [Google Scholar]
  56. Chandriah, K.K.; Naraganahalli, R.V. RNN/LSTM with modified Adam optimizer in deep learning approach for automobile spare parts demand forecasting. Multimed. Tools Appl. 2021, 80, 26145–26159. [Google Scholar] [CrossRef]
  57. Pawar, N.; Tiple, B. Analysis on Machine Learning Algorithms and Neural Networks for Demand Forecasting of Anti-Aircraft Missile Spare Parts. In Proceedings of the 2019 International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 17–19 July 2019; pp. 854–859. [Google Scholar]
  58. Chien, C.-F.; Ku, C.-C.; Lu, Y.-Y. Ensemble learning for demand forecast of After-Market spare parts to empower data-driven value chain and an empirical study. Comput. Ind. Eng. 2023, 185, 109670. [Google Scholar] [CrossRef]
  59. Guo, F.; Diao, J.; Zhao, Q.; Wang, D.; Sun, Q. A double-level combination approach for demand forecasting of repairable airplane spare parts based on turnover data. Comput. Ind. Eng. 2017, 110, 92–108. [Google Scholar] [CrossRef]
  60. do Rego, J.R.; De Mesquita, M.A. Demand forecasting and inventory control: A simulation study on automotive spare parts. Int. J. Prod. Econ. 2015, 161, 1–16. [Google Scholar] [CrossRef]
  61. Dombi, J.; Jónás, T.; Tóth, Z.E. Modeling and long-term forecasting demand in spare parts logistics businesses. Int. J. Prod. Econ. 2018, 201, 1–17. [Google Scholar] [CrossRef]
  62. Amirkolaii, K.N.; Baboli, A.; Shahzad, M.K.; Tonadre, R. Demand Forecasting for Irregular Demands in Business Aircraft Spare Parts Supply Chains by using Artificial Intelligence (AI). IFAC-PapersOnLine 2017, 50, 15221–15226. [Google Scholar] [CrossRef]
  63. Baisariyev, M.; Bakytzhanuly, A.; Serik, Y.; Mukhanova, B.; Babai, M.Z.; Tsakalerou, M.; Papadopoulos, C.T. Demand forecasting methods for spare parts logistics for aviation: A real-world implementation of the Bootstrap method. Procedia Manuf. 2021, 55, 500–506. [Google Scholar] [CrossRef]
  64. İfraz, M.; Aktepe, A.; Ersöz, S.; Çetinyokuş, T. Demand forecasting of spare parts with regression and machine learning methods: Application in a bus fleet. J. Eng. Res. 2023, 11, 100057. [Google Scholar] [CrossRef]
  65. Mobarakeh, N.A.; Shahzad, M.K.; Baboli, A.; Tonadre, R. Improved Forecasts for uncertain and unpredictable Spare Parts Demand in Business Aircraft’s with Bootstrap Method. IFAC-PapersOnLine 2017, 50, 15241–15246. [Google Scholar] [CrossRef]
  66. AlAlaween, W.H.; Abueed, O.A.; AlAlawin, A.H.; Abdallah, O.H.; Albashabsheh, N.T.; AbdelAll, E.S.; Al-Abdallat, Y.A. Artificial neural networks for predicting the demand and price of the hybrid electric vehicle spare parts. Cogent Eng. 2022, 9, 2075075. [Google Scholar] [CrossRef]
  67. Armenzoni, M.; Montanari, R.; Vignali, G.; Bottani, E.; Ferretti, G.; Solari, F.; Rinaldi, M. An integrated approach for demand forecasting and inventory management optimisation of spare parts. Int. J. Simul. Process Model. 2015, 10, 223–240. [Google Scholar] [CrossRef]
  68. Babaveisi, V.; Teimoury, E.; Gholamian, M.R.; Rostami-Tabar, B. Integrated demand forecasting and planning model for repairable spare part: An empirical investigation. Int. J. Prod. Res. 2023, 61, 6791–6807. [Google Scholar] [CrossRef]
  69. Boukhtouta, A.; Jentsch, P. Support Vector Machine for Demand Forecasting of Canadian Armed Forces Spare Parts. In Proceedings of the 2018 6th International Symposium on Computational and Business Intelligence (ISCBI), Basel, Switzerland, 27–29 August 2018; pp. 59–64. [Google Scholar]
  70. Dali, H.; Chengcheng, L. Demand forecast of equipment spare parts based on EEMD-LSTM. In Proceedings of the 2021 6th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Oita, Japan, 25–27 November 2021; pp. 230–234. [Google Scholar]
  71. Han, Y.; Wang, L.; Gao, J.; Xing, Z.; Tao, T. Combination forecasting based on SVM and neural network for urban rail vehicle spare parts demand. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 4660–4665. [Google Scholar]
  72. Jónás, T.; Tóth, Z.E.; Dombi, J. A knowledge discovery based approach to long-term forecasting of demand for electronic spare parts. In Proceedings of the 2015 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 19–21 November 2015; pp. 291–296. [Google Scholar]
  73. Lee, H.; Kim, J. A Predictive Model for Forecasting Spare Parts Demand in Military Logistics. In Proceedings of the 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bangkok, Thailand, 16–19 December 2018; pp. 1106–1110. [Google Scholar]
  74. Liu, Y.; Zhang, Q.; Fan, Z.-P.; You, T.-H.; Wang, L.-X. Maintenance Spare Parts Demand Forecasting for Automobile 4S Shop Considering Weather Data. IEEE Trans. Fuzzy Syst. 2019, 27, 943–955. [Google Scholar] [CrossRef]
  75. Ma, Z.; Wang, C.; Zhang, Z. Deep Learning Algorithms for Automotive Spare Parts Demand Forecasting. In Proceedings of the 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI), Kunming, China, 17–19 September 2021; pp. 358–361. [Google Scholar]
  76. de Melo Menezes, B.A.; de Siqueira Braga, D.; Hellingrath, B.; de Lima Neto, F.B. An evaluation of forecasting methods for anticipating spare parts demand. In Proceedings of the 2015 Latin America Congress on Computational Intelligence (LA-CCI), Curitiba, Brazil, 13–16 October 2015; pp. 1–6. [Google Scholar]
  77. Niu, P.; Wang, Z.; Lei, Y.; Wan, J. Demand Forecast of Spare Parts of Air Materials Based on Wavelet Analysis and GM(1,1)-AR(p) Model. In Proceedings of the 2020 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA), Tianjin, China, 26–28 June 2020; pp. 665–669. [Google Scholar]
  78. Pawar, N.; Tiple, B. Demand Forecasting of Anti-Aircraft Missile Spare Parts Using Neural Network. In Proceedings of the 2019 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 12–14 June 2019; pp. 572–578. [Google Scholar]
  79. Qiu, Q.; Qin, C.; Shi, J.; Zhou, H. Research on Demand Forecast of Aircraft Spare Parts Based on Fractional Order Discrete Grey Model. In Proceedings of the 2019 IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, 6–9 December 2019; pp. 2212–2216. [Google Scholar]
  80. Wang, H.; Liu, H.; Shao, S.; Zhang, Z. Demand Forecasting and Impact Analysis of Spare Parts Based on Large Components of the Warship. In Proceedings of the 2023 5th International Conference on System Reliability and Safety Engineering (SRSE), Beijing, China, 20–23 October 2023; pp. 38–43. [Google Scholar]
  81. Wang, Z.; Wen, J.; Hua, D. Research on distribution network spare parts demand forecasting and inventory quota. In Proceedings of the 2014 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), Hong Kong, China, 7–10 December 2014; pp. 1–6. [Google Scholar]
  82. Wu, X.; Bian, W. Demand analysis and forecast for spare parts of perishable hi-tech products. In Proceedings of the 2015 International Conference on Logistics, Informatics and Service Sciences (LISS), Barcelona, Spain, 27–29 July 2015; pp. 1–6. [Google Scholar]
  83. Xing, R.; Shi, X. A BP-SVM combined model for intermittent spare parts demand prediction. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1085–1090. [Google Scholar]
  84. Yang, C.; Xu, Q.; Qin, H.; Xuan, K. Grey Forecasting Method of Equipment Spare Parts Demand Based on Swarm Intelligence Optimization. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; pp. 1–4. [Google Scholar]
  85. Cao, Y.; Li, Y. A two-stage approach of forecasting spare parts demand using particle swarm optimization and fuzzy neural network. J. Comput. Inf. Syst. 2014, 10, 6785–6793. [Google Scholar]
  86. Carmo, T.; Cruz, M.; Santos, J.; Ramos, S.; Barroso, S.; Araújo, P. Statistical and Machine Learning Methods for Automotive Spare Parts Demand Prediction. Math. Ind. 2022, 39, 471–476. [Google Scholar] [CrossRef]
  87. Caserta, M.; D’Angelo, L. Intermittent demand forecasting for spare parts with little historical information. J. Oper. Res. Soc. 2024, 76, 294–309. [Google Scholar] [CrossRef]
  88. Ding, J.; Liu, Y.; Cao, Y.; Zhang, L.; Wang, J. Spare part demand prediction based on context-aware matrix factorization. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2015; Volume 9313, pp. 304–315. [Google Scholar] [CrossRef]
  89. Fan, L.; Liu, X.; Mao, W.; Yang, K.; Song, Z. Spare Parts Demand Forecasting Method Based on Intermittent Feature Adaptation. Entropy 2023, 25, 764. [Google Scholar] [CrossRef]
  90. Guimaraes, C.B.; Marques, J.M.; Tortato, U. Demand forecasting for high-turnover spare parts in agricultural and construction machines: A case study. S. Afr. J. Ind. Eng. 2020, 31, 116–128. [Google Scholar] [CrossRef]
  91. Hong, K.; Ren, Y.; Li, F.; Mao, W.; Gao, X. Robust Interval Prediction of Intermittent Demand for Spare Parts Based on Tensor Optimization. Sensors 2023, 23, 7182. [Google Scholar] [CrossRef] [PubMed]
  92. Hu, Q.; Bai, Y.; Zhao, J.; Cao, W. Modeling spare parts demands forecast under two-dimensional preventive maintenance policy. Math. Probl. Eng. 2015, 2015, 728241. [Google Scholar] [CrossRef]
  93. Hu, Y.G.; Sun, S.; Wen, J.Q. Agricultural machinery spare parts demand forecast based on BP neural network. Appl. Mech. Mater. 2014, 635–637, 1822–1825. [Google Scholar] [CrossRef]
  94. Huang, G.; Yang, Y.; Li, W.; Cao, X.; Yang, Z. A convolutional neural network- back propagation based three-layer combined forecasting method for spare part demand. RAIRO-Oper. Res. 2024, 58, 4181–4195. [Google Scholar] [CrossRef]
  95. Innuphat, S.; Toahchoodee, M. The Implementation of Discrete-Event Simulation and Demand Forecasting Using Temporal Fusion Transformers to Validate Spare Parts Inventory Policy for The Petrochemicals Industry. ECTI Trans. Comput. Inf. Technol. 2022, 16, 247–258. [Google Scholar] [CrossRef]
  96. Jiang, P.; Huang, Y.; Liu, X. Intermittent demand forecasting for spare parts in the heavy-duty vehicle industry: A support vector machine model. Int. J. Prod. Res. 2021, 59, 7423–7440. [Google Scholar] [CrossRef]
  97. Kačmáry, P.; Malindžák, D.; Spišák, J. The Design of Forecasting System Used for Prediction of Electro-Motion Spare Parts Demands as an Improving Tool for an Enterprise Management. Manag. Syst. Prod. Eng. 2019, 27, 242–249. [Google Scholar] [CrossRef]
  98. Kim, J.-D.; Hwang, J.-H.; Doh, H.-H. A Predictive Model with Data Scaling Methodologies for Forecasting Spare Parts Demand in Military Logistics. Def. Sci. J. 2023, 73, 666–674. [Google Scholar] [CrossRef]
  99. Li, Z.; Zhang, Y.; Yan, X.; Peng, Z. A novel prediction model for aircraft spare part intermittent demand in aviation transportation logistics using multi-components accumulation and high resolution analysis. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2015, 229, 384–395. [Google Scholar] [CrossRef]
  100. Liu, M. Equipment Spare Parts Demand Forecasting and Ordering Decision Based on Holt and MPG. Front. Artif. Intell. Appl. 2023, 373, 306–312. [Google Scholar] [CrossRef]
  101. Lucht, T.; Alieksieiev, V.; Kämpfer, T.; Nyhuis, P. Spare Parts Demand Forecasting in Maintenance, Repair & Overhaul. In Proceedings of the Conference on Production Systems and Logistics; Publish-Ing: Hannover, Germany, 2022. [Google Scholar] [CrossRef]
102. Ma, J.; Kim, H.M. Predictive modeling of product returns for remanufacturing. In Proceedings of the ASME Design Engineering Technical Conference; American Society of Mechanical Engineers: New York, NY, USA, 2015; Volume 2A-2015. [Google Scholar] [CrossRef]
  103. Mao, H.L.; Gao, J.W.; Chen, X.J.; Gao, J.D. Demand prediction of the rarely used spare parts based on the BP neural network. Appl. Mech. Mater. 2014, 519–520, 1511–1517. [Google Scholar] [CrossRef]
  104. Özbay, E.; Hacialioğlu, B.; Dokuyucu, B.İ.; Şahin, H.; Saçlı, M.M.; Genç, M.N.; Staiou, E.; Paldrak, M. Developing a Spare Parts Demand Forecasting System. In Proceedings of the International Symposium for Production Research 2019; Lecture Notes in Mechanical Engineering. Springer: Cham, Switzerland, 2020; pp. 676–691. [Google Scholar] [CrossRef]
  105. Qiu, C.; Zhao, B.; Liu, S.; Zhang, W.; Zhou, L.; Li, Y.; Guo, R. Data Classification and Demand Prediction Methods Based on Semi-Supervised Agricultural Machinery Spare Parts Data. Agriculture 2023, 13, 49. [Google Scholar] [CrossRef]
  106. Ren, X.; Zhang, X.-F. Spare Parts Demand Forecasting based on ARMA Model. In Proceedings of the SPIE-The International Society for Optical Engineering, Xi’an, China, 16–18 September 2022; Volume 12462. [Google Scholar] [CrossRef]
  107. Rosienkiewicz, M. Accuracy Assessment of Artificial Intelligence-Based Hybrid Models for Spare Parts Demand Forecasting in Mining Industry. Adv. Intell. Syst. Comput. 2020, 1052, 176–187. [Google Scholar] [CrossRef]
  108. Sun, Y.; Yan, X.; Wang, Z. Demand Forecast of Aviation Spare Parts Based on Grey Model. In Proceedings of the SPIE-The International Society for Optical Engineering, Kaifeng, China, 26–28 May 2023; Volume 12784. [Google Scholar] [CrossRef]
  109. Tsao, Y.-C.; Kurniati, N.; Pujawan, I.N.; Yaqin, A.M.A. Spare parts demand forecasting in energy industry: A stacked generalization-based approach. In Proceedings of the 2019 International Conference on Management Science and Industrial Engineering, Phuket, Thailand, 24–26 May 2019. [Google Scholar] [CrossRef]
  110. Tsao, Y.-C.; Yaqin, A.; Lu, J.-C.; Kurniati, N.; Pujawan, N. Intelligent Demand Forecasting Approaches for Spare Parts in the Energy Industry. Int. J. Ind. Eng. Theory Appl. Pract. 2024, 31, 560–576. [Google Scholar] [CrossRef]
  111. Vaitkus, V.; Zylius, G.; Maskeliunas, R. Electrical spare parts demand forecasting. Elektron. Elektrotechnika 2014, 20, 7–10. [Google Scholar] [CrossRef]
  112. Vasumathi, B.; Saradha, A. Enhancement of intermittent demands in forecasting for spare parts industry. Indian J. Sci. Technol. 2015, 8, 1–8. [Google Scholar] [CrossRef]
  113. Xu, H.; Zhao, W.; Lin, S.; Niu, J.; Li, P. A Demand Forecast Method for Expressway Spare Parts Based on Analysis of Influencing Factors. In Proceedings of the CICTP 2020: Transportation Evolution Impacting Future Mobility-Selected Papers from the 20th COTA International Conference of Transportation Professionals, Xi’an, China, 14–16 August 2020. [Google Scholar]
  114. Yang, Y.; Liu, W.; Zeng, T.; Guo, L.; Qin, Y.; Wang, X. An Improved Stacking Model for Equipment Spare Parts Demand Forecasting Based on Scenario Analysis. Sci. Program. 2022, 2022, 5415702. [Google Scholar] [CrossRef]
  115. Zhu, Q.; Yang, L.; Liu, Y. Research on vehicle spare parts demand forecast based on XGBoost-LightGBM. In Proceedings of the 2023 5th International Conference on Pattern Recognition and Intelligent Systems, Shenyang, China, 28–30 July 2023; pp. 109–114. [Google Scholar] [CrossRef]
  116. Alenezi, D.F.; Biehler, M.; Shi, J.; Li, J. Physics-Informed Weakly-Supervised Learning for Quality Prediction of Manufacturing Processes. IEEE Trans. Autom. Sci. Eng. 2024, 22, 2006–2019. [Google Scholar] [CrossRef]
  117. Wang, Y.; Hu, W.; Zhang, H.; He, Y. Causal Graph Attention Networks for Quality Prediction of Mass Customization Production Process. In Proceedings of the 2024 IEEE 13th Data Driven Control and Learning Systems Conference (DDCLS), Kaifeng, China, 17–19 May 2024; pp. 559–564. [Google Scholar]
  118. Zhou, H.; Yu, K.-M.; Chen, Y.-C.; Hsu, H.-P. A Hybrid Feature Selection Method RFSTL for Manufacturing Quality Prediction Based on a High Dimensional Imbalanced Dataset. IEEE Access 2021, 9, 29719–29735. [Google Scholar] [CrossRef]
  119. Kao, H.-A.; Hsieh, Y.-S.; Chen, C.-H.; Lee, J. Quality prediction modeling for multistage manufacturing based on classification and association rule mining. MATEC Web Conf. 2017, 123, 00029. [Google Scholar] [CrossRef]
  120. Psarommatis, F.; Zhou, B.; Kharlamov, E. Implementation of Zero Defect Manufacturing using quality prediction: A spot welding case study from Bosch. Procedia Comput. Sci. 2024, 232, 1299–1308. [Google Scholar] [CrossRef]
  121. Bai, Y.; Xie, J.; Wang, D.; Zhang, W.; Li, C. A manufacturing quality prediction model based on AdaBoost-LSTM with rough knowledge. Comput. Ind. Eng. 2021, 155, 107227. [Google Scholar] [CrossRef]
  122. Caihong, Z.; Zengyuan, W.; Chang, L. A Study on Quality Prediction for Smart Manufacturing Based on the Optimized BP-AdaBoost Model. In Proceedings of the 2019 IEEE International Conference on Smart Manufacturing, Industrial & Logistics Engineering (SMILE), Hangzhou, China, 20–21 April 2019; pp. 1–3. [Google Scholar]
  123. Liu, D.; Hu, S.; Zhao, X.; Qiu, Q.; Jiang, Y.; Fan, P. Multi-condition Quality Prediction of Production Process Based on Hybrid Transfer Learning. In Proceedings of the 2024 6th International Conference on System Reliability and Safety Engineering (SRSE), Hangzhou, China, 11–14 October 2024; pp. 353–358. [Google Scholar]
  124. Peng, C.; Cheng, Z.; Ren, H.; Lu, R. A Quality Prediction Hybrid Model of Manufacturing Process Based on Genetic Programming. In Proceedings of the 2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS), Chengdu, China, 3–5 August 2022; pp. 77–81. [Google Scholar]
  125. Zhang, D.; Liu, Z.; Jia, W.; Liu, H.; Tan, J. Path Enhanced Bidirectional Graph Attention Network for Quality Prediction in Multistage Manufacturing Process. IEEE Trans. Ind. Inform. 2022, 18, 1018–1027. [Google Scholar] [CrossRef]
  126. Bai, Y.; Sun, Z.; Deng, J.; Li, L.; Long, J.; Li, C. Manufacturing quality prediction using intelligent learning approaches: A comparative study. Sustainability 2018, 10, 85. [Google Scholar] [CrossRef]
  127. Bai, Y.; Sun, Z.; Zeng, B.; Long, J.; Li, L.; de Oliveira, J.V.; Li, C. A comparison of dimension reduction techniques for support vector machine modeling of multi-parameter manufacturing quality prediction. J. Intell. Manuf. 2019, 30, 2245–2256. [Google Scholar] [CrossRef]
  128. Demirel, K.C.; Şahin, A.; Albey, E. A web-based decision support system for quality prediction in manufacturing using ensemble of regressor chains. Commun. Comput. Inf. Sci. 2020, 1255 CCIS, 96–114. [Google Scholar] [CrossRef]
  129. Deng, J.; Bai, Y.; Li, C. A deep regression model with low-dimensional feature extraction for multi-parameter manufacturing quality prediction. Appl. Sci. 2020, 10, 2522. [Google Scholar] [CrossRef]
  130. Chien, C.-H.; Trappey, A.J.; Wang, C.-C. ARIMA-AdaBoost hybrid approach for product quality prediction in advanced transformer manufacturing. Adv. Eng. Inform. 2023, 57, 102055. [Google Scholar] [CrossRef]
  131. Wang, H.; Li, B.; Xuan, F.-Z. A dimensionally augmented and physics-informed machine learning for quality prediction of additively manufactured high-entropy alloy. J. Mater. Process. Technol. 2022, 307, 117637. [Google Scholar] [CrossRef]
  132. Shim, J.; Kang, S.; Cho, S. Active inspection for cost-effective fault prediction in manufacturing process. J. Process Control. 2021, 105, 250–258. [Google Scholar] [CrossRef]
  133. Zhang, J.; Wang, P.; Gao, R.X. Modeling of Layer-wise Additive Manufacturing for Part Quality Prediction. Procedia Manuf. 2018, 16, 155–162. [Google Scholar] [CrossRef]
  134. Wang, M.; Wang, J.; Gao, W.; Guo, M. E-YQP: A self-adaptive end-to-end framework for quality prediction in yarn spinning manufacturing. Adv. Eng. Inform. 2024, 62, 102623. [Google Scholar] [CrossRef]
  135. Huynh, N.-T. Multi-stage defect prediction and classification model to reduce the inspection time in semiconductor back end manufacturing process and an empirical application. Comput. Ind. Eng. 2024, 187, 109778. [Google Scholar] [CrossRef]
  136. Wang, P.; Qu, H.; Zhang, Q.; Xu, X.; Yang, S. Production quality prediction of multistage manufacturing systems using multi-task joint deep learning. J. Manuf. Syst. 2023, 70, 48–68. [Google Scholar] [CrossRef]
  137. Nikita, S.; Thakur, G.; Jesubalan, N.G.; Kulkarni, A.; Yezhuvath, V.B.; Rathore, A.S. AI-ML applications in bioprocessing: ML as an enabler of real time quality prediction in continuous manufacturing of mAbs. Comput. Chem. Eng. 2022, 164, 107896. [Google Scholar] [CrossRef]
  138. Schorr, S.; Möller, M.; Heib, J.; Fang, S.; Bähre, D. Quality Prediction of Reamed Bores Based on Process Data and Machine Learning Algorithm: A Contribution to a More Sustainable Manufacturing. Procedia Manuf. 2020, 43, 519–526. [Google Scholar] [CrossRef]
  139. Kobayashi, S.; Miyakawa, M.; Takemasa, S.; Takahashi, N.; Watanabe, Y.; Satoh, T.; Kano, M. Transfer Learning for Quality Prediction in a Chemical Toner Manufacturing Process. In 14th International Symposium on Process Systems Engineering; Yamashita, Y., Kano, M., Eds.; Elsevier: Amsterdam, The Netherlands, 2022; pp. 1663–1668. [Google Scholar]
  140. Sun, X.; Beghi, A.; Susto, G.A.; Lv, Z. Deep learning-based quality prediction for multi-stage sequential hot rolling processes in heavy rail manufacturing. Comput. Ind. Eng. 2024, 196, 110466. [Google Scholar] [CrossRef]
  141. Yang, X.; Li, Z.; Cao, L.; Chen, L.; Huang, Q.; Bi, G. Process optimization and quality prediction of laser aided additive manufacturing SS 420 based on RSM and WOA-Bi-LSTM. Mater. Today Commun. 2024, 38, 107882. [Google Scholar] [CrossRef]
  142. Al-Kharaz, M.; Ananou, B.; Ouladsine, M.; Combal, M.; Pinaton, J. Quality Prediction in Semiconductor Manufacturing processes Using Multilayer Perceptron Feedforward Artificial Neural Network. In Proceedings of the 2019 8th International Conference on Systems and Control (ICSC), Marrakesh, Morocco, 23–25 October 2019; pp. 423–428. [Google Scholar]
  143. Bai, Y.; Li, C.; Sun, Z.; Chen, H. Deep neural network for manufacturing quality prediction. In Proceedings of the 2017 Prognostics and System Health Management Conference (PHM-Harbin), Harbin, China, 9–12 July 2017; pp. 1–5. [Google Scholar]
  144. Forsberg, B.; Williams, H.; Macdonald, B.; Chen, T.; Hamzeh, R.; Hulse, K. Utilising Explainable Techniques for Quality Prediction in a Complex Textiles Manufacturing Use Case. In Proceedings of the 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), Bari, Italy, 28 August–1 September 2024; pp. 245–251. [Google Scholar]
  145. Jiang, J.-R.; Yen, C.-T. Markov Transition Field and Convolutional Long Short-Term Memory Neural Network for Manufacturing Quality Prediction. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 28–30 September 2020; pp. 1–2. [Google Scholar]
  146. Ju, L.; Zhou, J.; Zhang, X. Corundum production quality prediction based on support vector regression. In Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia, 18–20 June 2017; pp. 2028–2032. [Google Scholar]
  147. Lee, K.-T.; Lee, Y.-S.; Yoon, H. Development of Edge-based Deep Learning Prediction Model for Defect Prediction in Manufacturing Process. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 16–18 October 2019; pp. 248–250. [Google Scholar]
  148. Matzka, S. Using Process Quality Prediction to Increase Resource Efficiency in Manufacturing Processes. In Proceedings of the 2018 First International Conference on Artificial Intelligence for Industries (AI4I), Laguna Hills, CA, USA, 26–28 September 2018; pp. 110–111. [Google Scholar]
  149. Mohammadi, P.; Wang, Z.J. Machine learning for quality prediction in abrasion-resistant material manufacturing process. In Proceedings of the 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Vancouver, BC, Canada, 15–18 May 2016; pp. 1–4. [Google Scholar]
  150. Trappey, A.J.C.; Chien, C.-H. Intelligent Product Quality Prediction for Highly Customized Complex Production Adopting Ensemble Learning Model. In Proceedings of the 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, Oahu, HI, USA, 1–4 October 2023; pp. 4575–4580. [Google Scholar]
  151. Yuan, B.-W.; Zhang, Z.-L.; Luo, X.-G.; Yu, Y.; Sun, J.-Y.; Zou, X.-H.; Zou, X.-D. Defect prediction of low pressure die casting in crankcase production based on data mining methods. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 2560–2564. [Google Scholar]
  152. Beckschulte, S.; Mohren, J.; Huebser, L.; Buschmann, D.; Schmitt, R.H. Benchmarking Control Charts and Machine Learning Methods for Fault Prediction in Manufacturing. In Lecture Notes in Production Engineering; Springer: Berlin/Heidelberg, Germany, 2023; Part F1163; pp. 545–554. [Google Scholar] [CrossRef]
  153. Chen, M.; Wei, Z.; Li, L.; Zhang, K. Edge computing-based proactive control method for industrial product manufacturing quality prediction. Sci. Rep. 2024, 14, 1288. [Google Scholar] [CrossRef]
  154. Chien, C.-H.; Trappey, A.J. AdaBoost-Based Transfer Learning Approach for Highly-Customized Product Quality Prediction in Smart Manufacturing. Adv. Transdiscipl. Eng. 2024, 60, 236–244. [Google Scholar] [CrossRef]
  155. Deuse, J.; Schmitt, J.; Bönig, J.; Beitinger, G. Dynamic X-ray testing in electronics production-Application of data mining techniques for quality prediction; [Dynamische Röntgenprüfung in der Elektronikproduktion: Einsatz von Data-Mining-Verfahren zur Qualitätsprognose]. ZWF Z. Fuer Wirtsch. Fabr. 2019, 114, 264–267. [Google Scholar] [CrossRef]
  156. Huang, Y.; Yue, C.; Tan, X.; Zhou, Z.; Li, X.; Zhang, X.; Zhou, C.; Peng, Y.; Wang, K. Quality Prediction for Wire Arc Additive Manufacturing Based on Multi-source Signals, Whale Optimization Algorithm–Variational Modal Decomposition, and One-Dimensional Convolutional Neural Network. J. Mater. Eng. Perform. 2024, 33, 11351–11364. [Google Scholar] [CrossRef]
157. Jun, J.; Chang, T.-W.; Jun, S. Quality prediction and yield improvement in process manufacturing based on data analytics. Processes 2020, 8, 1068. [Google Scholar] [CrossRef]
  158. Jung, H.; Jeon, J.; Choi, D.; Park, A.J.-Y. Application of machine learning techniques in injection molding quality prediction: Implications on sustainable manufacturing industry. Sustainability 2021, 13, 4120. [Google Scholar] [CrossRef]
  159. Kim, A.; Oh, K.; Park, H.; Jung, J.-Y. Comparison of quality prediction algorithms in manufacturing process. ICIC Express Lett. 2017, 11, 1127–1132. [Google Scholar]
  160. Lee, J.H.; Do Noh, S.; Kim, H.-J.; Kang, Y.-S. Implementation of cyber-physical production systems for quality prediction and operation control in metal casting. Sensors 2018, 18, 1428. [Google Scholar] [CrossRef]
  161. Li, R.; Wang, X.; Wang, Z.; Zhu, Z.; Liu, Z. Multistage Quality Prediction Using Neural Networks in Discrete Manufacturing Systems. Appl. Sci. 2023, 13, 8776. [Google Scholar] [CrossRef]
  162. Olowe, M.; Ogunsanya, M.; Best, B.; Hanif, Y.; Bajaj, S.; Vakkalagadda, V.; Fatoki, O.; Desai, S. Spectral Features Analysis for Print Quality Prediction in Additive Manufacturing: An Acoustics-Based Approach. Sensors 2024, 24, 4864. [Google Scholar] [CrossRef]
  163. Sankhye, S.; Hu, G. Machine Learning Methods for Quality Prediction in Production. Logistics 2020, 4, 35. [Google Scholar] [CrossRef]
  164. Tercan, H.; Meisen, T. Online Quality Prediction in Windshield Manufacturing using Data-Efficient Machine Learning. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023. [Google Scholar] [CrossRef]
  165. Tian, S.; Zhang, Z.; Xie, X.; Yu, C. A new approach for quality prediction and control of multistage production and manufacturing process based on Big Data analysis and Neural Networks. Adv. Prod. Eng. Manag. 2022, 17, 326–338. [Google Scholar] [CrossRef]
  166. Tsou, C.-S.; Liou, C.; Cheng, L.; Zhou, H. Quality prediction through machine learning for the inspection and manufacturing process of blood glucose test strips. Cogent Eng. 2022, 9, 2083475. [Google Scholar] [CrossRef]
  167. Xiao, X.; Waddell, C.; Hamilton, C.; Xiao, H. Quality Prediction and Control in Wire Arc Additive Manufacturing via Novel Machine Learning Framework. Micromachines 2022, 13, 137. [Google Scholar] [CrossRef] [PubMed]
  168. Zhang, A.; Zhao, Y.; Li, X.; Fan, X.; Ren, X.; Li, Q.; Yue, L. Development of a Hybrid AI Model for Fault Prediction in Rod Pumping System for Petroleum Well Production. Energies 2024, 17, 5422. [Google Scholar] [CrossRef]
Figure 1. Procedure model of the literature review, based on Refs. [33,35,36].
Figure 2. Linked search terms for return quantity and timing forecasting.
Figure 3. PRISMA four-phase flow diagram for return quantity and timing forecasting, based on Refs. [36,39].
Figure 4. Linked search terms for condition forecasting.
Figure 5. PRISMA four-phase flow diagram for condition forecasting, based on Refs. [36,39].
Figure 6. Cause-and-effect diagram of the factors influencing the time and quantity of returns and the condition of cores.
Table 1. Participants in expert interviews.
Expert | Function | Sector
E1 | Product management | Commercial vehicle
E2 | Quality management | Rail transport
E3 | Business development | Passenger vehicle
E4 | Product management | Commercial vehicle
E5 | Technical project manager | Bike systems
Table 2. Primary source of process data used to train models for return quantity and timing forecasting.
Data Source | Publications | Percentage [%]
Simulation | [20,24,47,48,49,50,51,52] | 11
Benchmark | [53,54,55,56,57] | 7
Real-world data | [15,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115] | 82
Table 3. Overview of the ten most frequently used models for return quantity and timing forecasting.
Model groups: Traditional Statistical Models (MA, ARIMA, ES, SBA, CR) | Traditional ML Models (R, SVM, RF) | Deep Learning Models (ANN, LSTM)
Publications * | MA | ARIMA | ES | SBA | CR | R | SVM | RF | ANN | LSTM
Saraswati et al. [15] x
Tsiliyannis [48] x
Zhu et al. [51] xx
Van der Auweraer & Boute [52] xx
Kim et al. [53]x xxxxx
Choi & Suh [54] xxxx
Kim [55] xx xx x
Chandriah & Naraganahalli [56] xxx x
Chien et al. [58] x x
Guo et al. [59] x x
do Rego & de Mesquita [60]x x
Dombi et al. [61] xx x
Amirkolaii et al. [62]x xxx x
Ifraz et al. [64] xxxx
Mobarakeh et al. [65]x xxx
AlAlaween et al. [66] x x
Babaveisi et al. [68] xx x
Boukhtouta & Jentsch [69]xx x
Dali & Chengcheng [70] xx x
Han et al. [71] x x
Lee & Kim [73]x x x
Liu et al. [74] x x
Ma et al. [75] xx
Melo et al. [76]x x x
Pawar & Tiple [78] xxxx
Wang et al. [81] x x
Wu & Bian [82] x
Xing & Shi [83] x x x
Cao & Li [85]x x x
Carmo et al. [86] x
Caserta & D’Angelo [87] x xx
Ding et al. [88]x x x
Fan et al. [89] x x x x
Guimaraes et al. [90] x
Hong et al. [91] x x x x
Hu et al. [93] x
Huang et al. [94] xx x x
Jiang et al. [96]x xxx x xx
Kacmary et al. [97] x x
Kim et al. [98]x xxxxx
Li et al. [99]xxx x x
Liu [100] x
Lucht et al. [101] x
Ma & Kim [102] x
Mao et al. [103] x
Özbay et al. [104]x x
Qiu et al. [105] x
Ren & Zhang [106] x
Rosienkiewicz [107]xxxx x x
Tsao et al. [109]x xxxxxxx
Tsao et al. [110]x xxxxxxx
Vaitkus et al. [111]x x x x
Vasumathi & Saradha [112] x
Xu et al. [113] xxx
Yang et al. [114] x x
Zhu et al. [115] x
Total | 17 | 15 | 21 | 13 | 15 | 12 | 21 | 12 | 31 | 9
ANN = Artificial Neural Network, LSTM = Long Short-Term Memory, MA = Moving Average, ARIMA = Autoregressive Integrated Moving Average, ES = Exponential Smoothing, SBA = Syntetos–Boylan Approximation, R = Regression, SVM = Support Vector Machine, RF = Random Forest, CR = Croston. * The following publications did not apply any of the most frequently applied models and are therefore not included in the table: Refs. [20,24,47,49,50,57,63,67,72,77,79,80,84,92,95,108].
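As a purely illustrative aid — not drawn from any of the reviewed publications — the following minimal Python sketch shows one of the classical intermittent-demand methods tallied in Table 3: Croston's estimator with the Syntetos–Boylan Approximation (SBA), applied to a hypothetical monthly core-return series. The series values and the smoothing parameter alpha are assumptions chosen only for the example.

```python
# Minimal sketch (assumed data and parameters): Croston's method with the
# Syntetos-Boylan Approximation (SBA) for an intermittent return series.
import numpy as np

def croston_sba(demand, alpha=0.1):
    """Return a one-step-ahead SBA forecast for an intermittent demand series."""
    demand = np.asarray(demand, dtype=float)
    first = np.flatnonzero(demand)[0]  # index of the first non-zero demand
    z = demand[first]                  # smoothed demand size
    p = first + 1.0                    # smoothed inter-demand interval
    q = 1.0                            # periods since the last non-zero demand
    for d in demand[first + 1:]:
        if d > 0:
            z = z + alpha * (d - z)
            p = p + alpha * (q - p)
            q = 1.0
        else:
            q += 1.0
    # The SBA factor (1 - alpha/2) reduces the positive bias of Croston's method.
    return (1.0 - alpha / 2.0) * z / p

returns = [0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4]  # hypothetical monthly core returns
print(f"SBA forecast per period: {croston_sba(returns):.2f}")
```

Such classical estimators typically serve as baselines against which the ML and deep learning models in Table 3 are benchmarked.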
Table 4. Primary source of process data used to train models for condition forecasting.
Data Source | Publications | Percentage [%]
Simulation | [116,117] | 3
Benchmark | [118,119,120,121,122,123,124,125,126,127,128,129] | 23
Real-world data | [130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168] | 74
Table 5. Overview of the ten most frequently used models for condition forecasting.
Model groups: Regression (R, SVM, RF, ANN, LSTM) | Classification (DT, RF, XGB, SVM, ANN)
Publications * | R | SVM | RF | ANN | LSTM | DT | RF | XGB | SVM | ANN
Alenezi et al. [116]xxx
Wang et al. [117] x
Psarommatis et al. [120]x x
Bai et al. [121] x x
Caihong et al. [122] x
Liu et al. [123] x
Zhang et al. [125]xxxx
Bai et al. [126] x x
Bai et al. [127] x
Demirel et al. [128] x
Deng et al. [129] x x
Chien et al. [130]
Wang et al. [131] xxx
Zhang et al. [133]x x x
Wang et al. [134]xx x
Wang et al. [136] xx
Nikita et al. [137] xxx
Schorr et al. [138] x
Kobayashi et al. [139]x x
Sun et al. [140]x x x
Yang et al. [141] x xx
Bai et al. [143] x x
Jiang & Yen [145] xx
Ju et al. [146] xx
Trappey & Chien [150]x
Beckschulte et al. [152] x
Kim et al. [159]xxxx
Li et al. [161] x x
Tercan & Meisen [164]xxxx
Xiao et al. [167]x x
Zhou et al. [118] x
Kao et al. [119] x xx
Shim et al. [132] x x
Huynh [135] x
Forsberg at al. [144] xxx
Lee et al. [147] xx
Matzka [148] x xx
Mohammadi & Wang [149] x
Yuan et al. [151] xx
Chen et al. [153] xxxx
Deuse et al. [155] x x
Huang et al. [156] xx
Jun et al. [157] xx xx
Jung et al. [158] xxx
Lee et al. [160] xx xx
Olowe et al. [162] xxxx
Sankhye & Hu [163] xx
Tian et al. [165] x
Zhang et al. [168] xx
Total | 11 | 16 | 14 | 16 | 6 | 10 | 10 | 6 | 12 | 8
ANN = Artificial Neural Network, LSTM = Long Short-Term Memory, XGB = Extreme Gradient Boosting, R = Regression, SVM = Support Vector Machine, DT = Decision Tree, RF = Random Forest. * The following publications did not apply any of the most frequently applied models and are therefore not included in the table: Refs. [124,130,142,154,166].
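To illustrate the classification models tallied in Table 5, the following minimal Python sketch trains a Random Forest classifier that grades returned cores into condition classes. All data are synthetic and the feature names (age, mileage, number of prior repairs) are hypothetical assumptions for the example, not features reported in the reviewed publications.

```python
# Minimal sketch (synthetic data, hypothetical features): Random Forest
# classification of core condition into three grades.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 15, n),        # core age in years (hypothetical)
    rng.uniform(0, 300_000, n),   # mileage or operating hours (hypothetical)
    rng.integers(0, 5, n),        # number of prior repairs (hypothetical)
])
# Synthetic label: condition worsens with higher age, mileage and repair count.
score = X[:, 0] / 15 + X[:, 1] / 300_000 + X[:, 2] / 5 + rng.normal(0, 0.2, n)
y = np.digitize(score, bins=[1.0, 2.0])  # 0 = good, 1 = fair, 2 = poor

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["good", "fair", "poor"]))
```

In practice, such condition grades would feed downstream remanufacturing decisions such as disassembly planning and capacity allocation, which is why the reviewed quality-prediction approaches are considered transferable to core condition forecasting.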
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
