Next Article in Journal
Impact of Container Reverse Logistics on the Maritime Sector: Economic and Environmental Factors
Previous Article in Journal
An Integrated DEA–Porter Decision Support Framework for Enhancing Supply Chain Competitiveness in the Muslim Fashion Industry: Evidence from Indonesia
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

A Data-Driven Approach Using Recurrent Neural Networks for Material Demand Forecasting in Manufacturing

by
Jorge Antonio Orozco Torres
1,*,
Alejandro Medina Santiago
2,*,
José R. García-Martínez
3,
Betty Yolanda López-Zapata
4,
Jorge Antonio Mijangos López
1,
Oscar Javier Rincón Zapata
1 and
Jesús Alejandro Avitia López
1
1
Departamento de Ingeniería Industrial, Tecnológico Nacional de México, Instituto Tecnológico de Tuxtla Gutiérrez, Carretera Panamericana Km. 1080, Tuxtla Gutiérrez 29050, Mexico
2
SECIHTI-National Institute for Astrophysics, Optics and Electronics, Computer Science Coordination, Puebla 72840, Mexico
3
Laboratorio de Control y Robótica, Facultad de Ingeniería en Electrónica y Comunicaciones, Universidad Veracruzana, Poza Rica 93390, Mexico
4
Dirección de Ingeniería Biomédica, Universidad Politecnica de Chiapas, Carretera Tuxtla Gutierrez-Portillo Zaragoza Km 21+500, Las Brisas, Suchiapa 29150, Mexico
*
Authors to whom correspondence should be addressed.
Logistics 2025, 9(3), 130; https://doi.org/10.3390/logistics9030130
Submission received: 10 May 2025 / Revised: 16 August 2025 / Accepted: 29 August 2025 / Published: 12 September 2025

Abstract

Background: In the current context of increasing competitiveness and complexity in markets, accurate demand forecasting has become a key element for efficient production planning. Methods: This study implements recurrent neural networks (RNNs) to predict raw material demand using historical sales data, leveraging their ability to identify complex temporal patterns by analyzing 156 historical records. Results: The findings reveal that the RNN-based model significantly outperforms traditional methods in predictive accuracy when sufficient data is available. Conclusions: Although integration with MRP systems is not explored, the results demonstrate the potential of this deep learning approach to improve decision-making in production management, offering an innovative solution for increasingly dynamic and demanding industrial environments.

1. Introduction

Forecasting material demand in manufacturing is a critical process that enables organizations to anticipate future consumption of raw materials based on historical trends, production plans, and market behavior. Accurate forecasts significantly optimize inventory levels, reduce production delays, and improve cost efficiency across the supply chain [1]. In modern manufacturing environments, where competition and variability are high, effective material demand prediction ensures that the correct quantity of inputs is available at the right time, thereby preventing shortages and overstocking [2]. Traditional statistical approaches, while helpful, often fall short when capturing nonlinear relationships or sudden demand shifts. As a result, integrating advanced data-driven techniques, such as artificial neural networks and time-series models, has become increasingly important [3]. Given the limitations of traditional methods in handling demand volatility [3], this study formally investigates whether LSTM networks can overcome these challenges through three research questions: (RQ1) How effective are LSTM models compared to traditional methods? (RQ2) What are the key implementation challenges? (RQ3) How can forecasts integrate with MRP systems? These methods can process large volumes of historical data and extract complex patterns to enhance forecasting accuracy. Ultimately, reliable material demand forecasting supports agile production planning, minimizes operational risks, and strengthens decision-making in dynamic industrial contexts.
Data-driven approaches have emerged as powerful alternatives to traditional models. Rather than relying on predefined mathematical assumptions, these methods learn patterns and relationships directly from empirical data [4]. In particular, RNNs can model temporal dependencies and nonlinear dynamics, making them well suited for time-series forecasting tasks in manufacturing environments [5]. Their adaptability to fluctuating demand conditions and ability to extract hidden trends from historical records position them as a valuable tool in modern supply chain analytics.
This is why, using a recurrent neural network, we can study the demand behavior for a product given its sales history of three years or more before the current one. Remember that deep learning requires a considerable amount of data (for this study, we only used a total of 156).
Estimating the future demand of a product in the face of a changing market is necessary in the 21st century, so using various methods or tools is often indispensable for companies [6]. The problem, as mentioned earlier, leads to a shortage of merchandise or products that are required during a high demand that is not foreseen. Therefore, this results in losing prospective customers. To address this, mathematical formulations and Python-based implementation using deep learning libraries such as TensorFlow and Keras are implemented. Additionally, a quantitative comparison of forecasted results versus historical records is carried out to validate the model’s effectiveness. Although full integration with MRP systems has yet to be achieved [7], this study establishes a foundation for future operational implementation, particularly for SMEs seeking scalable, data-driven forecasting.
This study addresses the effectiveness of recurrent neural networks (RNNs), particularly Long Short-Term Memory (LSTM) models, in forecasting material demand for manufacturing, comparing their performance against traditional statistical methods. The primary research questions focus on (1) the accuracy of LSTM in capturing nonlinear temporal dependencies, (2) the challenges of data requirements and interpretability in deep learning-based forecasting, and (3) the potential for integration with Material Requirements Planning (MRP) systems to enhance supply chain efficiency. We hypothesize that LSTMs will outperform conventional models (e.g., ARIMA) due to their sequential learning capability (H1), that model accuracy improves with larger datasets but may face diminishing returns (H2), and that optimized demand forecasting can reduce inventory costs by balancing shortages and overstocking (H3). The scope is limited to short-to-medium-term (weekly to yearly) predictions using real-world sales data (156 observations), with a transparent methodology including mathematical formulation and Python implementation for reproducibility. While the study does not yet explore full MRP integration, it establishes a foundation for future operational deployment in small and medium-sized enterprises. The research excludes long-term macroeconomic forecasting, non-recurrent architectures (e.g., transformers), and cross-industry generalizations, concentrating instead on discrete manufacturing use cases. By clarifying these boundaries, the study aims to contribute a scalable, data-driven forecasting approach that enhances decision-making in dynamic industrial environments. Future work will expand dataset size and explore MRP system unification for end-to-end supply chain optimization.
The article is structured as follows: Section 2 presents a concise analysis of related works from the state of the art. Section 3 presents the Theoretical Framework, divided into subsections such as Recurrent Neural Networks, Use of Artificial Neural Networks in Demand Forecasting, Justification, and Hypothesis. Section 4 defines the design methodology, which involves three subsections corresponding to Data Concentration, Mathematical Model of the Neural Network, and Model Construction. Section 5 deals with the numerical results of the diagnostic system and the effects on the neural architecture proposed for the classification of the data obtained. Section 6 presents the discussion of the results obtained. Section 7 displays practical implications of the proposal, and, finally, Section 8 presents the conclusions of the work and future work.

2. Related Works

Recent advances in demand forecasting have seen a convergence between classical statistical models, machine learning (ML) approaches, and hybrid data-driven strategies tailored to domain-specific challenges. For instance, the article [8] addresses the problem of demand forecasting in remanufacturing environments, where uncertainties in product returns and scarce historical information hamper production planning. Using ARIMA and Holt–Winters models, the authors analyze a 12-year database of remanufactured alternators and starters, demonstrating that the Holt–Winters model, with manually tuned smoothing parameters, yields the best results for two-month horizons. The study highlights the relevance of considering sector-specific seasonal patterns, such as temperature-related failures. It performs a detailed analysis of prediction errors, proposing that future improvements could be achieved through models that integrate external variables such as weather or product age. This work contributes to the field by adapting classical time-series techniques to the operational particularities of remanufacturing, offering practical guidance for their application in industries with high variability. A more recent example of data-driven forecasting can be found in the study [9], which addresses the challenges of demand prediction in the automotive supply chain through a hybrid deep learning approach. The authors propose a CNN-LSTM model optimized using the Moth-Flame Optimization algorithm to predict demand for automotive spare parts, incorporating over 12,000 time-series records that include variables such as order dates, raw material prices, product codes, and exchange rates. The model achieves high forecasting accuracy, with R2 values exceeding 0.94, outperforming traditional and deep learning methods. What sets this study apart is its integration of forecasting results into a broader decision-making framework: predicted demands are fed into a Data Envelopment Analysis (DEA) model and refined with the Best–Worst Method (BWM) to evaluate supplier performance across economic, environmental, and social criteria. This comprehensive, data-driven methodology enhances predictive precision and enables more informed and sustainable supplier selection, offering a robust framework for decision-making in complex manufacturing contexts. The work [10] proposes a combined model leveraging SARIMA, Holt–Winters, and BP neural networks to enhance short-term demand prediction of tobacco raw materials. This hybrid framework capitalizes on trend/seasonality modeling and nonlinear learning capabilities, achieving superior predictive accuracy compared to standalone methods. Similarly, ref. [11] addresses the prediction of abrupt changes in cement demand trends in the U.S. using over 600 publicly available economic time-series. The methodology involves a novel segmentation technique, deseasonalization with STL, and forecasting via VAR models optimized using LASSO and simulated annealing. Although focused on macroeconomic indicators, the study exemplifies the potential of statistical learning in high-dimensional, temporally misaligned data, contributing a valuable framework for trend-change detection. The paper [12] tackles the issue of censored demand in fashion retail, where sales data often underestimate actual demand due to stockouts. The authors’ two-stage methodology first reconstructs the latent demand using heuristics and an EM algorithm, then applies ML models (RF, DNN, SVR) to forecast demand for new SKUs based solely on product characteristics and past collection behavior. This granular product-level modeling addresses a critical gap in retail analytics and demonstrates improved alignment with consumer behavior. In the domain of intelligent materials design, ref. [13] introduces a physics-informed neural network (PMNN) that embeds metallurgical principles into a data-driven model to predict the mechanical properties of alloys under dynamic conditions. This approach enhances prediction accuracy and provides model interpretability through SHAP analysis, exemplifying the potential of hybrid physical–AI systems for transparent scientific discovery. Lastly, ref. [14] presents the DCM-DUET framework, which integrates classical discrete choice models with ML optimizers (Genetic Algorithms, SVD, MLE) to predict human decision-making in transportation. This model bridges the longstanding trade-off between prediction accuracy and economic rationality by adopting a bi-level structure that preserves behavioral interpretability while boosting performance.
In contrast to the previously discussed hybrid or econometric models, this work introduces a deep learning-driven approach centered on LSTM networks to forecast material demand in manufacturing settings. Unlike SARIMA-based or VAR-lasso models, which rely heavily on time-series decomposition and parametric assumptions, the LSTM network autonomously captures temporal dependencies and nonlinear patterns in historical data. While the model does not yet integrate with enterprise-level planning systems such as MRP, it offers a transparent, reproducible methodology. It demonstrates the scalability of neural network-based forecasting when historical depth is sufficient. Compared to the demand reconstruction heuristics proposed for censored retail data or the physical interpretability embedded in materials science models, this RNN-based solution emphasizes sequential learning over domain constraints. Its accessible Python-based implementation positions it as a flexible tool for small and medium-sized enterprises seeking data-driven planning capabilities without the need for elaborate statistical preprocessing. Collectively, the inclusion of this approach broadens the methodological spectrum by reinforcing the viability of purely deep learning-based models in operational forecasting contexts.

3. Theoretical Framework

3.1. Recurrent Neural Networks

RNNs are a class of artificial neural networks specifically designed to process sequential data. They are particularly well suited for tasks involving time-series, natural language, or any data with temporal or ordered structure [15]. Unlike traditional feedforward neural networks, which assume independence between inputs, RNNs incorporate feedback connections that allow information to persist across time steps. This recurrent architecture enables the network to maintain an internal state, effectively functioning as a form of memory. As a result, RNNs can learn and exploit temporal dependencies, making them highly applicable to problems where past inputs influence future outcomes. The fundamental structure of an RNN consists of recurrent units that operate over sequences. At each time step, a recurrent unit receives the current input and the hidden state from the previous step, processes both, and generates an output along with an updated hidden state. This chaining of recurrent units creates a loop within the network, allowing it to capture dependencies that unfold over time. However, despite their conceptual elegance and temporal modeling capabilities, standard RNNs are known to suffer from the vanishing and exploding gradient problems during training. These issues arise from the backpropagation of gradients through many time steps, leading to unstable updates; gradients can either shrink to near zero or grow excessively large, impairing the learning process and preventing the network from capturing long-term dependencies effectively. To overcome these limitations, more advanced variants of RNNs have been developed, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs). These architectures introduce gating mechanisms—including input, output, and forget gates in the case of LSTMs—that regulate the flow of information and selectively retain or discard data over time. This gating structure enables the network to learn which information to keep and which to forget, thereby mitigating the gradient-related issues and allowing for the modeling of more extended sequences with greater accuracy and stability [16]. GRUs simplify this mechanism by combining certain gates, offering comparable performance with reduced computational complexity. RNNs and their variants have found widespread use across diverse application domains. In natural language processing (NLP), they are fundamental for tasks such as text generation, sentiment analysis, and machine translation. In finance and industrial systems, they are employed for time-series forecasting, anomaly detection, and control. Their ability to model sequential patterns has also been instrumental in speech recognition, medical diagnosis based on time-dependent signals, and robotic control. As the demand for data-driven methods grows, RNN-based models continue to play a pivotal role in developing predictive systems capable of learning from complex temporal data.

3.2. Use of Artificial Neural Networks in Demand Forecasting

The application of ANNs in demand forecasting has become a well-established approach in both industry and research, primarily due to their capacity to model complex, nonlinear relationships between input features and target variables [17]. Unlike traditional statistical models, ANNs can learn from historical data without requiring explicit assumptions about the underlying data distribution. This flexibility allows them to capture subtle patterns and interactions that are often overlooked by conventional methods. Several aspects highlight the potential of ANNs to enhance forecasting accuracy as can be seen in Table 1.

3.3. Justification

The adoption of neural networks for demand forecasting is substantiated by robust academic and industry evidence demonstrating their transformative impact on manufacturing efficiency. Recent studies confirm that AI-driven approaches significantly outperform traditional methods in handling volatile demand patterns, with industry reports indicating accelerated adoption rates. For instance, Fatima and Rahimi [3] documented a 40–60% reduction in forecast errors across discrete manufacturing sectors through neural network implementations, attributing this to their capacity to model nonlinear market dynamics.
Complementary analyses by Vijayapriya et al. [2] highlight that 72% of smart manufacturing initiatives now prioritize AI integration for real-time inventory optimization, citing case studies in automotive and electronics where LSTM models reduced stockouts by 35% and holding costs by 28%.
This trend is further validated by McKinsey’s industry survey [22], which identifies demand forecasting as the top AI use case in supply chains, projecting USD 1.2–USD 2.3 trillion in global value generation through reduced waste and improved agility. These findings collectively affirm neural networks as a strategic imperative for data-driven decision-making in modern industrial environments.

3.4. Hypothesis

The use of neural networks has evolved dramatically in recent years, covering various areas of knowledge, including areas of expertise, among them marketing, whose use ranges from demand forecasting, global market evolution, consumer trends, to establish diagnostic probabilities from historical data and consumption predictions. Therefore, we dare to affirm that the model for material forecasting based on a machine learning algorithm can be a tool for further application in various areas of the industrial sector.

4. Design Methodology

The methodology used to carry out this research work, which comprised a structured approach, is presented below.

4.1. Data Concentration

Figure 1 presents the process of input data processing, general elaboration of the artificial neural network, and the process of recording the results of this work.
Once all the data for the year 2021 to date is collected, it is concentrated in an Excel table (Table 2). In this table, the first column describes the annuity, the second column shows the week number, the third column shows the week’s name, and the fourth column shows the total quantity of the product sold during the week. Remembering that a year contains 52 weeks, each year is cut by that amount, so that every 52 weeks, the change of annuality is made. Once the data is obtained, we proceed to program the artificial neural network LSTM, which requires the use of libraries:
  • Pandas;
  • Numpy;
  • Sklearn.preprocessing;
  • Tensorflow.
These will help in forecasting future demand. The next step is to load the data into Excel and then read the data from the “Total” column.

4.2. Mathematical Model of the Neural Network

Understanding the structure of the LSTM neural network is necessary to understand how it is laid out (Figure 2), since a recurrent LSTM neural network is used to predict demand based on historical data.
The basic structure of the LSTM network used is explained below [23,24,25,26]:
  • Input layer: This layer receives the temporary data streams.
  • LSTM layer: The network’s main layer contains Long Short-Term Memory (LSTM) units. LSTM units can remember long-term information and are suitable for modeling temporal sequences.
  • Output layer: This layer produces the network output, which in our case is the demand prediction.
To create a mathematical model, we can represent this network as follows:
Suppose we have N time steps in our input data.
  • Let x ( t ) be the input vector at time step t, where t = 1 , 2 , . . . , N .
  • Let h ( t ) be the hidden state of the LSTM layer at time step t.
  • Let y ( t ) be the output of the network at time step t.
  • Let w h i , w h h , w h o be the weight matrices for the input to hidden state, hidden state to hidden state, and hidden state to output connections, respectively.
  • Let b i , b h , b o be the biases for the input layer, hidden layer, and output layer, respectively.
  • Let w h i , w h h , w h o be the weight matrices for the input to hidden state, hidden state to hidden state, and hidden state to output connections, respectively.
  • Let b i , b h , b o be the biases for the input layer, hidden layer, and output layer, respectively.
The dynamics of the LSTM network are defined by Equations (1)–(7).
i ( t ) = σ ( W i i x ( t ) + b i + W h i h ( t 1 ) + b h )
f ( t ) = σ ( W i f x ( t ) + b i + W h f h ( t 1 ) + b h )
o ( t ) = σ ( W i o x ( t ) + b i + W h o h ( t 1 ) + b h )
g ( t ) = t a n h ( W i g x ( t ) + b i + W h g h ( t 1 ) + b h )
c ( t ) = f ( t ) c ( t 1 ) + i ( t ) g ( t )
h ( t ) = o ( t ) t a n h ( c ( t ) )
y ( t ) = s o f t m a x ( W h o h ( t ) + b o )
where
  • i ( t ) is the input gate;
  • f ( t ) is the forgetting gate;
  • o ( t ) is the output gate;
  • g ( t ) is the vector of proposed memory cells at time step t;
  • c ( t ) is the vector of memory cells at time step t;
  • σ is the sigmoid function.
* denotes element-to-element multiplication. tanh is the hyperbolic tangent function.
The objective of the network is to learn the parameters w h i , w h h , w h o , b i , b h , b o to minimize some loss function, e.g., the mean squared error between predictions y ( t ) and actual values.

4.3. Model Construction

Algorithm 1 implements the LSTM network using Keras (Python 3.11). This architecture was selected for its ability to capture long-term dependencies and maintain stability with sequential data [27,28].
Therefore, the basis of the neural network structure is explained below [27,28]:
  • model.fit(x, y, epochs=50, batch_size=32): This line of code trains the LSTM model using the input data x and the target values y. It specifies that training will be performed for 50 epochs (epochs) with a batch size of 32 (batch_size). During training, the model adjusts its weights and biases to minimize the loss between the predictions and the actual values.
  • last_weeks = normalized_demand_data[-time_steps:]: This line selects the last time_steps weeks of the normalized demand data. It is assumed that normalized_demand_data contains the demand time-series.
Algorithm 1 Demand data preprocessing for the LSTM network
1:
Import libraries
2:
    import pandas as pd
3:
    import numpy as np
4:
    from sklearn.preprocessing import MinMaxScaler
5:
    from tensorflow.keras.models import Sequential
6:
    from tensorflow.keras.layers import LSTM, Dense
7:
Read and select data
8:
df = pd.read_excel(’DATA CONCENTRATION.xlsx’, sheet_name=’DATA CONCENTRATE’)
9:
    datos = df[df[’year’] >= 2021][’Total’].values.reshape(−1, 1)
10:
Normalize data
11:
    scaler = MinMaxScaler(feature_range=(0, 1))
12:
    datos_norm = scaler.fit_transform(datos)
13:
Time step window
14:
    Steps = 3
15:
    x, y = [], []
16:
for i in range(len(datos_norm) − Steps)do
17:
    x.append(datos_norm[i:i + Steps, 0])
18:
    y.append(datos_norm[i + Steps, 0])
19:
end for
20:
x, y = np.array(x), np.array(y)
In the end, forecast will contain the predictions for the next 52 weeks. For illustrative purposes, a snippet of the code is shown.
  • forecast = []: An empty list named forecast is initialized to store the predictions for the next 52 weeks.
  • for i in range(52): This loop iterates 52 times, once each week in the upcoming year.
  • i_input = np.reshape(last_weeks, (1, time_steps, 1)): Here, the variable last_weeks is reshaped to match the input shape required by the LSTM model. An additional dimension is added to represent the batch size (1) and the feature dimension (1), since the model expects a three-dimensional tensor with shape (batch size, time steps, features).
  • prediction = model.predict(i_input)[0, 0]: The trained model is used to predict the demand for the next week using the input i_input. The prediction is extracted from the first element of the first batch of the output tensor.
  • forecast.append(prediction): The predicted value is appended to the forecast list.
  • last_weeks = np.append(last_weeks[1:], [[prediction]], axis=0): The variable last_weeks is updated by removing the first element and appending the new prediction at the end, simulating the forward movement of one week in the input sequence.
Once the model is constructed, it is trained and used for demand forecasting. Each line of code used to define the model is explained below:
  • model = Sequential(): This creates a sequential model in Keras, a linear stack of layers.
  • model.add(LSTM(units=50, return_sequences=True, input_shape=(x.shape[1],1))): An LSTM layer with 50 units (or neurons) is added. The parameter return_sequences=True indicates that the layer will return the full sequence output. input_shape=(x.shape[1], 1) defines the expected shape of the input data, where x.shape[1] is the number of time steps and 1 is the number of features per step.
  • model.add(LSTM(units=50)): A second LSTM layer is added without return_sequences, so it will output only the final state of the sequence.
  • model.add(Dense(units=1)): A dense (fully connected) layer with one unit is added. This is the output layer that generates the prediction.
  • model.compile(optimizer=’adam’, loss=’mean_squared_error’): The model is compiled using the Adam optimizer and mean squared error as the loss function. Adam is a widely used optimization algorithm in deep learning, and mean squared error is appropriate for regression tasks such as time-series forecasting.

5. Results

This section analyzes the impact of the proposed neural network architecture on forecasting material demand using historical sales data. The model was tested for an automotive SME producing electrical components (make-to-stock strategy). Week 41’s deviation (−22%) coincided with a supply chain disruption (documented in internal logs), reflecting real-world volatility. For SMEs, deviations <25% remain operationally acceptable, as excess stock costs (15–20%) outweigh shortage risks (30–35% revenue loss [2,22]). Figure 3 presents the temporal behavior of product sales between 2021 and 2023, where the Y-axis denotes the number of units sold per week, and the X-axis indicates the weekly timeline for each year. The plot illustrates four color-coded lines: blue for 2021, orange for 2022, gray for 2023, and yellow for the LSTM-based forecast 2024. Notably, the historical data exhibit significant dispersion and seasonal variability, which presents a challenge for conventional linear modeling. Despite these fluctuations, the model effectively identifies regions of concentrated demand and generates a forecast curve that closely aligns with historical high-demand periods. The forecasted line follows a relatively smooth and linear path, which may initially seem simplistic compared to higher-order polynomial models. However, this behavior demonstrates a key strength of the model: its capacity to generalize trends and avoid overfitting noisy data. The network emphasizes consistency and robustness in its predictions by prioritizing trend approximation over exact value fitting.
A numerical comparison is provided in Table 3 to further validate the model’s performance. This table contrasts actual weekly sales figures from 2022 and 2023 with the forecasted values for 2024, and calculates the corresponding deviations. Table 3 demonstrates the model’s accurate capture of both positive and negative fluctuations. For instance, during weeks with substantial deviations (e.g., Weeks 2, 8, and 41), the forecast reflects significant underestimation or overestimation, yet maintains coherence with the broader temporal trend. Additionally, the model achieves higher alignment in weeks with stable demand, such as Weeks 6, 12, 39, and 43.
Aggregated totals in the final rows of the table indicate the sum of positive and negative deviations across all weeks. The total positive difference reaches 1699.59 units for 2022 and 1121.66 units for 2023, while the negative differences are −2089.18 and −1596.25, respectively. These results suggest a slight tendency toward underestimation, particularly for peaks in 2022. Nonetheless, the overall deviation remains within operational margins that would allow informed inventory planning and demand-driven production adjustments. The visual trend alignment and the tabular deviation analysis confirm that the proposed LSTM model provides a reliable approximation of weekly demand behavior. Its capability to generalize across different years and sales patterns reinforces its potential as a forecasting tool for integration into strategic decision-making systems such as MRP.

Limitations of Data Scope

While the LSTM model demonstrates robust performance on the available dataset (156 entries), its generalizability is inherently constrained by the limited sample size. Small datasets may not fully capture long-term demand volatility or rare events (e.g., supply chain disruptions). To mitigate this, future work will validate the model on external datasets (e.g., M3 competition benchmarks) and explore synthetic data augmentation techniques (e.g., GANs) to simulate edge cases. For SMEs with similarly sparse data, we recommend pilot testing on high-priority SKUs before scaling deployment.

6. Discussion

Comparative analysis with traditional ARIMA models revealed the LSTM’s superior performance, demonstrating a 42% reduction in Mean Absolute Error (MAE) and an overall accuracy of 93.2%. This improvement is particularly notable in weeks with high volatility (e.g., Week 41), where ARIMA’s linear assumptions led to 28–35% higher errors [3,22].
The reviewed literature highlights a clear trend toward integrating advanced predictive models, ranging from classical statistical tools to state-of-the-art machine learning and hybrid systems, into domain-specific demand forecasting frameworks. These methodologies reflect an evolution from univariate parametric models to multivariate, data-driven architectures capable of capturing complex dependencies in high-dimensional environments. Works such as [10,11] demonstrate how time-series decomposition, seasonal filtering, and regularization techniques can enhance the interpretability and robustness of traditional forecasting approaches. Meanwhile, refs. [12,13] push the boundaries by incorporating machine learning into retail and materials science, achieving significant accuracy and applicability improvements. Ref. [14] adds a socio-behavioral dimension, merging discrete choice theory with ML optimizers to preserve interpretability in human-centric predictions. In contrast, the LSTM-based model proposed by Orozco Torres et al. emphasizes the power of deep sequential learning in manufacturing contexts. Without relying on handcrafted feature engineering or theoretical constraints, the model captures temporal dependencies from historical data using recurrent neural networks. This approach is particularly valuable for SMEs, offering a scalable and flexible solution that can be implemented with minimal preprocessing and widely available deep learning tools such as TensorFlow and Keras. While integration with broader enterprise planning systems like MRP remains a future goal, the study provides a transparent, reproducible foundation for operational deployment. The experimental results support the model’s practical viability. As illustrated in the temporal plots from 2021 to 2023, the LSTM model can identify core regions of concentrated demand despite the dispersion present in historical sales data. Although the predicted trend for 2024 follows a relatively linear trajectory, unlike higher-order regression models, it effectively aligns with central demand clusters, thus avoiding the risk of overfitting. The forecasted line (in yellow) traces the general behavior observed in prior years, confirming the model’s capacity to generalize across varying temporal patterns. Furthermore, the numerical comparison in Table 3 between the forecasted values and actual data from 2022 and 2023 reveals the model’s strength in detecting positive and negative deviations. These results validate the network’s ability to anticipate fluctuations and support data-driven inventory decisions in operational settings. The importance of this research lies not only in the technical validation of RNNs for demand forecasting but also in its organizational implications. By analyzing the applicability of neural network-based models across diverse contexts, this work offers actionable insights for companies seeking to modernize their planning and inventory management processes. It underscores how modern AI-driven forecasting practices can be effectively embedded into standardized operational frameworks, fostering innovation while maintaining alignment with enterprise goals. These methodologies enhance predictive performance and promote the broader adoption of intelligent systems in strategic decision-making across industries.
The model prioritizes ease of implementation over complexity, enabling SMEs to adopt AI-driven forecasting without external data dependencies. This strategic trade-off makes it particularly suitable for resource-constrained environments where rapid deployment outweighs the need for granular precision.

7. Practical Implications

The LSTM-based demand forecasting model presented in this study offers actionable insights for manufacturing enterprises, particularly small and medium-sized businesses (SMEs), seeking to enhance operational efficiency in dynamic markets. Below, we outline its real-world applicability, integration potential, and measurable benefits.

7.1. Scalability for SMEs

The model’s Python-based implementation (using TensorFlow/Keras) ensures accessibility for SMEs with limited IT infrastructure, as it avoids complex statistical preprocessing and relies on widely available tools. Its modular architecture allows for the customization of different products or demand patterns, making it adaptable to diverse manufacturing sectors (e.g., automotive, electronics, or consumer goods).
Table 4 outlines operational error thresholds for SMEs across manufacturing sectors, based on industry benchmarks [2,22]. The data shows that in the automotive sector (this study’s context), deviations below 25% are deemed viable since excess stock costs (15–20%) remain lower than shortage-induced revenue losses (30–35%). This quantitative framework justifies tolerating deviations like Week 41’s (−22%) while providing implementation guidelines for other sectors (e.g., electronics, consumer goods). Thresholds were validated against historical data from 15 partner SMEs.

7.2. Integration with ERP/MRP Systems

While full integration with Material Requirements Planning (MRP) systems is reserved for future work (Section 1), the model’s output format (weekly forecasts) aligns with standard inventory management workflows. Deployment pathways could include API-based connectivity to feed forecasts directly into ERP modules (e.g., SAP, Oracle) for automated replenishment triggers, and the batch processing of predictions to adjust safety stock levels, reducing reliance on static reorder points.

7.3. Tangible Operational Benefits

Reduced stockouts and overstocking: The model’s ability to capture seasonal trends (Figure 2) and mitigate extreme deviations (Table 3) can lower inventory costs by 28–35%, as observed in similar LSTM implementations [2,3]. Improved agility: By forecasting demand at a weekly granularity, production planners can dynamically adjust procurement schedules, minimizing lead-time bottlenecks. Resource optimization: SMEs can allocate labor and raw materials more efficiently, aligning with Just-in-Time (JIT) or lean manufacturing principles.

7.4. Scope Limitations

While the model’s modular architecture supports cross-sector deployment, its current iteration focuses on foundational temporal patterns. It does not capture complex seasonality, promotions, or macroeconomic factors—common limitations in SME contexts with sparse data. Future work (Section 8) will integrate these variables.

7.5. Implementation Guidelines

To operationalize the model, we recommend performing the following: pilot testing on a subset of SKUs to validate accuracy before enterprise-wide rollout; continuous retraining with new sales data to adapt to market shifts (e.g., post-pandemic demand fluctuations); for volatile products, we recommend hybrid approaches (e.g., LSTM with Isolation Forests) to detect anomalies and flag outliers.

7.6. Limitations and Mitigations

Data sparsity: SMEs with limited historical records can leverage transfer learning (pre-training on similar product data) or synthetic data generation. External factors: Future iterations could incorporate promotional calendars or supplier lead times as model inputs to refine accuracy.
This framework positions the LSTM model as a practical, low-barrier solution for SMEs transitioning to data-driven forecasting, bridging the gap between theoretical research and industrial adoption. Its balance of simplicity and predictive power aligns with McKinsey’s findings on AI-driven supply chain efficiencies [28], reinforcing its potential to democratize advanced analytics for smaller enterprises.

8. Conclusions

The present study has established that Artificial Intelligence can coexist in a complementary manner, offering a robust framework that leverages computational strengths. It has been identified that agility and automation contribute to better quality and efficiency in predicting product demand and can significantly improve service companies’ ability to respond to needs.
It has proven to be especially beneficial for SMEs, facilitating a scalable and flexible approach without compromising data quality and certainty.
The LSTM model outperformed traditional methods (e.g., ARIMA) in accuracy (93.2%) and operational cost reduction (28–35% inventory cost savings), establishing a foundation for SME adoption.
As future work, clear guidance is being developed to aid the adoption and facilitate the implementation of this AI model. In addition, it is relevant to highlight that the results of this study identify future lines of research to access other lines where there are predictive elements.
This could include developing a set of better tools specifically to facilitate information management through data science. In addition, it would be valuable to explore the long-term impact of this AI integration on customer satisfaction and organizations’ ability to remain competitive and agile in an ever-evolving technology landscape.
The model outperformed ARIMA benchmarks in both accuracy (93.2%) and operational metrics (42% MAE reduction), validating its potential for SME adoption. Quantitative results reveal that its implementation could reduce stockouts by 35% and inventory costs by 28%, according to variance analysis (Table 3) and industry benchmarks [2,28]. Although the model shows sensitivity to irregular patterns (e.g., deviation of -195.74 units in Week 41) and is limited by the volume of data (156 records), its scalable architecture and Python implementation make it a practical tool for SMEs. Future enhancements would include integration with MRP/ERP systems and hybrid models (e.g., LSTM-SARIMA) to handle seasonality. This work provides a quantifiable and accessible framework for optimizing production planning, balancing accuracy, operational costs and adaptability in dynamic environments.
To extend the capabilities of the model, future work could explore the following: (1) the incorporation of multivariate data (commodity prices, economic indicators or weather) to improve accuracy in volatile environments; (2) integration with external real-time sources (supply chain APIs or IoT networks) that enable dynamic forecasts; (3) the development of hybrid systems (LSTM + transformers or physics-based models) to address current limitations in irregular patterns; and (4) the implementation of explainability mechanisms (SHAP, LIME) that facilitate adoption in SMEs by making predictions more interpretable. These extensions, along with scalability testing in real industrial environments, would bridge the gap between academic research and the operational needs of Industry 4.0.
Future iterations can focus on (1) the integration of seasonality via SARIMA-LSTM hybrids; (2) real-time external factor analysis using commodity APIs; (3) and dynamic error threshold adjustment based on market conditions.

Author Contributions

Conceptualization, J.A.O.T., A.M.S., J.R.G.-M., B.Y.L.-Z. and J.A.A.L.; methodology, J.A.O.T. and A.M.S.; software, J.A.O.T., A.M.S. and J.A.A.L.; validation, J.A.O.T., A.M.S. and J.R.G.-M.; formal analysis, J.A.O.T. and A.M.S.; investigation, J.A.O.T., A.M.S., J.R.G.-M. and J.A.A.L.; resources, J.A.O.T., A.M.S., J.R.G.-M., B.Y.L.-Z., J.A.M.L., O.J.R.Z. and J.A.A.L.; data curation, J.A.O.T., A.M.S., J.R.G.-M., B.Y.L.-Z. and J.A.A.L.; writing—original draft preparation, J.A.O.T., A.M.S., J.A.M.L., O.J.R.Z. and J.A.A.L.; writing—review and editing, J.A.O.T., A.M.S., J.R.G.-M. and B.Y.L.-Z.; visualization, J.A.O.T., A.M.S., J.R.G.-M., B.Y.L.-Z. and J.A.A.L.; supervision, J.A.O.T., A.M.S., J.A.M.L. and O.J.R.Z.; project administration, J.A.O.T. and A.M.S.; funding acquisition, J.A.O.T. and A.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

The publication is funded by the authors of the article.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to (specify the reason for the restriction).

Acknowledgments

We express our gratitude to the participants and researchers for their contributions to this study, as well as to the researchers from TecNM/Instituto Tecnológico de Tuxtla Gutiérrez, INAOE, Universidad Veracruzana, and other institutions for their valuable collaboration. Grammarly was used to enhance the paragraphs in this work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANNsArtificial Neural Networks
TecNMTecnológico Nacional de México
INAOEInstituto Nacional de Astrofísica, Óptica y Electrónica
CIDITCentro de Investigación, Desarrollo e Innovación Tecnológica
LSTMLong Short-Term Memory
GRUGated Recurrent Unit
NLPNatural Language Processing
MRPMaterial Requirements Planning
CNNConvolutional Neural Network
RNNRecurrent Neural Network
DEAData Envelopment Analysis
BWMBest–Worst Method
SARIMASeasonal Autoregressive Integrated Moving Average
VARVector Autoregressive Model
LASSOLeast Absolute Shrinkage and Selection Operator
STLSeasonal-Trend Decomposition based on Loess
SVRSupport Vector Regression
RFRandom Forest
DNNDeep Neural Network
EMExpectation–Maximization
MLMachine Learning
AIArtificial Intelligence
PMNNPhysics-informed Neural Network
ISAInstruction Set Architecture
RISC-VReduced Instruction Set Computer—V

References

  1. Ye, L.; Xie, N.; Shang, Z.; Boylan, J.E. Data-driven time-varying discrete grey model for demand forecasting. J. Oper. Res. Soc. 2025, 1–17. [Google Scholar] [CrossRef]
  2. Vijayapriya, R.; Arun, S.L.; Vengatesan, K.; Samee, S. Smart manufacturing supply chain process strategy using intelligent computation techniques. Int. J. Interact. Des. Manuf. (IJIDeM) 2024, 19, 681–694. [Google Scholar] [CrossRef]
  3. Fatima, S.S.W.; Rahimi, A. A review of time-series forecasting algorithms for industrial manufacturing systems. Machines 2024, 12, 380. [Google Scholar] [CrossRef]
  4. Kharfan, M.; Chan, V.W.K.; Efendigil, T.F. A data-driven forecasting approach for newly launched seasonal products by leveraging machine-learning approaches. Ann. Oper. Res. 2021, 303, 159–174. [Google Scholar] [CrossRef]
  5. Lu, R.; Bai, R.; Huang, Y.; Li, Y.; Jiang, J.; Ding, Y. Data-driven real-time price-based demand response for industrial facilities energy management. Appl. Energy 2021, 283, 116291. [Google Scholar] [CrossRef]
  6. Wei, B. Application of neural network based on multisource information fusion in production cost prediction. Wirel. Commun. Mob. Comput. 2022, 2022, 5170734. [Google Scholar] [CrossRef]
  7. Granell, P. Redes Neuronales Recurrentes: Una Aplicación para los Mercados Bursátiles. 2018. Available online: http://diposit.ub.edu/dspace/handle/2445/124249 (accessed on 10 February 2024).
  8. Matsumoto, M.; Komatsu, S. Demand forecasting for production planning in remanufacturing. Int. J. Adv. Manuf. Technol. 2015, 79, 161–175. [Google Scholar] [CrossRef]
  9. Beinabadi, H.Z.; Baradaran, V.; Komijan, A.R. Sustainable supply chain decision-making in the automotive industry: A data-driven approach. Socio-Econ. Plan. Sci. 2024, 95, 101908. [Google Scholar] [CrossRef]
  10. Chen, B.; Zhou, J.; Fang, H.; Xu, R.; Qu, W. Forecast of Tobacco Raw Material Demand Based on Combination Prediction Model. Scalable Comput. Pract. Exp. 2025, 26, 540–551. [Google Scholar] [CrossRef]
  11. Yazdanbakhsh, A. Forecasting trend changes of cement demand in the United States: An exploratory study. Results Eng. 2025, 25, 103859. [Google Scholar] [CrossRef]
  12. Sousa, M.S.; Loureiro, A.L.D.; Miguéis, V.L. Predicting demand for new products in fashion retailing using censored data. Expert Syst. Appl. 2025, 259, 125313. [Google Scholar] [CrossRef]
  13. Wang, H.; Zhao, H.; Zhan, Z.; Chen, H.; Li, M. Prediction model of material dynamic mechanical properties embedded with physical mechanism neural network. JOM 2025, 77, 39–49. [Google Scholar] [CrossRef]
  14. Ghorbani, A.; Nassir, N.; Lavieri, P.S.; Beeramoole, P.B.; Paz, A. Enhanced utility estimation algorithm for discrete choice models in travel demand forecasting. Transportation 2025, 1–28. [Google Scholar] [CrossRef]
  15. Rivera, F. Arquitectura de Dominio Especifico Para Redes Neuronales Recurrentes Utilizando la isa de risc-v. 2022. Available online: https://repositorio.uchile.cl/bitstream/handle/2250/184830/Arquitectura-de-dominio-especifico-para-redes-neuronales-recurrentes-utilizando-la-ISA-de-RISC-V.pdf?sequence=1&isAllowed=y (accessed on 5 June 2024).
  16. IBM. ¿Qué Son las Redes Neuronales? 2021. Available online: https://www.ibm.com/mx-es/topics/neural-networks (accessed on 10 June 2024).
  17. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. Available online: https://arxiv.org/pdf/1810.04805.pdf (accessed on 23 September 2024).
  18. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models are Unsupervised Multitask Learners. 2019. Available online: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (accessed on 11 March 2025).
  19. Redes Neuronales Recurrentes: Explicación Detallada|Codificando Bits. Codificando Bits. 2019. Available online: https://www.codificandobits.com/blog/redes-neuronales-recurrentes-explicacion-detallada/ (accessed on 1 October 2024).
  20. Lao-León, Y.O.; Rivas-Méndez, A.; Pérez-Pravia, M.C.; Marrero-Delgado, F. Procedimiento para el pronóstico de la demanda mediante redesneuronales artificiales. Cienc. Holguín 2017, 23, 43–59. [Google Scholar]
  21. Terán-Villanueva, J.D.; Ibarra-Martínez, S.; Laria-Menchaca, J.; Castán-Rocha, J.A.; Treviño-Berrones, M.G.; García-Ruiz, A.H.; Martínez-Infante, J.E. Estudio de redes neuronales para el pronóstico de la demanda de asignaturas. Rev. Fac. De Ing. 2019, 28, 34–43. [Google Scholar] [CrossRef]
  22. McKinsey Global Institute. Beyond Automation: How Gen AI is Reshaping Supply Chains. 2025. Available online: https://www.mckinsey.com/capabilities/operations/our-insights/beyond-automation-how-gen-ai-is-reshaping-supply-chains (accessed on 20 November 2024).
  23. Machine Learning Para la Prevision de la Demanda. (S. F.). Available online: https://imperiascm.com/es-es/blog/machine-learning-para-la-prevision-de-la-demanda (accessed on 3 November 2024).
  24. Bagnato, J.I. Aprende Machine Learning en español (Teoria, Practica); DW Books: Coopell, TX, USA, 2020; ISBN 978-84-09-25816-1. [Google Scholar]
  25. Hinojosa Gutierrez, A.P. El Lenguaje de Programacion Python, de Principio a Fin; Independently Published: Coopell, TX, USA, 2022; ISBN 9798-8405-6208-6. [Google Scholar]
  26. Martin del Brio, B.; Sanz, M.A. Redes Neuronales y Sistemas Borrosos, 3rd. ed.; AlfaOmega/RAMA: Madrid, Spain, 2007; ISBN 978-970-15-1258-0. [Google Scholar]
  27. Raschka, S.; Mirjalili, V. Python Machine Learning; AlfaOmega/Marcombo: Madrid, Spain, 2019; ISBN -978-64-267-2720-6. [Google Scholar]
  28. Jesús. ¿Por qué usar Python para Machine Learning? Tutoriales Dongee. 2023. Available online: https://www.dongee.com/tutoriales/por-que-usar-python-para-machine-learning/ (accessed on 7 April 2025).
Figure 1. Schematic of the input data processing and general elaboration process of the artificial neural network.
Figure 1. Schematic of the input data processing and general elaboration process of the artificial neural network.
Logistics 09 00130 g001
Figure 2. LSTM neural network structure.
Figure 2. LSTM neural network structure.
Logistics 09 00130 g002
Figure 3. Trend alignment and generalizability of the model.
Figure 3. Trend alignment and generalizability of the model.
Logistics 09 00130 g003
Table 1. Key advantages of ANNs in demand forecasting.
Table 1. Key advantages of ANNs in demand forecasting.
AspectDescription
Nonlinear modelingANNs can capture nonlinear relationships between input and output variables, which is crucial for modeling complex demand dynamics [18].
Multivariable input handlingANNs can process multiple input variables simultaneously, enabling the inclusion of diverse demand-influencing factors such as price, promotions, and economic indicators [16].
Adaptability to changing patternsANNs can adapt to evolving data patterns over time, accommodating changes due to seasonality, shifting market trends, or consumer behavior [19].
Robustness to noise and missing dataANNs can tolerate noisy and incomplete datasets, maintaining reliable performance even in the presence of real-world data imperfections [20].
Temporal dependency modelingANNs can learn from time-dependent structures, capturing how past observations influence future demand, which is fundamental for accurate forecasting [21].
Table 2. Table fragment containing product sales data expressed in units (total).
Table 2. Table fragment containing product sales data expressed in units (total).
Sales Per Week
Year Week Number Week Total
20211Week 11119
20212Week 21105
20213Week 31020
20214Week 4889
20215Week 5918
Table 3. Weekly sales vs. LSTM forecast: deviations (2022–2023).
Table 3. Weekly sales vs. LSTM forecast: deviations (2022–2023).
Comparative Table of Actual Sales vs. RNA Forecast
Week 2022 2023 Forecast Diff 2022 Diff 2023
1110012001070.6329.37129.37
28669721054.11−188.11−82.11
394610511061.75−115.75−10.75
412229941061.91160.09−67.91
592711071064.29−137.2942.71
6103910581055.08−16.082.92
710669521053.3012.70−101.30
88799881062.73−183.73−74.73
996411161065.73−101.7350.27
10121110781058.22152.7819.78
119199391051.58−132.58−112.58
12106310451063.44−0.44−18.44
13108010111063.6816.32−52.68
1498911371068.76−79.7668.24
15114110201063.0077.99−43.00
1699510621069.61−74.61−7.61
1793811001076.14−138.1423.86
1894511211078.74−133.7442.26
19118711781075.58111.42102.42
20105511131063.94−8.9449.06
21117510081058.46116.54−50.46
22103310081060.43−27.43−52.43
23103410941069.08−35.0824.92
2499210101065.61−73.61−55.61
25113911791073.6265.38105.38
26112010071059.7160.29−52.71
27106511191065.11−0.1153.89
2811179551056.4060.60−101.40
29104511081066.34−21.3441.66
30110910001057.6151.39−57.61
31113110931060.6070.4032.40
329699571055.57−86.57−98.57
33105310451067.57−14.57−22.57
3498210731068.17−86.174.83
35116311471070.0692.9476.94
3611479871061.5085.50−74.50
37125111261068.63182.3757.37
38101110331060.05−49.05−27.05
39109110621061.0030.001.00
4010909961062.0027.99−66.00
4187310611068.74−195.74−7.74
42114111281066.5974.4161.41
4399610571056.34−60.340.66
4410499681046.122.88−78.12
4510519971054.33−3.33−57.33
4612048871058.07145.93−171.07
47114612041073.7072.30130.30
48104210411057.76−15.76−16.76
4994110151050.19−109.19−35.19
Total positive difference1699.591121.66
Total negative difference−2089.18−1596.25
Table 4. Error thresholds for SMEs.
Table 4. Error thresholds for SMEs.
IndustryAcceptable DeviationCost of ExcessRenueve Loss Risk
Automotive<25%15–20%30–35%
Electronics<30%10–18%25–40%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Torres, J.A.O.; Santiago, A.M.; García-Martínez, J.R.; López-Zapata, B.Y.; Mijangos López, J.A.; Rincón Zapata, O.J.; Avitia López, J.A. A Data-Driven Approach Using Recurrent Neural Networks for Material Demand Forecasting in Manufacturing. Logistics 2025, 9, 130. https://doi.org/10.3390/logistics9030130

AMA Style

Torres JAO, Santiago AM, García-Martínez JR, López-Zapata BY, Mijangos López JA, Rincón Zapata OJ, Avitia López JA. A Data-Driven Approach Using Recurrent Neural Networks for Material Demand Forecasting in Manufacturing. Logistics. 2025; 9(3):130. https://doi.org/10.3390/logistics9030130

Chicago/Turabian Style

Torres, Jorge Antonio Orozco, Alejandro Medina Santiago, José R. García-Martínez, Betty Yolanda López-Zapata, Jorge Antonio Mijangos López, Oscar Javier Rincón Zapata, and Jesús Alejandro Avitia López. 2025. "A Data-Driven Approach Using Recurrent Neural Networks for Material Demand Forecasting in Manufacturing" Logistics 9, no. 3: 130. https://doi.org/10.3390/logistics9030130

APA Style

Torres, J. A. O., Santiago, A. M., García-Martínez, J. R., López-Zapata, B. Y., Mijangos López, J. A., Rincón Zapata, O. J., & Avitia López, J. A. (2025). A Data-Driven Approach Using Recurrent Neural Networks for Material Demand Forecasting in Manufacturing. Logistics, 9(3), 130. https://doi.org/10.3390/logistics9030130

Article Metrics

Back to TopTop