Article

Monthly Temperature Prediction in the Han River Basin, South Korea, Using Long Short-Term Memory (LSTM) and Multiple Linear Regression (MLR) Models

Department of Hydro Science and Engineering Research, Korea Institute of Civil Engineering and Building Technology, 283, Goyang-daero, Goyang-si 10223, Republic of Korea
*
Author to whom correspondence should be addressed.
Water 2026, 18(1), 98; https://doi.org/10.3390/w18010098
Submission received: 4 November 2025 / Revised: 19 December 2025 / Accepted: 30 December 2025 / Published: 31 December 2025
(This article belongs to the Section Hydrology)

Abstract

This study compares and evaluates the performance of a statistical model, Multiple Linear Regression (MLR), and a deep learning model, Long Short-Term Memory (LSTM), for predicting monthly mean temperature in the Han River Basin, South Korea. Predictor variables were dynamically selected based on lagged correlation analysis between climate indices and temperature over the past 40 years, identifying the top ten variables with the highest correlations for lag times ranging from 1 to 18 months. The MLR model was developed through stepwise regression with cross-validation, while the LSTM model was constructed using an 18-month input sequence to capture temporal dependencies in the data. Model performance was evaluated using percent bias (PBIAS), Nash–Sutcliffe efficiency (NSE), Pearson’s correlation coefficient (r), and tercile-based probability metrics. Both models reproduced the seasonal variability of monthly temperature with high accuracy (NSE > 0.97, r > 0.98). The LSTM model showed slightly higher predictive skill in several periods but also exhibited larger prediction variance, reflecting the sensitivity of nonlinear architectures to variations in predictor–response relationships. In contrast, the MLR model demonstrated more stable predictive behavior with narrower uncertainty bounds, particularly under low signal-to-noise conditions, owing to its structural simplicity. These findings indicate that the two approaches are complementary; the LSTM model better captures nonlinear temporal dynamics, while the MLR model provides interpretability and robustness. Future work will explore advanced hybrid architectures such as CNN–LSTM and Transformer-based models, as well as multi-model ensemble methods, to further enhance the accuracy and reliability of medium-range temperature prediction.

1. Introduction

Monthly mean temperature is a key indicator reflecting long-term climate trends and seasonal variability, directly influencing various hydrologic and environmental processes such as evapotranspiration, soil moisture dynamics, drought onset and propagation, snow accumulation and melt, and basin-scale water balance. Reliable monthly temperature forecasts therefore play an essential role not only in practical water management—including agricultural production, reservoir operation, irrigation planning, and water-supply management—but also in broader socio-environmental applications such as energy-demand planning, ecosystem seasonality assessment, and climate disaster preparedness.
The Han River Basin, which encompasses Seoul and the greater metropolitan area, is characterized by a mixture of mountains, urban areas, and river systems, along with high population and economic density [1]. Thus, accurately estimating basin-averaged temperature is of substantial importance as foundational information for water supply regulation, drought preparedness, hydropower management, environmental conservation, and climate-adaptation planning. In South Korea, temperature exhibits pronounced seasonality and variability due to the nation’s mid-latitude location, complex mountainous terrain, and the influence of the East Asian monsoon. These characteristics lead to substantial year-to-year fluctuations driven by global climate modes such as the El Niño–Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), and Arctic Oscillation (AO) [2,3,4,5,6]. Unlike climate change attribution studies focusing on long-term trends (decades), this work addresses the distinct challenge of subseasonal-to-seasonal temperature predictability, critical for operational water management yet underexplored in Korean basins.
To capture these regional and seasonal features, both physically based climate models and statistical approaches have been widely employed for temperature prediction. Physical models—including operational seasonal forecasting systems such as ECMWF-SEAS5—explicitly simulate coupled ocean–atmosphere processes, but they require substantial computational resources and show limited forecast skill in mid-latitude regions [7,8]. In contrast, traditional statistical approaches such as Autoregressive Integrated Moving Average (ARIMA), Seasonal ARIMA (SARIMA), and Multiple Linear Regression (MLR) have been commonly applied in climate and temperature forecasting [9,10]. Although statistically based models offer interpretability and ease of implementation, they have well-known limitations in representing nonlinear interactions among climate variables and long-term dependencies [11,12].
To overcome these limitations, Long Short-Term Memory (LSTM) deep learning models have gained increasing attention in meteorological and hydrological forecasting [13,14]. Several recent studies have demonstrated that LSTM models using global climate indices—such as ENSO, PDO, and the North Atlantic Oscillation (NAO) provided by the National Oceanic and Atmospheric Administration (NOAA)—achieve higher predictive skill than traditional statistical models [15,16,17]. However, LSTM models require large training datasets and involve practical challenges related to overfitting prevention and hyperparameter configuration [13]. Such improvements are not uniform across regions, seasons, or forecast horizons, and the increased complexity of deep learning models introduces practical challenges related to data requirements, overfitting control, computational cost, and operational interpretability.
Moreover, when extended lag structures and multiple climate indices are employed, statistically significant associations may arise even in the absence of robust physical causality. While such associations may still contribute to forecast skill, they require careful interpretation, particularly in applied forecasting contexts. From a water-resources management perspective, the key challenge is not to explain climate processes, but to assess whether usable and stable forecast skill can be obtained at decision-relevant lead times under realistic data and operational constraints.
In practical climate- and water-resource planning, securing lead times of several months to a year is particularly important, and predictors with meaningful statistical relationships may support such extended forecasts even in the absence of explicit physical causality. From this perspective, approaches that emphasize predictive skill—rather than mechanistic interpretation—remain valuable, especially for basins where long-lead physical predictability is inherently limited. Indeed, many prior studies have used statistical associations rather than explicit physical causality to conduct long-term climate predictions [3,5,18,19,20,21,22,23,24,25,26,27,28,29], demonstrating the practical utility of exploratory, skill-focused prediction approaches.
Motivated by this background, the present study evaluates the predictive performance and stability of two contrasting modeling approaches—MLR and LSTM—for basin-averaged monthly temperature forecasting in the Han River Basin. The primary contributions are (1) an MLR-LSTM comparison revealing complementary operational strengths for 1–12-month water management lead times; (2) basin-specific predictability assessment for Korean hydroclimatic planning; (3) uncertainty characterization enabling risk-informed decision-making. These address underexplored gaps in regional water-resource forecasting where climate-change studies dominate but operational predictability remains limited. Rather than seeking to establish the universal superiority of one method over another, the analysis focuses on how model complexity, predictor-lag structure, and uncertainty characteristics influence forecast behavior across seasons and lead times relevant to water-resources planning. By examining both models within a unified framework, this study aims to provide practical insights into the conditions under which relatively simple statistical models or more flexible deep-learning approaches may be appropriate for applied hydroclimatic forecasting.

2. Materials and Methods

2.1. Study Area and Data

The Han River Basin, the target area of this study, is one of South Korea’s most critical water resource areas, characterized by a high concentration of population and industrial activities in the Seoul metropolitan region. As the economic and administrative center of the country, variations in air temperature exert a substantial influence across social and economic sectors [1]. According to long-term climate records from the Korea Meteorological Administration (KMA), the mean air temperature over the Korean Peninsula has exhibited a consistent upward trend over the past several decades. In particular, the frequency and intensity of extreme temperature events—such as summer heatwaves and winter cold spells—have been increasing [30,31,32].
These evolving climatic characteristics have significant implications for water resources management [33], energy demand [34], and agricultural productivity [35]. Accordingly, reliable mid- to long-term prediction of monthly temperature serves as an essential foundation for climate change adaptation and disaster risk management planning. Figure 1 illustrates the spatial distribution of the Han River Basin, which covers a total area of approximately 41,947 km². Table 1 summarizes the latitude, longitude, and elevation information of the meteorological stations located within and around the target basin.
As candidate predictors, 37 types of monthly global climate indices provided by the National Oceanic and Atmospheric Administration (NOAA)—including ENSO, NAO, PDO, and AO (Arctic Oscillation)—were utilized. These indices largely follow those employed in Kim et al. [27,28], with the addition of AMO5 to broaden the representation of large-scale climate variability (see Table 2). In parallel, eight local meteorological variables representing the study area were used: precipitation, temperature, mean wind speed, relative humidity, mean sea-level pressure, sunshine duration, total cloud cover, and pan evaporation. The basin-wide monthly values of these eight variables were derived from daily observations at 31 Automated Synoptic Observing System (ASOS) stations, which were selected from the stations used in Kim et al. [27] after excluding three locations based on considerations of basin representativeness (see Figure 1). The spatial averaging was performed using the Thiessen polygon area-weighted method. Table 2 summarizes the global climate indices and local meteorological factors used as predictor variables in this study.

2.2. Preprocessing of Climate Indices

In this study, we first performed a systematic imputation of missing values in the climate indices used as predictors to ensure stable monthly temperature forecasting. Since the climate indices are provided as monthly time series, an appropriate interpolation technique was selected by considering both the proportion of missing values and the length of missing intervals for each index. To this end, four methods were applied: STL-based interpolation (seasonal-trend decomposition using LOESS), simple linear interpolation, the Kalman filter, and ARIMA-based interpolation.
The STL-based method decomposes the time series into seasonal, trend, and remainder components, interpolates the seasonal and trend components separately, and then reconstructs the series, thereby preserving the underlying seasonal pattern while smoothly filling the gaps [10]. Simple linear interpolation, which connects two adjacent observations with a straight line, is the most basic approach and is suitable for series with moderate variability. The Kalman filter estimates missing values within a state-space framework, enabling simultaneous consideration of dynamic trends and noise, which makes it advantageous for highly variable indices [36]. ARIMA-based interpolation exploits the autocorrelation structure of the series to predict missing segments and performs robustly when gaps span longer periods. As a reference, the mean value of the same calendar month over the surrounding 10-year period was also computed; among the four interpolated estimates, the one with the smallest deviation from this climatological reference was selected as the final imputed value.
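As an illustration, the candidate-and-reference gap-filling logic can be sketched as follows (a minimal NumPy sketch, not the paper's implementation: only two candidate estimators — linear interpolation and persistence — stand in for the four methods compared in the paper, and the function name `impute_missing` is hypothetical):

```python
import numpy as np

def impute_missing(series, months):
    """Fill NaNs in a monthly series by generating candidate estimates and
    keeping the one that deviates least from the climatological mean of the
    same calendar month (the reference described in the paper).

    Only two candidates are sketched here (linear interpolation and
    persistence); the paper also compares STL-, Kalman-filter-, and
    ARIMA-based estimates.
    """
    x = np.asarray(series, dtype=float)
    months = np.asarray(months)
    idx = np.arange(len(x))
    missing = np.isnan(x)

    # Candidate 1: simple linear interpolation across each gap
    linear = np.interp(idx, idx[~missing], x[~missing])

    filled = x.copy()
    for i in idx[missing]:
        # Candidate 2: persistence (last observed value before the gap)
        prev = x[:i][~missing[:i]]
        persist = prev[-1] if prev.size else linear[i]
        # Reference: climatological mean of the same calendar month
        clim = x[(months == months[i]) & ~missing].mean()
        # Keep the candidate closest to the climatological reference
        filled[i] = min((linear[i], persist), key=lambda v: abs(v - clim))
    return filled
```

In practice the candidate list would also include the STL, Kalman-filter, and ARIMA estimates, and the climatology would be restricted to the surrounding 10-year window as described above.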
After completing the imputation process, both the climate indices and the target variable were normalized to standardize the range of model inputs. Because the indices differ in units and variability, min-max normalization was applied to ensure comparability and to prevent any individual index from exerting disproportionate influence during model training. This normalization step is essential for enhancing parameter stability and improving training efficiency in both the MLR and LSTM models.
For the target variable—monthly mean temperature—we employed temperature anomalies rather than raw values due to the strong seasonal cycle. The anomalies were computed by subtracting the long-term monthly climatology from the observed temperature values. Using anomalies removes the seasonal periodicity and allows the models to more explicitly capture co-variability with climate indices. It also prevents the models from overfitting to recurrent seasonal patterns, thereby improving the robustness and overall predictive skill.
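These two preprocessing steps — min-max normalization of the inputs and conversion of the target to monthly anomalies — can be sketched as follows (assuming NumPy; the function names are illustrative):

```python
import numpy as np

def minmax_scale(x):
    """Min-max normalization to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def monthly_anomaly(values, months):
    """Subtract each calendar month's long-term mean (the climatology),
    removing the seasonal cycle from the target series."""
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    anom = values.copy()
    for m in np.unique(months):
        sel = months == m
        anom[sel] -= values[sel].mean()
    return anom
```

A purely seasonal signal therefore maps to an anomaly series of zeros, which is exactly the property that lets the models focus on interannual co-variability with the climate indices.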

2.3. Selection of Predictors

Unlike most previous studies [37,38,39,40] that employed a fixed set of predictors throughout the entire analysis period, this study adopted a flexible predictor-selection framework in which the predictors were dynamically determined according to both the prediction time and the target month [27,28]. This approach reflects the fact that the influence of large-scale climate indices on regional temperature is neither constant over time nor uniform across seasons. Teleconnection patterns often exhibit time-varying strengths and lagged responses, and thus a static set of predictors cannot fully capture the evolving climate–temperature relationships.
To incorporate these dynamics, correlation analyses were first conducted between monthly temperature and multiple global climate indices over the past 40 years, using each prediction time as the reference point. To account for lagged teleconnection effects, correlation coefficients were calculated for each climate index with lag times ranging from 1 to 18 months. This enabled the identification of climate signals that exert their strongest influence not only contemporaneously but also with delayed responses, which is critical for forecasting monthly temperature several months ahead.
Following the lagged-correlation analysis, the ten climate indices exhibiting the strongest associations for each prediction time and target month were initially selected as candidate predictors. Because correlation-based selection inherently carries the risk of spurious relationships arising from multiple testing across 37 indices and 18 lags, three safeguards were implemented: (1) dynamic, lead-time-specific selection; (2) Variance Inflation Factor (VIF) screening; and (3) cross-validation ensembles. This procedure ensured that the final predictor set consisted of indices providing independent, non-redundant information to the forecasting models. Compared with traditional fixed-variable approaches, this dynamic strategy allows the models to reflect year-to-year changes in teleconnection strength, captures season-specific climate drivers more effectively, and excludes irrelevant or mutually redundant predictors that may degrade model stability and forecasting skill. It is emphasized that the selected predictors and lag structures should be interpreted as statistical contributors to forecast skill, not as evidence of direct physical forcing or causal dominance; the primary objective of this selection strategy is to enhance predictive stability and robustness over long forecast horizons.
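A compact sketch of this screening chain — lagged correlations ranked by absolute value, followed by a VIF check — is given below (assuming NumPy; the VIF cutoff of 10 is a common rule of thumb, not a value stated in the paper, and the function names are hypothetical):

```python
import numpy as np

def lagged_corr(index_series, target, lag):
    """Pearson correlation between a climate index lagged by `lag` months
    (lag >= 1) and the target temperature series."""
    x = np.asarray(index_series, dtype=float)[:-lag]
    y = np.asarray(target, dtype=float)[lag:]
    return np.corrcoef(x, y)[0, 1]

def vif(X):
    """Variance inflation factors from the inverse of the predictor
    correlation matrix (valid for standardized predictors)."""
    R = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(R))

def select_predictors(candidates, target, lags, n_keep=10, vif_max=10.0):
    """Rank all (index, lag) pairs by |correlation|, keep the strongest
    n_keep, then drop members whose VIF exceeds vif_max."""
    scored = [(abs(lagged_corr(s, target, lag)), name, lag)
              for name, s in candidates.items() for lag in lags]
    top = sorted(scored, reverse=True)[:n_keep]
    # Align every lagged series on the common overlap before the VIF check
    T, max_lag = len(target), max(lag for _, _, lag in top)
    X = np.column_stack([np.asarray(candidates[name], dtype=float)[max_lag - lag:T - lag]
                         for _, name, lag in top])
    keep = vif(X) <= vif_max
    return [(name, lag) for (_, name, lag), ok in zip(top, keep) if ok]
```

In the study this selection is re-run for every prediction time and target month, so the surviving (index, lag) pairs change with the forecast context.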
Based on these analyses, for each prediction time and target month (i.e., 1–12 months ahead of the prediction point), the ten climate indices showing the highest correlations were selected as predictors for temperature forecasting. Figure 2a,b illustrate examples of lagged-correlation analysis results for forecasting the temperature in January and July 2020, respectively. Each figure presents the correlations between the past 40 years of observed monthly temperature and climate index data at various lag times (1–18 months). Red and blue colors indicate positive and negative correlations, respectively, whereas gray areas denote missing data for which correlations could not be computed. Correlation coefficients with absolute values greater than 0.4 are annotated numerically on the figures.
As shown in Figure 2a, the 5-month-lagged AvgSLP index exhibited the strongest correlation (r = −0.52), indicating that February AvgSLP during 1982–2021 was most strongly associated with July temperatures of the same years. The next strongest correlations were found for the 16-month-lagged WND (+0.49) and WHWP (+0.47). In Figure 2b, the 5-month-lagged POL index showed the strongest negative correlation (r = −0.55), followed by the 7-month-lagged SLP_IND (−0.51) and the 14-month-lagged TMP (+0.50).
Figure 3 summarizes the climate indices that exhibited the strongest correlations with monthly temperature over the study period. For each month, mean absolute correlation values were computed across all lag times, and the ten indices with the highest mean absolute values were selected. The results indicate that GML, TNI, and NINO1+2 were most strongly associated with January temperatures, whereas AMO5, GML, and WND showed the strongest correlations with July temperatures. Indices such as AMO, AMO5, SmallEV, WHWP, and WND generally displayed strong positive correlations across most months, while SLP_IND and SLP_EEP exhibited predominantly negative correlations.
However, as noted by Kim et al. [27,28], the strength and direction of these correlations may vary depending on the analysis period and the lag time of each climate index. Therefore, in this study, the climate indices showing strong correlations were flexibly selected according to the prediction time and target month to improve the adaptability and predictive performance of the models.

2.4. Development of the LSTM-Based Model

The LSTM-based monthly temperature prediction model was implemented in R using the keras and tensorflow packages [41,42]. To ensure reproducibility, random seeds were set as needed throughout the experimental workflow [43].
The input dataset consisted of ten dynamically selected climate predictors, from which a rolling window of the previous 18 months was constructed. The upper bound of the lag windows was constrained to this range to balance potential long-range climate memory against rapidly diminishing statistical robustness and increasing risk of chance correlations at longer lags, while remaining consistent with planning horizons commonly used in drought preparedness and long-term water-resources management. This window was reshaped into a three-dimensional tensor (batch size × time steps × features), enabling the LSTM to capture temporal dependencies and long-term interactions among the predictors [44,45].
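The rolling-window construction can be sketched as follows (assuming NumPy; the resulting samples × time steps × features tensor is what a Keras LSTM layer consumes, and the function name is illustrative):

```python
import numpy as np

def make_sequences(features, target, window=18):
    """Turn a (time, n_features) matrix into LSTM input of shape
    (samples, window, n_features), each sample paired with the target
    value at the time step immediately following its window."""
    features = np.asarray(features, dtype=float)
    target = np.asarray(target, dtype=float)
    X = np.stack([features[t - window:t] for t in range(window, len(target))])
    y = target[window:]
    return X, y
```

With ten predictors and an 18-month window, each training sample is therefore an 18 × 10 slice of the predictor history.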
Given the substantial computational cost associated with training recurrent neural networks under ensemble settings, extensive hyperparameter optimization was not pursued. Instead, a parsimonious architecture was adopted to ensure reproducibility and operational feasibility. The final structure consisted of a single LSTM layer with 16 units, which provided a reasonable balance between model complexity and computational feasibility. To mitigate overfitting under this compact architecture, input dropout (0.2) and recurrent dropout (0.1) were incorporated, both of which are widely used regularization techniques in recurrent neural network training [46,47]. A dense output layer with a linear activation function was used to generate the final temperature prediction.
The model was trained using the Adam (Adaptive Moment Estimation) optimizer, selected for its stable convergence properties in nonlinear time series learning [48]. Mean squared error (MSE) was adopted as the loss function, consistent with continuous regression tasks.
To address predictive uncertainty, an ensemble forecasting strategy was employed [49]. This ensemble-based approach explicitly acknowledges stochastic variability arising from random initialization, dropout regularization, and data resampling, and allows uncertainty characteristics to be quantified rather than suppressed. For each target month, 600 candidate LSTM models were trained using cross-validation, each initialized with different random seeds and slightly varied training subsets. Because training costs scale directly with ensemble size, 600 models were set as a practical upper limit under the available computational resources. From these candidates, the best-performing 300 models were selected based on RMSE, MAE, and percent bias (PBIAS). The final ensemble prediction was computed as the average of these 300 models, enhancing robustness and reducing sensitivity to initialization, data splits, and stochastic training effects.
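The select-then-average step can be sketched as follows (assuming NumPy; ranking here uses RMSE alone for brevity, whereas the paper also screens on MAE and PBIAS, and the function name is illustrative):

```python
import numpy as np

def ensemble_mean_of_best(preds, obs, keep=300):
    """Rank candidate-model predictions (n_models x n_times) by RMSE
    against the observations and average the best `keep` members."""
    preds = np.asarray(preds, dtype=float)
    obs = np.asarray(obs, dtype=float)
    rmse = np.sqrt(np.mean((preds - obs) ** 2, axis=1))
    best = np.argsort(rmse)[:keep]
    return preds[best].mean(axis=0)
```

Averaging over the retained members damps the run-to-run variability introduced by random initialization, dropout, and data resampling.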
Importantly, the selection of the ten input predictors for each target period was also dynamic; feature selection was based on lagged correlation analysis between the target month and leading climate indices [27,50,51]. For example, to forecast temperature in July 2020 using data up to December 2019 (a seven-month lag), predictors were chosen by evaluating lagged correlations with the target over 7–18 months (see Figure 4).

2.5. Development of the MLR-Based Model

The multiple linear regression (MLR) model was formulated following the approach proposed by Kim et al. [27,28], expressed as
Y = β₀ + Σᵢ₌₁¹⁰ βᵢXᵢ + ε
where Y represents the target monthly temperature (°C), β₀ and βᵢ denote the regression coefficients, Xᵢ are the selected climate indices, and ε is the residual term.
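With the intercept appended as a constant column, fitting this regression reduces to an ordinary least-squares problem (a minimal NumPy sketch with illustrative function names; the paper's implementation additionally applies stepwise selection and cross-validation):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares for Y = b0 + sum_i(bi * Xi) + eps.
    Returns the coefficient vector [b0, b1, ..., bk]."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_mlr(coef, X):
    """Apply fitted coefficients to a predictor matrix."""
    return coef[0] + np.asarray(X, dtype=float) @ coef[1:]
```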
Predictor selection was based on lagged correlation analysis, consistent with previous studies [52,53] demonstrating the cumulative or delayed effects of climate indices on temperature over various time lags. Correlation analyses were conducted between the monthly temperature and multivariate climate indices lagged from 1 to 18 months, using a 40-year historical dataset (as depicted in Figure 2). Ten climate indices showing statistically significant correlations were chosen as predictors.
Model construction employed cross-validation-based stepwise regression. For each target month, the 40-year dataset was randomly split into 20 years for model training and 20 years for validation. A total of 1000 MLR models were generated and evaluated based on the coefficient of determination (R2), Nash–Sutcliffe Efficiency (NSE) [54], ratio of RMSE to the standard deviation of the observations (RSR), and multicollinearity diagnostics. Only models meeting the statistical adequacy criteria were retained.
The optimal model selection procedure followed that of Kim et al. [28], incorporating random sampling-based cross-validation. As a result, although the same period was used, minor variations in the final model configurations may arise across different runs due to stochastic sampling effects.

3. Results

3.1. Monthly Temperature Prediction

Figure 5 and Figure 6 show the monthly temperature forecasts for 2010 and 2020, generated from the predictions issued in December 2009 and December 2019, respectively. In each figure, the red boxplots represent the distribution of predicted values, illustrating the forecast uncertainty, while the shaded gray areas and thick blue lines denote the 30-year historical distribution and the corresponding climatological median. The blue dots indicate the observed temperatures. In Figure 5, the January forecast corresponds to a 1-month lead time, increasing to a 12-month lead for December. Figure 6 follows the same structure for the forecasts targeting conditions in 2020.
As shown in Figure 5 and Figure 6, the LSTM model produces wider prediction intervals than the MLR model. This is largely due to the stochastic components inherent in LSTM training—such as random weight initialization, dropout, and mini-batch optimization—which introduce variability across repeated runs. In addition, because the LSTM processes sequential information to capture long-term dependencies, small stepwise errors can accumulate over time, contributing further to the spread of ensemble forecasts. These characteristics are consistent with previous findings that stochastic training mechanisms in deep learning can increase predictive variance [55].
Furthermore, due to computational constraints, this study was unable to explore a wide range of LSTM architectures or hyperparameter configurations, which may also have influenced the width of the prediction intervals. Therefore, the broader ranges observed in Figure 5 and Figure 6 are better interpreted as variability arising from the specific LSTM training procedure and model configuration used in this study, rather than as evidence of intrinsic climatic uncertainty. More rigorous uncertainty attribution would require additional analyses, such as structural sensitivity tests, hyperparameter perturbation experiments, and the application of Bayesian or probabilistic deep learning methods.
Figure 7 and Figure 8 present the observed and predicted monthly temperatures for the full analysis period (1991–2024). The gray lines represent forecasts issued with lead times ranging from 1 to 12 months, and the red line denotes the observations. While forecast discrepancies appear across lead times, both models reproduce the dominant seasonal cycle and major interannual fluctuations. Despite their overall similarity, notable periods of differential performance are evident; the LSTM model more accurately captures temperature variability during summer 1994, winter 1999, and winter 2000, whereas the MLR model shows comparatively better skill in summer 1995, summer 1996, and winter 2012. These differences reflect the distinct structural characteristics of the two modeling approaches rather than systematic superiority.

3.2. Comparison of Predictive Performance by Lead Time

Figure 9 presents the predictive performance of monthly temperature forecasts for the period 1991–2023 as a function of forecast lead time. Model performance was evaluated using three common statistical indicators: percent bias (PBIAS) for systematic bias, Nash–Sutcliffe efficiency (NSE) for overall model efficiency, and the Pearson correlation coefficient (r) for the linear association between observed and predicted values [56,57]. The LSTM model demonstrated robust predictive performance, with PBIAS ranging from −0.2% to +0.9%, NSE of 0.97, and r between 0.98 and 0.99. The MLR model showed comparable performance, with PBIAS between −1.7% and −1.1%, NSE of 0.98, and r of 0.99. These high values indicate that monthly temperature, characterized by pronounced seasonality and autocorrelation, maintains a stable statistical structure even for long-term forecasting [58].
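The three skill scores can be computed as follows (a NumPy sketch; note that PBIAS sign conventions vary across studies — under the convention sketched here, positive values indicate over-prediction):

```python
import numpy as np

def pbias(obs, sim):
    """Percent bias; positive = over-prediction under this convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    """Pearson correlation between observed and simulated series."""
    return np.corrcoef(np.asarray(obs, float), np.asarray(sim, float))[0, 1]
```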
Figure 10 illustrates the tercile forecast probabilities according to the lead time, which are widely used indicators in probabilistic climate prediction [59]. The observed temperatures for each month were categorized into lower, middle, and upper terciles based on the 30-year climatological record, and the probability that the forecasted month fell within each category was calculated. When compared with the random-expectation baseline of 33.3%, both LSTM and MLR models exceeded this threshold across all lead times, confirming the reliability of their forecasts. In certain lead periods, the MLR model even showed slightly higher forecast probabilities, indicating that simple linear regression-based models can still provide competitive performance relative to deep learning approaches.
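The tercile verification can be sketched as follows (assuming NumPy; tercile bounds come from the reference climatology, and the resulting hit rate is compared against the 33.3% random baseline — the function name is illustrative):

```python
import numpy as np

def tercile_hit_rate(climatology, obs, forecasts):
    """Fraction of forecasts falling in the same tercile category
    (lower / middle / upper) as the corresponding observation."""
    # Tercile bounds from the 30-year reference climatology
    lo, hi = np.percentile(np.asarray(climatology, float), [100 / 3, 200 / 3])

    def categorize(v):
        # 0 = lower, 1 = middle, 2 = upper tercile
        return np.digitize(np.asarray(v, float), [lo, hi])

    return float(np.mean(categorize(obs) == categorize(forecasts)))
```

A hit rate consistently above 1/3 across lead times is what Figure 10 reports for both models.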
In general, the LSTM model, as a recurrent neural network designed to capture long-term dependencies in time series data, has been widely applied in meteorological and hydrological forecasting because of its ability to model nonlinear climatic signals [14,41]. However, in cases where data availability is limited or noise levels are high, overfitting and instability can occur [43], and in such cases, MLR models may yield more stable outcomes [60,61]—as also observed in this study. Specifically, the LSTM model produced broader uncertainty ranges, while the MLR model demonstrated more consistent predictions. This finding implies that nonlinear deep learning models are not universally superior; instead, depending on data characteristics and the signal-to-noise ratio (SNR), traditional statistical regression models can serve as efficient and reliable alternatives.

4. Discussion

This study examined the performance of Multiple Linear Regression (MLR) and Long Short-Term Memory (LSTM) models for monthly temperature forecasting in the Han River Basin. Both approaches demonstrated meaningful predictive skill while exhibiting distinct characteristics in terms of variability, robustness, and uncertainty. Rather than competing, MLR and LSTM exhibit complementary strengths; MLR offers stability and transparency for operational use, while LSTM captures nonlinear variability during anomalous periods. The results are not intended to establish the general superiority of one modeling paradigm over another, but rather to illustrate how different model structures respond to the same predictor information under long-range forecasting horizons.
The LSTM model showed enhanced responsiveness during periods characterized by stronger variability and rapid temperature transitions, reflecting its ability to represent nonlinear relationships and temporal dependencies. However, its forecasts were accompanied by wider prediction intervals, attributable to stochastic components inherent in neural-network training, including random weight initialization, dropout regularization, mini-batch optimization, and the accumulation of errors over sequential inputs. Computational constraints further limited the exploration of alternative architectures and hyperparameter configurations, which likely influenced the magnitude of uncertainty observed.
In contrast, the MLR model produced comparatively stable forecasts with narrower uncertainty bounds, due to its deterministic formulation and reduced parameter space. Although its linear structure constrains its ability to capture complex climate–temperature interactions, the consistency of its performance highlights the continued relevance of simple and interpretable models under operational constraints. Importantly, the identified predictor–lag relationships should be interpreted as empirical constructs that enhance forecast skill rather than as indicators of physical causality. The strength and sign of statistical associations between climate indices and temperature were found to vary with forecast initialization and target season, suggesting that no single index–lag combination can be interpreted as universally dominant. Within a prediction-oriented framework, such relationships are primarily evaluated based on their contribution to forecast skill and stability, rather than their ability to represent specific dynamical mechanisms.
Overall, model selection should be aligned with operational constraints rather than with methodological sophistication: the MLR model is well suited to stable baseline forecasting, whereas the LSTM model is preferable during variability-sensitive periods, a division that ensures practical utility in water management.

5. Conclusions

This study evaluated the performance of a statistical regression model (MLR) and a deep-learning model (LSTM) for forecasting the basin-averaged monthly temperature in the Han River Basin using large-scale climate indices and regional meteorological variables. Predictor selection was based on lagged statistical relationships, and model performance was assessed within an ensemble-based framework to account for forecast uncertainty.
Both models successfully reproduced the observed seasonal cycle and demonstrated high predictive skill across multiple forecast horizons. While the LSTM model captured nonlinear variability and exhibited improved performance during certain periods, the MLR model consistently produced stable forecasts with narrower uncertainty bounds. These results indicate that increased model complexity does not automatically translate into superior performance and that simpler statistical approaches may remain highly effective for specific forecasting objectives.
From an operational perspective, the proposed forecasting framework can be directly incorporated into water resource management by providing probabilistic temperature outlooks at planning-relevant lead times, which can be used as inputs for drought risk assessment, reservoir operation rule curves, and scenario-based water allocation planning.
The empirical predictor relationships employed in this study provide practical value for long-lead temperature forecasting, particularly in applied contexts where physically based predictability is limited and operational constraints are significant. By highlighting the complementary behavior of MLR and LSTM, this study underscores the importance of informed model selection rather than methodological replacement.
In conclusion, effective temperature forecasting for water resource management depends not only on model sophistication but also on forecast stability, uncertainty characteristics, interpretability, and computational feasibility. The results therefore support context-dependent model selection and provide a transparent basis for applied hydroclimatic decision-making. Future research may integrate physically interpretable diagnostics with prediction-oriented frameworks to further bridge the gap between forecast skill and process understanding.

Author Contributions

All authors substantially contributed to conceiving and designing the research and realizing this manuscript. Conceptualization and research design, data analysis, C.-G.K.; methodology and validation of results, J.L.; formal analysis and data curation, J.-E.L.; funding acquisition and supervision, H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Korea Environment Industry & Technology Institute (KEITI) through Water Management Program for Drought Project, funded by Korea Ministry of Climate, Energy and Environment (MCEE) (2022003610002).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ministry of Environment, Han River Flood Control Office (MOE HRFCO). Statistical and Analytical Report of the 2023 River Basin Survey; Han River Flood Control Office: Seoul, Republic of Korea, 2024. (In Korean) [Google Scholar]
  2. Jeong, J.-H.; Ho, C.-H. Changes in occurrence of cold surges over East Asia in association with Arctic Oscillation. Geophys. Res. Lett. 2005, 32, L14704. [Google Scholar] [CrossRef]
  3. Li, F.; Wang, H.; Gao, Y. On the strengthened relationship between the East Asian winter monsoon and Arctic Oscillation: A comparison of 1950–70 and 1983–2012. J. Clim. 2014, 27, 5075–5091. [Google Scholar] [CrossRef]
  4. Dong, X. Influences of the Pacific Decadal Oscillation on the East Asian summer monsoon in non-ENSO years. Atmos. Sci. Lett. 2016, 17, 115–120. [Google Scholar] [CrossRef]
  5. He, S.; Gao, Y.; Li, F.; Wang, H.; He, Y. Impact of Arctic Oscillation on the East Asian climate: A review. Earth-Sci. Rev. 2017, 164, 48–62. [Google Scholar] [CrossRef]
  6. Chen, W.; Feng, J.; Wu, R. Roles of ENSO and PDO in the link of the East Asian winter monsoon to the following summer monsoon. J. Clim. 2013, 26, 622–635. [Google Scholar] [CrossRef]
  7. Molteni, F.; Stockdale, T.N.; Vitart, F. Understanding and modelling extra-tropical teleconnections with the Indo-Pacific region during the northern winter. Clim. Dyn. 2015, 45, 3119–3140. [Google Scholar] [CrossRef]
  8. Johnson, S.J.; Stockdale, T.N.; Ferranti, L.; Balmaseda, M.A.; Molteni, F.; Magnusson, L.; Tietsche, S.; Decremer, D.; Weisheimer, A.; Balsamo, G.; et al. SEAS5: The new ECMWF seasonal forecast system. Geosci. Model Dev. 2019, 12, 1087–1117. [Google Scholar] [CrossRef]
  9. Wei, W.W.S. Time Series Analysis: Univariate and Multivariate Methods, 2nd ed.; Pearson Addison Wesley: Boston, MA, USA, 2006. [Google Scholar]
  10. Hyndman, R.J.; Athanasopoulos, G. Forecasting: Principles and Practice, 2nd ed.; OTexts: Melbourne, Australia, 2018; Available online: https://otexts.com/fpp2/ (accessed on 3 November 2025).
  11. Zhang, G.P. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003, 50, 159–175. [Google Scholar] [CrossRef]
  12. Khashei, M.; Bijari, M. A novel hybridization of artificial neural networks and ARIMA models for time series forecasting. Appl. Soft Comput. 2011, 11, 2664–2675. [Google Scholar] [CrossRef]
  13. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; Woo, W.-C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 7–12 December 2015; pp. 802–810. [Google Scholar]
  14. Kratzert, F.; Klotz, D.; Brenner, C.; Schulz, K.; Herrnegger, M. Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks. Hydrol. Earth Syst. Sci. 2018, 22, 6005–6022. [Google Scholar] [CrossRef]
  15. Mu, B.; Qin, B.; Yuan, S. ENSO-ASC 1.0.0: ENSO deep learning forecast model with a multivariate air-sea coupler. Geosci. Model Dev. 2021, 14, 6977–6999. [Google Scholar] [CrossRef]
  16. Ibebuchi, C.C.; Richman, M.B. Deep learning with autoencoders and LSTM for ENSO forecasting. Clim. Dyn. 2024, 62, 5683–5697. [Google Scholar] [CrossRef]
  17. Waqas, M.; Humphries, U.W.; Hlaing, P.T.; Ahmad, S. Seasonal WaveNet-LSTM: A deep learning framework for precipitation forecasting with integrated large scale climate drivers. Water 2024, 16, 3194. [Google Scholar] [CrossRef]
  18. Halpert, M.S.; Ropelewski, C.F. Surface temperature patterns associated with the Southern Oscillation. J. Clim. 1992, 5, 577–593. [Google Scholar] [CrossRef]
  19. Thompson, D.W.J.; Wallace, J.M. The Arctic Oscillation signature in the wintertime geopotential height and temperature fields. Geophys. Res. Lett. 1998, 25, 1297–1300. [Google Scholar] [CrossRef]
  20. Wakabayashi, S.; Kawamura, R. Extraction of major teleconnection patterns possibly associated with the anomalous summer climate in Japan. J. Meteorol. Soc. Jpn. 2004, 82, 1577–1588. [Google Scholar] [CrossRef]
  21. Katz, S.L.; Hampton, S.E.; Izmest’eva, L.R.; Moore, M.V. Influence of long-distance climate teleconnection on seasonality of water temperature in the world’s largest lake—Lake Baikal, Siberia. PLoS ONE 2011, 6, e14688. [Google Scholar] [CrossRef]
  22. Lim, Y.-K.; Kim, H.-D. Impact of the dominant large-scale teleconnections on winter temperature variability over East Asia. J. Geophys. Res. Atmos. 2013, 118, 7835–7848. [Google Scholar] [CrossRef]
  23. Park, H.-J.; Ahn, J.-B. Combined effect of the Arctic Oscillation and the Western Pacific pattern on East Asia winter temperature. Clim. Dyn. 2016, 46, 3205–3221. [Google Scholar] [CrossRef]
  24. Han, B.-R.; Lim, Y.; Kim, H.-J.; Son, S.-W. Development and evaluation of statistical prediction model of monthly-mean winter surface air temperature in Korea. Atmosphere 2018, 28, 153–162. (In Korean) [Google Scholar]
  25. Lee, J.H.; Julien, P.Y.; Maloney, E.D. The variability of South Korean temperature associated with climate indicators. Theor. Appl. Climatol. 2019, 138, 469–489. [Google Scholar] [CrossRef]
  26. Jung, E.; Jeong, J.-H.; Woo, S.-H.; Kim, B.-M.; Yoon, J.-H.; Lim, G.-H. Impacts of the Arctic-midlatitude teleconnection on wintertime seasonal climate forecasts. Environ. Res. Lett. 2020, 15, 094019. [Google Scholar] [CrossRef]
  27. Kim, C.-G.; Lee, J.; Lee, J.E.; Kim, N.W.; Kim, H. Monthly precipitation forecasting in the Han River Basin, South Korea, using large-scale teleconnections and multiple regression models. Water 2020, 12, 1590. [Google Scholar] [CrossRef]
  28. Kim, C.-G.; Lee, J.; Lee, J.E.; Kim, N.W.; Kim, H. Monthly temperature forecasting using large-scale climate teleconnections and multiple regression models. J. Korea Water Resour. Assoc. 2021, 54, 731–745. (In Korean) [Google Scholar]
  29. Lee, Y.; Cho, D.; Im, J.; Yoo, C.; Lee, J.; Ham, Y.-G.; Lee, M.-I. Unveiling teleconnection drivers for heatwave prediction in South Korea using explainable artificial intelligence. npj Clim. Atmos. Sci. 2024, 7, 51. [Google Scholar] [CrossRef]
  30. IPCC. Climate Change 2021—The Physical Science Basis: Working Group I Contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge Univ. Press: Cambridge, UK; New York, NY, USA, 2021; 2391p. [Google Scholar] [CrossRef]
  31. Korea Meteorological Administration (KMA). Annual Report on Climate Change Monitoring 2020; KMA: Seoul, Republic of Korea, 2021; ISSN 2799-4937. (In Korean) [Google Scholar]
  32. Korea Meteorological Administration (KMA). Annual Report on Climate Characteristics 2024; KMA: Seoul, Republic of Korea, 2025; ISSN 2765-3714. (In Korean) [Google Scholar]
  33. Trenberth, K.E.; Dai, A.; van der Schrier, G.; Jones, P.D.; Barichivich, J.; Briffa, K.R.; Sheffield, J. Global warming and changes in drought. Nat. Clim. Change 2014, 4, 17–22. [Google Scholar] [CrossRef]
  34. Rübbelke, D.; Vögele, S. Short-term distributional consequences of climate change impacts on the power sector: Who gains and who loses? Clim. Change 2013, 116, 191–206. [Google Scholar] [CrossRef]
  35. Wheeler, T.; von Braun, J. Climate change impacts on global food security. Science 2013, 341, 508–513. [Google Scholar] [CrossRef]
  36. Shumway, R.H.; Stoffer, D.S. Time Series Analysis and Its Applications: With R Examples, 4th ed.; Springer: Cham, Switzerland, 2017. [Google Scholar]
  37. Navid, M.A.I.; Niloy, N.H. Multiple linear regressions for predicting rainfall for Bangladesh. Communications 2018, 6, 1–4. [Google Scholar] [CrossRef]
  38. Jo, S.; Ahn, J.-B. Statistical forecast of early spring precipitation over South Korea using multiple linear regression. Clim. Res. 2017, 12, 53–71. (In Korean) [Google Scholar] [CrossRef]
  39. Kim, J.-Y.; Seo, K.-H.; Son, J.-H.; Ha, K.-J. Development of statistical prediction models for Changma precipitation: An ensemble approach. Asia-Pac. J. Atmos. Sci. 2017, 53, 207–216. [Google Scholar] [CrossRef]
  40. Mekanik, F.; Imteaz, M.A.; Gato-Trinidad, S.; Elmahdi, A. Multiple regression and artificial neural network for long-term rainfall forecasting using large scale climate modes. J. Hydrol. 2013, 503, 11–21. [Google Scholar] [CrossRef]
  41. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  42. Chollet, F.; Allaire, J.J. Deep Learning with R; Manning Publications: Shelter Island, NY, USA, 2017. [Google Scholar]
  43. Brownlee, J. Deep Learning for Time Series Forecasting: Predict the Future with MLPs, CNNs and LSTMs in Python; Machine Learning Mastery: San Francisco, CA, USA, 2018. [Google Scholar]
  44. Greff, K.; Srivastava, R.K.; Koutnik, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2222–2232. [Google Scholar] [CrossRef]
  45. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  46. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  47. Zaremba, W.; Sutskever, I.; Vinyals, O. Recurrent Neural Network Regularization. arXiv 2014, arXiv:1409.2329. [Google Scholar]
  48. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  49. Bishop, C. Pattern Recognition and Machine Learning; Springer: Cambridge, UK, 2006. [Google Scholar]
  50. Chiew, F.H.S.; McMahon, T.A. Global ENSO–streamflow teleconnection, streamflow forecasting and interannual variability. Hydrol. Sci. J. 2002, 47, 505–522. [Google Scholar] [CrossRef]
  51. Schepen, A.; Wang, Q.J.; Robertson, D. Evidence for using lagged climate indices to forecast Australian seasonal rainfall. J. Clim. 2012, 25, 1230–1246. [Google Scholar] [CrossRef]
  52. Wang, B.; Wu, R.; Fu, X. Pacific–East Asian teleconnection: How does ENSO affect East Asian climate? J. Clim. 2000, 13, 1517–1536. [Google Scholar] [CrossRef]
  53. Liu, Z.; Alexander, M. Atmospheric bridge, oceanic tunnel, and global climatic teleconnections. Rev. Geophys. 2007, 45, RG2005. [Google Scholar] [CrossRef]
  54. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models. Part 1: A discussion of principles. J. Hydrol. 1970, 10, 282–290. [Google Scholar] [CrossRef]
  55. Gal, Y.; Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY, USA, 19–24 June 2016; PMLR 48. pp. 1050–1059. [Google Scholar]
  56. Hipel, K.W.; McLeod, A.I. Time Series Modelling of Water Resources and Environmental Systems; Elsevier: Amsterdam, The Netherlands, 1994. [Google Scholar]
  57. Moriasi, D.N.; Arnold, J.G.; Van Liew, M.W.; Bingner, R.L.; Harmel, R.D.; Veith, T.L. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. Trans. ASABE 2007, 50, 885–900. [Google Scholar] [CrossRef]
  58. Wilks, D.S. Statistical Methods in the Atmospheric Sciences, 3rd ed.; Academic Press: San Diego, CA, USA, 2011. [Google Scholar]
  59. Goddard, L.; Kumar, A.; Solomon, A.; Smith, D.; Boer, G.; Gonzalez, P.; Kharin, V.; Merryfield, W.; Deser, C.; Mason, S.J.; et al. A verification framework for interannual-to-decadal predictions experiments. Clim. Dyn. 2013, 40, 245–272. [Google Scholar] [CrossRef]
  60. Pasini, A. Artificial neural networks for small dataset analysis. Information 2015, 6, 208–225. [Google Scholar] [CrossRef]
  61. Schulz, M.-A.; Yeo, B.T.T.; Vogelstein, J.T.; Mourao-Miranada, J.; Kather, J.N.; Kording, K.; Richards, B.; Bzdok, D. Different scaling of linear models and deep learning in UK Biobank brain images versus established machine-learning benchmarks. Nat. Commun. 2020, 11, 4238. [Google Scholar] [CrossRef]
Figure 1. Study area (Han River Basin).
Figure 2. Lagged correlation analysis results between historical monthly mean temperature of the same month and climate indices for predicting monthly mean temperature in (a) January 2020 and (b) July 2020.
Figure 3. Top ten climate indices exhibiting the strongest correlations with monthly temperature for each month. Blue and red colors denote negative and positive correlations, respectively, while the shaded areas represent the range of correlation coefficients associated with each climate index.
Figure 4. Climate indices used for predicting monthly temperatures from January to December 2020. Blue and red colors denote negative and positive correlations, respectively, while the shaded areas represent the range of correlation coefficients associated with each climate index.
Figure 5. Monthly temperature predictions for January to December 2010 generated using (a) the LSTM-based model and (b) the MLR-based model.
Figure 6. Monthly temperature predictions for January to December 2020 generated using (a) the LSTM-based model and (b) the MLR-based model.
Figure 7. Monthly mean temperature predictions using the LSTM-based model for the periods (a) 1991–1999, (b) 2000–2008, (c) 2009–2017, and (d) 2018–2023.
Figure 8. Monthly mean temperature predictions using the MLR-based model for the periods (a) 1991–1999, (b) 2000–2008, (c) 2009–2017, and (d) 2018–2023.
Figure 9. Comparison of model performance by lead time.
Figure 10. Comparison of tercile forecast probabilities by model and lead time. The red line denotes the baseline value of 33.3%.
Table 1. ASOS stations used in this study [27].

| ID | Station Name | Latitude (°N) | Longitude (°E) | Elevation (m a.s.l.) |
|-----|---------------|---------------|----------------|----------------------|
| 90 | Sokcho | 38.25 | 128.56 | 18.06 |
| 93 | Bukchuncheon | 37.95 | 127.75 | 95.61 |
| 95 | Cheorwon | 38.15 | 127.30 | 155.48 |
| 98 | Dongducheon | 37.90 | 127.06 | 115.62 |
| 99 | Paju | 37.89 | 126.77 | 30.59 |
| 100 | Daegwallyeong | 37.68 | 128.72 | 772.57 |
| 101 | Chuncheon | 37.90 | 127.74 | 76.47 |
| 104 | Bukgangneung | 37.80 | 128.86 | 78.90 |
| 105 | Gangneung | 37.75 | 128.89 | 26.04 |
| 106 | Donghae | 37.51 | 129.12 | 39.91 |
| 108 | Seoul | 37.57 | 126.97 | 85.67 |
| 112 | Incheon | 37.48 | 126.62 | 68.99 |
| 114 | Wonju | 37.34 | 127.95 | 148.60 |
| 116 | Gwanaksan | 37.44 | 126.96 | 626.76 |
| 119 | Suwon | 37.27 | 126.99 | 34.84 |
| 121 | Yeongwol | 37.18 | 128.46 | 240.60 |
| 127 | Chungju | 36.97 | 127.95 | 116.30 |
| 130 | Uljin | 36.99 | 129.41 | 50.00 |
| 131 | Cheongju | 36.64 | 127.44 | 58.70 |
| 201 | Ganghwa | 37.71 | 126.45 | 47.84 |
| 202 | Yangpyeong | 37.49 | 127.49 | 47.26 |
| 203 | Icheon | 37.26 | 127.48 | 80.09 |
| 211 | Inje | 38.06 | 128.17 | 200.16 |
| 212 | Hongcheon | 37.68 | 127.88 | 139.95 |
| 214 | Samcheok | 37.37 | 129.22 | 3.90 |
| 216 | Taebaek | 37.17 | 128.99 | 712.82 |
| 217 | Jeongseongun | 37.38 | 128.65 | 307.40 |
| 221 | Jecheon | 37.16 | 128.19 | 259.80 |
| 226 | Boeun | 36.49 | 127.73 | 174.99 |
| 232 | Cheonan | 36.76 | 127.29 | 81.50 |
| 272 | Yeongju | 36.87 | 128.52 | 210.79 |
Table 2. Predictors used in this study [27,28].

| Category | Predictor | Description | Provider |
|----------|-----------|-------------|----------|
| Global climate index | AAO | Antarctic oscillation | NOAA |
| | AMM | Atlantic meridional mode | NOAA |
| | AMO | Atlantic multidecadal oscillation | NOAA |
| | AMO5 | ERSST AMO (North Atlantic 0–60 N SSTA) | NOAA |
| | AO | Arctic oscillation | NOAA |
| | BEST | Bivariate ENSO timeseries | NOAA |
| | CPOLR | Monthly central Pacific outgoing long wave radiation index (170 E–140 W, 5 S–5 N) | NOAA |
| | EA | East Atlantic pattern | NOAA |
| | EAWR | East Atlantic/Western Russia pattern | NOAA |
| | EPNP | East Pacific/North Pacific oscillation | NOAA |
| | GML | Global mean land-ocean temperature index | NOAA |
| | MEI.v2 | Multivariate ENSO index version 2 | NOAA |
| | NAO | North Atlantic Oscillation | NOAA |
| | NINO1+2 | Extreme eastern tropical Pacific SST (0–10 S, 90 W–80 W) | NOAA |
| | NINO3 | Eastern tropical Pacific SST (5 N–5 S, 150 W–90 W) | NOAA |
| | NINO3.4 | East central tropical Pacific SST (5 N–5 S, 170–120 W) | NOAA |
| | NINO4 | Central tropical Pacific SST (5 N–5 S, 160 E–150 W) | NOAA |
| | NOI | Northern Oscillation Index | NOAA |
| | NP | North Pacific pattern | NOAA |
| | ONI | Oceanic Niño Index | NOAA |
| | PNA | Pacific American Index | NOAA |
| | POL | Polar/Eurasia pattern | NOAA |
| | QBO | Quasi-biennial oscillation | NOAA |
| | SCAND | Scandinavia pattern | NOAA |
| | SLP_DAR | Darwin sea level pressure | NOAA |
| | SLP_EEP | Equatorial eastern Pacific sea level pressure | NOAA |
| | SLP_IND | Indonesia sea level pressure | NOAA |
| | SLP_TAH | Tahiti sea level pressure | NOAA |
| | SOI | Southern Oscillation Index | NOAA |
| | SOI_EQ | Equatorial SOI | NOAA |
| | SOLAR | Solar flux (10.7 cm) | NOAA |
| | TNA | Tropical Northern Atlantic Index | NOAA |
| | TNI | Trans-Niño Index | NOAA |
| | TPI | Tripole index for the interdecadal Pacific oscillation | NOAA |
| | TSA | Tropical Southern Atlantic Index | NOAA |
| | WHWP | Western Hemisphere warm pool | NOAA |
| | WP | Western Pacific Index | NOAA |
| Local climate index | PCP | Monthly precipitation | KMA |
| | TMP | Monthly average temperature | KMA |
| | HMD | Monthly average relative humidity | KMA |
| | AvgSLP | Monthly average sea level pressure | KMA |
| | DLhr | Monthly sum of daylight hours | KMA |
| | WND | Monthly average wind speed | KMA |
| | CLOUD | Monthly average cloud cover | KMA |
| | SmallEV | Monthly sum of small pan evaporation | KMA |
