Article

A Next-Day Dew Intensity Prediction Model Based on the Improved Hippopotamus Optimization

1 Key Laboratory of Songliao Aquatic Environment, Ministry of Education, Jilin Jianzhu University, No. 5088 Xincheng Road, Changchun 130118, China
2 School of Electrical Engineering and Computer, Jilin Jianzhu University, Changchun 130118, China
3 International Energy College, Jinan University Zhuhai Campus, Zhuhai 519070, China
4 Scientific Research Office, Jilin Business and Technology College, Changchun 130507, China
* Author to whom correspondence should be addressed.
Sustainability 2026, 18(3), 1445; https://doi.org/10.3390/su18031445
Submission received: 15 November 2025 / Revised: 27 January 2026 / Accepted: 29 January 2026 / Published: 1 February 2026

Abstract

Accurate dew intensity prediction is vital in multiple fields, such as agriculture, meteorology, industry, and transportation. This study addresses the cross-disciplinary demands for dew intensity prediction by proposing a hybrid deep learning model based on the improved hippopotamus optimization (IHO). Key influencing factors were selected through multidimensional meteorological data correlation analysis, and a fusion architecture of a Bidirectional Temporal Convolutional Network (BiTCN) and a Support Vector Machine (SVM) was constructed. The IHO algorithm is adopted to optimize model parameters and enhance prediction accuracy adaptively. Experiments were conducted using ten years of meteorological data to verify the prediction of twelve-hour dew intensity in three typical ecosystems in Northeast China: farmland, marsh wetland, and urban areas. The results show that the optimized IHO-BiTCN-SVM model achieved significant improvements in key indicators, including MAE, MAPE, MSE, RMSE, and R2. For the farmland ecosystem, MAE was reduced by 72.2% (0.0016572 vs. 0.0059659), MSE decreased from 6.8552 × 10−5 to 6.7874 × 10−6, and R2 increased by 12.5% (0.98791 vs. 0.87793). The IHO algorithm reduced the MAE of the farmland system by 39.6%, the MAPE by 41.6%, and the MSE by 60.2%, while the R2 increased by 1.8% compared with the benchmark model. This model effectively overcomes the subjectivity of traditional methods through an intelligent parameter optimization mechanism, providing reliable technical support for precise agricultural irrigation decisions, urban dew formation warnings, and wetland ecological protection.

1. Introduction

Dew plays a crucial role in plant growth as an important non-precipitation water resource. Dew intensity characterizes the amount of water condensation per unit area over a given time period. Additionally, dew has a dual effect on plant physiological regulation and soil hydrothermal balance, maintaining nocturnal water balance by reducing evapotranspiration [1]. It facilitates nutrient cycling by transporting trace elements. Meteorologists and agricultural experts commonly use data such as dew intensity to monitor and assess meteorological conditions, thereby enhancing their understanding of water distribution and environmental variations. The dew point temperature, which refers to the temperature at which water vapor in the air condenses into liquid water, is essential for estimating the evaporation rates of water from plants and soil, as well as the dew intensity in a given region. In recent years, research on dew point temperature prediction has increased. For instance, M. Zounemat-Kermani [2] employed Artificial Neural Networks (ANNs) and Multiple Linear Regression (MLR) to predict dew point temperature, while C. Ilie and colleagues [3] also achieved commendable results using ANNs for dew point temperature prediction. Prediction models are generally categorized into physical models, statistical models, and deep learning models [4]. Physical models are based on physical principles and equations, requiring extensive data and complex calculations, often making it difficult to obtain reliable analytical solutions. Many statistical models are based on linear assumptions, whereas real-world relationships are often nonlinear. These models are inefficient with large datasets, prone to overfitting, and have inherent limitations. To overcome the shortcomings of the aforementioned models, an increasing number of machine learning or deep learning models are being employed for prediction and research.
Machine learning can handle deep, nonlinear relationships more effectively than statistical models [5].
Scientists are developing more complex, precise, and larger-scale deep learning models, thanks to advancements in computer hardware. Deep learning, a subset of machine learning, allows distributed computation across multiple layers. Deep learning models uncover intrinsic structures in large datasets through backpropagation, with deep convolutional networks making significant impacts in image, text, and video processing, and recurrent networks achieving breakthroughs in speech and data sequence analysis. Currently, popular deep learning models include Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) [6]. With further technological advancements, models such as BiTCN and BiLSTM have emerged. Y. Zhang and colleagues [7] applied Bidirectional Temporal Convolutional Networks to protein secondary structure prediction, achieving superior results compared to other models. T. Sun and colleagues [8] combined Convolutional Neural Networks (CNN) with Bidirectional Spatiotemporal Networks to predict traffic flow, also achieving high accuracy. However, machine learning and deep learning models contain numerous parameters, and to eliminate the influence of subjective factors on model training, many scholars employ swarm intelligence algorithms for parameter tuning, replacing traditional manual tuning. Swarm intelligence algorithms are computational methods inspired by collective behavior in nature, simulating and utilizing the principles of swarm intelligence to solve complex problems. Standard swarm intelligence algorithms include DBO [9], GWO [10], WOA [11], and HHO [12]. These algorithms have demonstrated excellent performance in solving problems across various fields, particularly in complex combinatorial optimization problems and multi-dimensional search spaces, leveraging individual cooperation and competition to achieve optimal solutions, characterized by fast convergence and simplicity. H. Xie and colleagues [13] optimized CNN-LSTM using an improved Gray Wolf Algorithm, yielding positive results. R. Mahadeva [14] optimized ANN models for predicting the permeate flux of reverse osmosis desalination plants using an improved Whale Optimization Algorithm, showcasing superior performance across multiple datasets.
This study focuses on the interdisciplinary topic of predicting dew intensity and conducts innovative research based on the limitations of existing models. The traditional prediction system has three bottlenecks: the physical model is constrained by the difficulty of solving complex partial differential equations, the statistical model struggles to capture the nonlinear correlations of meteorological factors, and conventional machine learning methods face issues such as high parameter sensitivity and poor convergence stability. To overcome these technical barriers, this study has developed a hybrid architecture that combines a Bidirectional Temporal Convolutional Network (BiTCN) and a Support Vector Machine (SVM), and introduces the improved hippopotamus optimization algorithm (IHO) to achieve intelligent parameter optimization. The innovation of this model is reflected in three dimensions: Firstly, the dilated causal convolution of BiTCN captures long-range meteorological temporal features, and combined with the generalization advantage of SVM in high-dimensional space, a dual analytical mechanism of spatiotemporal features is formed; Secondly, in response to the premature convergence defect of the standard hippopotamus optimization algorithm, a dynamic adaptive search strategy is proposed to balance global exploration and local exploitation capabilities; Finally, a multi-ecosystem verification mechanism is established, with comparative experimental sites constructed on differentiated underlying surfaces (farmland, marsh wetland, urban area) in Northeast China. This paper will predict the dew intensity of a region on the next day over a specific period of time. Apart from the selection of the prediction model, the data preprocessing method, and the parameter adjustment method, it is also worth noting that dew formation and temperature dynamics vary across regions.
The HO [15] algorithm is a new meta-heuristic algorithm inspired by the inherent behavioral patterns of hippos, including their position updates in rivers or ponds, defense strategies against predators, and escape methods. A prediction model optimized by the HO algorithm can effectively improve the accuracy and efficiency of dew intensity prediction, and improving the algorithm itself can further enhance its optimization performance. Therefore, this paper proposes an IHO-BiTCN-SVM hybrid prediction model to predict dew intensity.
The theoretical value of this study is reflected in three aspects: (1) through the analysis of feature importance, it reveals the dominant role of dew point temperature and wind speed in determining the intensity of dew (with a weight coefficient of 0.82); (2) it constructs a nonlinear mapping model between meteorological factors and dew intensity, breaking through the accuracy limit of traditional linear regression with R2 ≤ 0.65; and (3) it proposes an IHO framework with generalization ability, achieving a 60.2% reduction in MSE in the agricultural ecosystem scenario. In terms of engineering application, the 12 h prediction sequence output by the model can provide decision support for precise irrigation, and the spatial interpolation results can also be used for the construction of urban condensation warning systems. This study presents a novel methodological framework for monitoring atmospheric water condensates, with significant implications for advancing smart agriculture and ecological hydrological research.

2. Methods

2.1. Bidirectional Temporal Convolutional Network (BiTCN)

The BiTCN is a time series prediction model that uses two Temporal Convolutional Networks (TCNs) to encode both past and future data, preserving temporal information and enhancing computational efficiency [16]. The architecture includes residual blocks consisting of convolution layers, ReLU activations, dropout, and fully connected layers. This design reduces space complexity and requires fewer parameters compared to RNN-based models, making it more efficient for time series prediction tasks (Figure 1).
As shown in the above figure, each residual block generates an output o, and the final prediction is obtained by summing all the outputs of each block across N layers. The BiTCN has lower space complexity, requiring fewer parameters compared with other methods based on RNN [17]. It is more efficient than methods based on Transformer [18] in time series prediction. At the same time, through two-directional convolutional networks, BiTCN can simultaneously consider both past and future information of the time series, thereby improving prediction accuracy. For example, in wind power generation prediction, BiTCN can consider both past wind speeds and potential future weather changes simultaneously to enhance the accuracy of the prediction.
The core of BiTCN lies in the two-directional convolutional networks, which can be represented as the forward network and the backward network. The forward network is the network that encodes future covariates, and the backward network is the network that encodes past observations and covariates. The final prediction result is obtained by superimposing the outputs of the two networks. This model performs well in multivariate time series prediction. It utilizes two time convolutional networks to encode the past and future values of covariates, thereby achieving effective prediction.
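The two-directional idea can be illustrated with a minimal numpy sketch (not the paper's implementation: the kernel weights, the single block, and the omission of dropout and residual stacking are simplifications). A causal dilated convolution runs over the series, a second one runs over the reversed series so that it "sees" future values, and the outputs are summed:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        acc = 0.0
        for j in range(k):
            idx = t - j * dilation
            if idx >= 0:
                acc += w[j] * x[idx]
        y[t] = acc
    return y

def bitcn_block(x, w_fwd, w_bwd, dilation=1):
    """Bidirectional block: a forward (past-looking) pass plus a backward
    (future-looking) pass on the reversed series, summed and passed
    through ReLU. Dropout and residual connections are omitted here."""
    fwd = causal_dilated_conv(x, w_fwd, dilation)
    bwd = causal_dilated_conv(x[::-1], w_bwd, dilation)[::-1]
    return np.maximum(fwd + bwd, 0.0)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
out = bitcn_block(x, w_fwd=[0.5, 0.5], w_bwd=[0.5, 0.5], dilation=1)
```

With these toy weights, each output mixes the local past (forward pass) with the local future (backward pass), which is exactly the property exploited for time series such as wind speed or dew intensity.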

2.2. Support Vector Machine (SVM)

The SVM is a robust classification algorithm. Its core idea is to find an optimal hyperplane in the feature space to distinguish data points of different classes [19]. This hyperplane is called the maximum margin hyperplane because it maximizes the distance (margin) between the two classes. The goal of SVM is to minimize the square of the norm of the normal vector w of the hyperplane for the case of linearly separable data:
$$\min_{w,b}\ \frac{1}{2}\|w\|^2$$
At the same time, the following constraints must be met:
$$y_i\,(w \cdot x_i + b) \ge 1, \quad \forall i$$
Here, $w$ is the normal vector of the hyperplane, $b$ is the bias term, $x_i$ is the $i$-th data point, and $y_i$ is the corresponding class label (usually +1 or −1). In practical applications, data is often not linearly separable. The SVM therefore introduces the concept of a soft margin, allowing some data points to violate the margin constraint at a penalty cost. This is achieved by introducing slack variables $\xi_i$, and the optimization objective becomes
$$\min_{w,b,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i$$
subject to $y_i\,(w \cdot x_i + b) \ge 1 - \xi_i$, $\xi_i \ge 0$, $\forall i$.
Here, $C$ is a regularization parameter that controls the trade-off between the margin width and the misclassification penalty.
The SVM is an excellent classification algorithm that enhances the robustness of classification by maximizing the class margin. In the model combined with BiTCN, SVM can effectively utilize the features extracted by BiTCN for accurate classification predictions. This combination leverages the advantages of BiTCN in sequence data processing and the powerful classification ability of SVM, making it suitable for complex time series classification tasks.
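The soft-margin objective above can be minimized in many ways; the following is a self-contained subgradient-descent sketch of the linear case (an illustration of the formulation, not the SVM solver used in the paper, and the toy data are invented):

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimize (1/2)||w||^2 + C * sum(hinge losses) by subgradient
    descent. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                                  # margin violators
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Linearly separable toy data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

The regularization term shrinks $w$ every step, while violated margins push it outward; the equilibrium is the maximum-margin separator described above.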

2.3. BiTCN-SVM

This paper combines BiTCN and SVM to form a hybrid prediction model, and uses the improved hippopotamus algorithm to optimize the parameters. For the BiTCN-SVM part, the model primarily consists of an input layer, a BiTCN layer, and an SVM model. The specific model structure is shown in the Figure 2.

2.4. IHO-BiTCN-SVM

This paper proposes a hybrid model to predict the dew intensity in the next 12 h in Northeast China. The BiTCN and SVM components of the model have been explained previously, and a more detailed explanation of the IHO is provided in Section 3. The following are the steps for prediction using the hybrid model.
Step 1: Collect dew intensity data for three ecosystems in Northeast China, including dew intensity, temperature, relative humidity, wind speed, dew point temperature, visibility, etc. Integrate the raw data and divide it into a training set (70%) and a remaining set (30%) for validation and testing.
Step 2: Preprocess the data collected in step 1, including data cleaning and feature selection. Remove outliers and fill in missing values using an interpolation-based mean approach, where each missing value is replaced by the average of the observations from the preceding day and the following day, and select features with high correlation to reduce the dimension of the hybrid model.
Step 3: Adjust the parameters of the hybrid model using dew intensity data and IHO. The parameters include learning rate, number of filters, dropout rate, regularization parameter, etc. In the BiTCN module, the baseline setting consists of two residual blocks with a convolution kernel size of 3 and 50 filters in each convolution layer; the dilation factors are set to 1 and 2 for the first and second blocks, respectively, and the initial dropout rate is 0.1. IHO is used to tune the learning rate, number of filters, dropout rate, and regularization parameter.
Step 4: Train the BiTCN-SVM model on the training set using the parameter configuration selected in Step 3.
Step 5: Use the trained model to predict the input data. In the proposed pipeline, BiTCN serves as the feature extractor. We take the output of the last residual addition as the latent feature vector and pass it to an SVM regressor for the final estimate of dew intensity.
Step 6: Add the predicted values to obtain the final predicted dew intensity value.
Figure 3 below shows the workflow of the entire model.
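The gap-filling rule in Step 2 (a missing value replaced by the mean of the observations from the preceding and following day) can be sketched as follows; the number of samples per day (`period`) and the toy series are illustrative assumptions, not the actual sampling layout:

```python
def fill_missing(series, period):
    """Replace None with the mean of the value one period earlier and one
    period later (i.e. the same time on the adjacent days); fall back to
    whichever neighbour exists when the other is out of range."""
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            prev = series[i - period] if i - period >= 0 else None
            nxt = series[i + period] if i + period < len(series) else None
            neighbours = [x for x in (prev, nxt) if x is not None]
            out[i] = sum(neighbours) / len(neighbours) if neighbours else None
    return out

# Toy series with 4 samples per "day"; the sample at index 6 is missing.
hourly = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0, None, 17.0,
          18.0, 19.0, 20.0, 21.0]
filled = fill_missing(hourly, period=4)
```

Here the missing value is replaced by the mean of 12.0 (same slot, previous day) and 20.0 (same slot, next day), i.e. 16.0.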

3. Improved Hippopotamus Optimization Algorithm

3.1. Hippopotamus Optimization Algorithm

Three significant behavioral characteristics of hippos inspire the design of the HO algorithm. In the wild, female hippos often appear in groups with their young [20]. A hippo group consists of several female hippos, their young, several adult males, and a dominant male. In the algorithm, hippos represent candidate solutions, and their movements in the search space correspond to updates of the decision variables. Like conventional optimization algorithms, the HO algorithm begins with randomly generated initial solutions. During the initialization stage, the vector of decision variables is generated by the following formula:
$$X_i : x_{ij} = lb_j + r \cdot (ub_j - lb_j), \quad i = 1, 2, \dots, N, \ j = 1, 2, \dots, m$$
The decision variable vector $X_i$ of the $i$th hippopotamus is initialized randomly within the range $[lb_j, ub_j]$, where $lb_j$ and $ub_j$ are the lower and upper limits of the $j$th decision variable, $r$ is a uniform random number in $[0, 1]$, $N$ is the population size, and $m$ is the number of decision variables.
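In code, the initialization formula corresponds to the following sketch (the bounds shown reuse the hyperparameter ranges reported later in Section 4.4 purely for illustration):

```python
import numpy as np

def init_population(N, lb, ub, seed=0):
    """x_ij = lb_j + r * (ub_j - lb_j), with r drawn uniformly from [0, 1)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    r = rng.random((N, lb.size))
    return lb + r * (ub - lb)

# N = 40 hippos, m = 3 decision variables
# (e.g. learning rate, dropout rate, number of filters)
pop = init_population(N=40, lb=[1e-4, 0.0, 1.0], ub=[1e-2, 0.5, 50.0])
```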
(1)
First stage (Exploration stage): Update of the position of hippos in rivers or ponds
In the exploration phase, the positions of the hippos are updated by moving closer to the dominant individual based on the function values. The dominant individual is determined by the best fitness in the current generation, and the non-dominant hippos adjust their positions towards the dominant one to explore the search space.
(2)
Second stage (Exploration stage): Hippopotamuses defend against predators
In this phase, hippos adjust their positions based on predator threats, improving the global search by moving towards or away from danger. The movement mimics defensive behaviors where hippos will either move towards the predator to drive it away or retreat to avoid danger.
(3)
Third stage (Exploitation stage): The hippopotamus escapes from the predator.
Hippos move towards safe zones to simulate escape behavior, improving local search capabilities. This phase allows the algorithm to enhance its ability to find better solutions by focusing on areas that are farther away from potential local minima.

3.2. Improvement Strategies for the Hippopotamus Optimization Algorithm

3.2.1. The Ghost Strategy

The HO algorithm may become trapped in local optimal solutions during the search process, especially in complex optimization problems, where the algorithm’s diversity and exploration capabilities may be insufficient. Introducing the ghost strategy [21] can enhance the algorithm’s global search capability and its ability to escape local optima. By generating ghost positions, the algorithm can explore regions of the solution space that have not been adequately searched, thereby increasing the diversity of the population. The calculation formula for the ghost position is as follows:
$$x_i' = X_{new} - X_i + X_G$$
Among them, $X_{new}$ is the position of the new candidate solution, $X_i$ is the position of the current candidate solution, and $X_G$ is the position of the optimal solution. The iteration-dependent scaling parameter $k$ is then calculated:
$$k = \left(1 + \left(\frac{t}{T}\right)^{0.5}\right)^{10}$$
where $t$ is the current iteration and $T$ is the maximum number of iterations.
Next, the opposing individual $\tilde{X}_i$ is calculated:
$$\tilde{X}_i = \frac{ub + lb}{2} + \frac{ub + lb}{2k} - \frac{X_i}{k}$$
The fitness values of the current individual $X_i$, the ghost position $x_i'$, and the opposing individual $\tilde{X}_i$ are then compared, and the position with the best fitness is retained.
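Taken together, one reading of the ghost-and-opposition step looks like the sketch below (the sign convention of the ghost formula, the clipping to bounds, and the sphere function used as a stand-in fitness are all assumptions for illustration):

```python
import numpy as np

def ghost_opposition_update(x_i, x_new, x_best, lb, ub, t, T, fitness):
    """Greedy selection among the current individual, its ghost position,
    and its lens-imaging opposite position (minimization)."""
    ghost = x_new - x_i + x_best                  # ghost position
    k = (1.0 + (t / T) ** 0.5) ** 10              # iteration-dependent scale
    mid = (ub + lb) / 2.0
    opposite = mid + mid / k - x_i / k            # opposing individual
    candidates = [np.clip(c, lb, ub) for c in (x_i, ghost, opposite)]
    return min(candidates, key=fitness)           # keep the best of the three

lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
sphere = lambda p: float((p ** 2).sum())          # stand-in fitness
x = ghost_opposition_update(
    x_i=np.array([3.0, -4.0]), x_new=np.array([1.0, 1.0]),
    x_best=np.array([0.5, 0.5]), lb=lb, ub=ub, t=10, T=100, fitness=sphere)
```

Because the update is greedy, it can never worsen the individual, which is what makes the strategy safe to apply every generation.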

3.2.2. Generalized Quadratic Interpolation Strategy

The HO algorithm may lack accuracy during local search, particularly in the vicinity of the optimal solution, and may struggle to approach the optimal solution effectively. The Generalized Quadratic Interpolation (GQI) strategy [22] is a technique used in intelligent optimization algorithms that predicts a potentially optimal solution position through mathematical interpolation. The GQI strategy is introduced to improve the local search accuracy of the algorithm: by utilizing the information of three known points, GQI can more accurately predict the position of a new point, thereby accelerating the convergence of the algorithm. The basic formula of generalized quadratic interpolation is as follows:
$$f(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)^2 + a_3(x - x_1)(x - x_2)$$
where $a_0$, $a_1$, $a_2$, and $a_3$ are the coefficients to be determined.
In the improved algorithm, the interpolated candidate position $L$ is computed as follows:
$$L = x_i - 0.5 \times \frac{(x_i - x_j)^2\,(f_i - f_k) - (x_i - x_k)^2\,(f_i - f_j)}{(x_i - x_j)\,(f_i - f_k) - (x_i - x_k)\,(f_i - f_j)}$$
where $x_i$, $x_j$, and $x_k$ are the positions of the three points after fitness sorting, and $f_i$, $f_j$, and $f_k$ are the corresponding fitness values.
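A quick sanity check of the interpolation formula: for an exactly quadratic fitness it returns the true minimizer. The sketch below (one-dimensional case with illustrative sample points) verifies this:

```python
def gqi_point(xs, fs):
    """Vertex of the parabola through (x_i, f_i), (x_j, f_j), (x_k, f_k):
    the quadratic-interpolation estimate of the minimum position."""
    (xi, xj, xk), (fi, fj, fk) = xs, fs
    num = (xi - xj) ** 2 * (fi - fk) - (xi - xk) ** 2 * (fi - fj)
    den = (xi - xj) * (fi - fk) - (xi - xk) * (fi - fj)
    return xi - 0.5 * num / den

# f(x) = (x - 2)^2 has its minimum at x = 2; three samples recover it exactly.
f = lambda x: (x - 2.0) ** 2
L = gqi_point(xs=(1.0, 3.0, 0.0), fs=(f(1.0), f(3.0), f(0.0)))
```

For non-quadratic fitness landscapes the returned point is only an estimate, which is why the algorithm accepts it through the same greedy comparison as the other strategies.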

3.2.3. Random Crossover and Sequential Mutation Strategies

Random crossover is a crossover operation in algorithms. It randomly selects two or more individuals from the population and then combines their genes in a certain way to generate new offspring. This method can increase the genetic diversity of the population, helping the algorithm escape from local optimal solutions and explore a broader search space. Sequential mutation is a mutation operation that generates new offspring by changing the order of genes in an individual. This strategy is particularly suitable for problems where the order of genes is important, such as permutation optimization problems. Sequential mutation helps maintain beneficial gene combinations while introducing new variations to prevent the algorithm from prematurely converging. This paper optimizes the algorithm using these two methods. Here is the formula for random crossover:
$$h_i^{t+1} = h_{r_1} + \left(h_{r_3} - h_{r_2}\right), \quad r_1 \neq r_2 \neq r_3$$
Here, $h_{r_1}$, $h_{r_2}$, and $h_{r_3}$ are the positions of three randomly selected individuals.
During iteration t, we first sort the population by fitness and keep a candidate pool of the better solutions. Three different indices, r 1 , r 2 , and r 3 , are then drawn from this pool (excluding the current individual i) and used in Equation (10) to form a crossover trial. The trial is projected back to the search bounds if necessary, and we keep it only when it improves the fitness.
The sequential mutation formula is $h_i^{t+1} = \frac{h_i^t + h_{i-1}^t}{2}$, where $h_i^t$ is the $i$th individual at iteration $t$. Consistent with the crossover procedure, the mutation trial solution is also updated using a greedy selection rule, i.e., the current individual is replaced only when the mutated candidate achieves an improved fitness value.
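Under the reconstruction above, the two operators with their greedy acceptance might be sketched as follows (the differential-style combination, the "better half" candidate pool, and the neighbour-averaging are assumptions, since the exact operator forms are not fully specified in the extracted text):

```python
import numpy as np

def crossover_trial(pop, i, fitness, lb, ub, rng):
    """Random crossover: combine three distinct individuals drawn from the
    fitter part of the population; accept the trial only on improvement."""
    order = np.argsort([fitness(p) for p in pop])
    pool = [j for j in order[: max(4, len(pop) // 2)] if j != i]
    r1, r2, r3 = rng.choice(pool, size=3, replace=False)
    trial = np.clip(pop[r1] + (pop[r3] - pop[r2]), lb, ub)
    return trial if fitness(trial) < fitness(pop[i]) else pop[i]

def mutation_trial(pop, i, fitness, lb, ub):
    """Sequential mutation: average the individual with its predecessor;
    accept only on improvement (greedy selection)."""
    trial = np.clip((pop[i] + pop[i - 1]) / 2.0, lb, ub)
    return trial if fitness(trial) < fitness(pop[i]) else pop[i]

rng = np.random.default_rng(1)
sphere = lambda p: float((p ** 2).sum())
lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
pop = [rng.uniform(-5.0, 5.0, size=2) for _ in range(6)]
after_cx = crossover_trial(pop, i=0, fitness=sphere, lb=lb, ub=ub, rng=rng)
after_mu = mutation_trial(pop, i=2, fitness=sphere, lb=lb, ub=ub)
```

The greedy acceptance in both operators guarantees monotone non-worsening of each individual, which is the property the text relies on to prevent premature convergence without losing good solutions.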
The IHO algorithm not only retains the advantages of the HO algorithm but also enhances its search efficiency and optimization capabilities, especially when dealing with complex optimization problems through these improvements. The introduction of these strategies enables the IHO algorithm to achieve a better balance between global and local search, thereby having broader applicability in solving various types of optimization problems.

3.3. Algorithm Evaluation

In this section, multiple swarm intelligence algorithms will be compared on various test functions from CEC 2022. The test functions include unimodal functions, multimodal functions, and composite multimodal functions, which are listed in Table 1. To compare the performance of the algorithms, this paper uses the Hippopotamus Optimization (HO) algorithm, the Harris Hawks Optimization Algorithm (HHO), the Ant Lion Optimization Algorithm (ALO), the Whale Optimization Algorithm (WOA), and the Gray Wolf Optimization Algorithm (GWO) as comparisons for IHO. Each algorithm is run for 300 iterations and averaged over 30 independent runs, with all population sizes set to 40. Table 2 shows the optimal values, average values, and standard deviations of each algorithm after the iterations. Figure 4 shows the convergence curves of each algorithm.
As shown in Figure 4, IHO exhibits a faster convergence speed and higher solution accuracy than the other swarm intelligence algorithms. It converges quickly in the early stage of the search and maintains better solutions in the later stage, requiring fewer iterations overall. This is due to the algorithmic improvements presented in this paper. From Table 2, it can be seen that our algorithm is more robust: it maintains high stability on unimodal, multimodal, and composite functions, and outperforms the other swarm intelligence algorithms. In conclusion, the improvement of the hippopotamus algorithm in this paper is effective. The following describes how IHO is used in deep learning models. In machine learning prediction models, several parameters need to be set, including the learning rate, the number of neurons, the number of iterations, and the regularization rate, among others. Properly setting these parameters can lead to more accurate predictions and improved computational efficiency. Manual parameter tuning, however, can be heavily influenced by the practitioner's experience. To address this issue, this paper employs the improved hippopotamus optimization (IHO) algorithm to automate the tuning of parameters for machine learning prediction models. The actual data from Northeast China is divided into a training set (70%) and a remaining set (30%) used for validation and testing. After each iteration, the parameters stored in each individual are used to configure the machine learning prediction model, which is then trained using the training data. Once the specified number of iterations is reached, the model training is completed.
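The tuning workflow just described can be sketched as a generic outer loop. In this illustration the IHO position updates are abstracted into a single move-toward-best step, and the validation-error objective is a stand-in stub (the hypothetical optimum at lr = 1e-3, dropout = 0.1, 30 filters is invented), not the actual BiTCN-SVM training:

```python
import numpy as np

def tune_hyperparams(evaluate, lb, ub, pop_size=20, iters=15, seed=0):
    """Population-based hyperparameter search: each individual encodes a
    candidate setting (e.g. [learning rate, dropout, n_filters]);
    `evaluate` trains/validates a model and returns its validation error."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    pop = lb + rng.random((pop_size, lb.size)) * (ub - lb)
    best = min(pop, key=evaluate).copy()
    for _ in range(iters):
        for i in range(pop_size):
            # abstracted position update: random step toward the best
            step = rng.random(lb.size) * (best - pop[i])
            trial = np.clip(pop[i] + step, lb, ub)
            if evaluate(trial) < evaluate(pop[i]):
                pop[i] = trial
        best = min(pop, key=evaluate).copy()
    return best

# Stand-in objective pretending the validation error is minimised at
# hypothetical values lr = 1e-3, dropout = 0.1, filters = 30.
target = np.array([1e-3, 0.1, 30.0])
val_error = lambda p: float((((p - target) / target) ** 2).sum())
best = tune_hyperparams(val_error, lb=[1e-4, 0.0, 1.0], ub=[1e-2, 0.5, 50.0])
```

In the paper's pipeline, `evaluate` would train the BiTCN-SVM model on the training split with the encoded parameters and return its validation error; the rest of the loop is unchanged.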

4. Case Study

4.1. Data Sources and Processing

The data for this study were collected between 2005 and 2022 from three ecosystems in Northeast China: the marsh wetlands of Fujin City (47.2511° N, 132.0363° E), the urban area of Changchun City (43.8265° N, 125.3138° E), and the corn fields of Lishu County (43.3081° N, 124.3374° E). The main climate type in Northeast China is the temperate monsoon climate, while some areas belong to the temperate continental climate. Due to the high latitude, the region experiences cold, dry winters and warm, humid summers, and the influence of the monsoon is evident in the seasonal changes. During the summer, humidity in Northeast China is high and nighttime temperatures are low, which is conducive to the formation of dew; in winter, due to low temperatures and dry air, it is difficult for dew to form, and frost may occur instead. Studying the formation of dew is of great significance for local agriculture. Dew usually begins to condense 30 min before sunrise and evaporates within about 4 h after sunrise, providing valuable moisture for agriculture, especially during drought seasons. This study aims to construct a model for predicting dew intensity by analyzing these ecosystems. The meteorological data include temperature, humidity, wind speed, dew point, visibility, cloud cover, and dew intensity, collected every two days. 70% of the data is used for model training, and the remaining 30% is used for validation and testing.

4.2. Data Correlation Analysis

The purpose of this model is to predict the dew intensity for the next 12 h. Therefore, we collected and organized the relevant meteorological data measured in the marsh wetland ecosystem of the Fujin area. Since the condensation period of dew is from 6 p.m. to 5 a.m. the next day, we will use meteorological data from 8 a.m. to 5 a.m. to predict the dew intensity for the next day. Initially, the data used for predicting the next-day dew intensity comprised five features: temperature (°C), RH (%), wind speed (m/s), visibility (km), and Td (°C). However, if a specific meteorological variable has a weak influence on the dew intensity, it may affect the accuracy of the prediction results. At the same time, selecting certain meteorological features and excluding others helps avoid collecting redundant data and makes the results more meaningful in practical applications. Therefore, it is necessary to conduct a correlation analysis of the features in the data set, so that features with higher correlation can be selected to improve the model's accuracy. Correlation analysis is a statistical method used to measure the relationship between two variables, specifically whether they tend to change together. The relevant measure here is the Pearson correlation coefficient, which is suitable for measuring the linear correlation between two continuous variables. The formula for calculating the Pearson correlation coefficient is as follows:
$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$$
Here, $x_i$ and $y_i$ are the individual data points of the two variables, $\bar{x}$ and $\bar{y}$ are their average values, and $r$ is the Pearson correlation coefficient, whose value ranges from −1 to +1. Through the correlation analysis, we can obtain the correlations between several features and the next-day dew intensity. As can be seen from Figure 5, Figure 6 and Figure 7, temperature (°C), RH (%), and Td (°C) show a positive correlation, while the correlation coefficients of wind speed (m/s) and visibility (km) are negative and small in magnitude, indicating a weak linear association with the next-day dew intensity. It should be noted that the Pearson correlation coefficient mainly reflects linear dependence. However, the influence of wind speed on condensation can be conditional and potentially nonlinear, so its role may not be fully captured by a single linear coefficient. Given the goal of feature screening in this study (i.e., reducing redundancy and improving model robustness), we further assessed wind speed from the perspective of effective information content. During the dew-formation period, part of the wind-speed effect can be explained by, or overlaps with, moisture-related variables such as temperature, relative humidity, and dew-point temperature, which leads to a limited independent contribution and marginal incremental information from wind speed in our dataset. In addition, an inspection of the raw records shows that visibility is frequently reported as 20 km under high-visibility conditions, indicating a clear ceiling (saturation) effect; this restricts its variability and thus limits its informative value in correlation-based screening. For these reasons, wind speed and visibility were not retained as final inputs, and temperature, relative humidity, and dew-point temperature were used as the predictors for next-day dew intensity.
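For feature screening, the Pearson coefficient can be computed directly; the series below are illustrative stand-ins, not measured values from the Fujin site:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

temperature = [14.2, 15.1, 13.8, 16.0, 15.5]      # degrees C (toy values)
dew_intensity = [0.11, 0.14, 0.10, 0.16, 0.15]    # toy next-day values
r = pearson_r(temperature, dew_intensity)
```

In practice, this calculation would be repeated for each candidate feature against the next-day dew intensity, keeping only those with coefficients large in magnitude.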

4.3. Evaluating Indicator

This paper employs four indicators to assess the performance of the model, namely the mean absolute percentage error (MAPE), the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R2). Their formulas are as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$

$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \times 100\%$$

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$
In these formulas, y_i is the actual value, ŷ_i is the predicted value, and n is the number of samples. These four indicators are used for evaluation in the comparison results (Section 4.5). Smaller values of MAPE, RMSE, and MAE indicate a better model; likewise, the closer R² (which lies between 0 and 1) is to 1, the better the model.
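To make the definitions concrete, the four indicators can be computed directly from paired actual and predicted series. This is an illustrative Python sketch with hypothetical toy values (the study itself used MATLAB):

```python
import math

def evaluate(y_true, y_pred):
    """Return (MAE, MAPE in %, RMSE, R^2) per the definitions above."""
    n = len(y_true)
    errs = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 / n * sum(abs(e / yt) for e, yt in zip(errs, y_true))
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    ybar = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                  # residual sum of squares
    ss_tot = sum((yt - ybar) ** 2 for yt in y_true)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, mape, rmse, r2

# Hypothetical toy values for illustration only.
mae, mape, rmse, r2 = evaluate([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```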

4.4. Model Comparison

In this experiment, three models were used to predict future dew intensity in three regions: BiTCN-SVM, HO-BiTCN-SVM, and IHO-BiTCN-SVM. The comparison demonstrates the superiority of the proposed hybrid model, confirming both that swarm-intelligence optimization is effective for tuning machine learning prediction models and that the proposed hybrid of a swarm intelligence algorithm and machine learning performs well. To ensure a fair and comparable evaluation, the three ecosystems were modeled in a "train-and-test separately" manner: each ecosystem was trained and tested using only its own data, without pooling data across ecosystems. For reproducibility, all steps involving randomness (e.g., population initialization, random sampling, and perturbations) can be controlled by fixing the random seed in MATLAB R2023b, so that the pipeline is repeatable under the same data split and settings. In addition, the test set is not used for parameter selection during the hyperparameter search, and the final results are reported only on the fixed test period of each ecosystem. The experiments were run in MATLAB R2023b (MathWorks, Natick, MA, USA) on a computer with an Intel Core i5-7300HQ CPU at 2.50 GHz, 16 GB of memory, and a 64-bit operating system. For the prediction model with manually set parameters, this paper uses 150 training epochs, a learning rate of 1 × 10−2, and 25 neurons. For the deep model optimized by the swarm intelligence algorithm, the population size is set to 20, the number of iterations to 15, and the parameter tuning ranges are as follows: learning rate [1 × 10−4, 1 × 10−2], regularization parameter [1 × 10−4, 1 × 10−3], dropout rate [0, 0.5], and number of filters [1, 50].
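As a reproducibility illustration, the bounded search space and seeded random sampling described above can be sketched as follows. This is a Python analogue of the MATLAB setup; the parameter names and the uniform sampling scheme are illustrative, not the paper's exact initialization procedure:

```python
import random

def sample_candidate(rng):
    """Draw one hyperparameter candidate uniformly from the stated bounds."""
    return {
        "learning_rate": rng.uniform(1e-4, 1e-2),  # [1e-4, 1e-2]
        "l2_reg": rng.uniform(1e-4, 1e-3),         # regularization [1e-4, 1e-3]
        "dropout": rng.uniform(0.0, 0.5),          # dropout rate [0, 0.5]
        "n_filters": rng.randint(1, 50),           # integer filter count [1, 50]
    }

rng = random.Random(42)                             # fixed seed => repeatable run
pop = [sample_candidate(rng) for _ in range(20)]    # population size 20
```

With the seed fixed, repeating the script reproduces the same initial population, mirroring the fixed-seed policy used for the MATLAB pipeline.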
Table 3 summarizes the MAE, MAPE, RMSE, and R2 of the three models on the test set of each ecosystem, and reports 95% bootstrap confidence intervals and standard deviations to reflect the uncertainty of the estimates. In addition, a paired Wilcoxon signed-rank test is applied to the day-wise error series from the same test sequence to examine whether the differences between models are statistically significant.
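The bootstrap confidence intervals reported in Table 3 can be obtained with a standard percentile bootstrap over the test-day error series. The sketch below is a minimal Python illustration with hypothetical error values; the study's own analysis was carried out in MATLAB:

```python
import random

def bootstrap_ci(errors, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean absolute error of a test series."""
    rng = random.Random(seed)
    n = len(errors)
    stats = []
    for _ in range(n_boot):
        # Resample the error series with replacement and recompute MAE.
        resample = [abs(errors[rng.randrange(n)]) for _ in range(n)]
        stats.append(sum(resample) / n)
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical day-wise prediction errors for illustration only.
errs = [0.002, -0.001, 0.003, -0.002, 0.001, 0.004, -0.003, 0.002]
lo, hi = bootstrap_ci(errs)
```

The same resampling applies to MAPE, RMSE, and R² by swapping in the corresponding statistic.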

4.4.1. Ecosystem One: Fujin

Fujin City is located in the eastern part of Heilongjiang Province, China. It is a county-level city under the jurisdiction of Jiamusi City. This area experiences a temperate continental monsoon climate with four distinct seasons. The annual average temperature is around 3.6 °C, and the average annual precipitation is 510 mm, mostly occurring in the summer, which provides sufficient water for agriculture. Fujin City has a low-lying terrain, with an average altitude of about 60 m, close to the Songhua River, and has abundant water resources, providing an important water source for agricultural irrigation. The city has a large area of wetlands, which release water at night, increasing air humidity and facilitating the formation of dew. The characteristics of marsh wetlands include open water bodies, wetland vegetation, and moist soil, which help reduce heat loss on the soil surface, maintain the surface temperature, increase the chance of dew formation, and prolong the duration of dew presence. Figure 8 and Figure 9 show the experimental results of this article on the marsh wetland ecosystem.

4.4.2. Ecosystem Two: Changchun

Changchun is more urbanized than Fujin and, owing to its lower latitude, experiences milder temperatures. The environmental differences between city and countryside have a significant impact on dew formation. In the city, the heat island effect is a key factor: it keeps the urban area warmer than the surrounding rural areas, especially at night. This temperature difference reduces the chance of dew formation in the city, because dew formation requires the surface temperature to drop below the dew-point temperature. However, urban buildings and greenery can, to some extent, block the flow of cold air, which helps maintain surface temperatures and somewhat increases the chance of dew formation. Figure 10 and Figure 11 show the experimental results of the urban ecosystem in this article.

4.4.3. Ecosystem Three: Lishu

Lishu County is located in Siping City, Jilin Province, to the southwest of Changchun, and has climatic characteristics similar to Changchun's. Because Lishu lies in the central plain of Jilin Province, with relatively flat terrain and, as a county-level administrative unit, less urbanization influence, its geographical conditions are closer to a natural state. Under these circumstances, the farmlands of Lishu County have become important sites for collecting meteorological data. For the actual planting environment, dew-intensity prediction is not only more realistic but also of significant practical value for agriculture, providing a scientific basis for crop growth. Figure 12 and Figure 13 show the experimental results of the agricultural ecosystem in this paper.

4.5. Interpretation of Results

This paper collected meteorological data from three ecosystems in Northeast China: the city (Changchun), the marshland (Fujin), and the farmland (Lishu). Due to their different geographical characteristics, the meteorological features of these ecosystems also show significant variation. The city experiences the lowest relative humidity, while the marshland has the highest. In urban areas, the closely packed buildings lead to poor air circulation, impacting the distribution of temperature and dew point temperatures. Conversely, the marshland benefits from ample moisture, resulting in high humidity that influences temperature measurements. The farmland lies between these two extremes, with its temperature, relative humidity, and dew point temperatures affected by vegetation cover and land use methods.
The absolute error levels differ across ecosystems, largely because the three test series have different degrees of stability and variability. In the marshland (Fujin), moisture conditions during the dew-formation window are more consistent, so the mapping from the inputs to dew intensity is clearer. In the urban ecosystem (Changchun), heterogeneous surfaces and local heat effects can introduce sharper day-to-day changes, making the task harder. The farmland (Lishu) is intermediate, with moderate variability influenced by surface-state changes. As a result, the same model family can exhibit different error magnitudes across ecosystems.
The study conducted comparative experiments, confirming that the proposed IHO-BiTCN-SVM model performs well in prediction tasks. Figure 9, Figure 11 and Figure 13 demonstrate that the IHO-BiTCN-SVM model fits the actual curve of next-day dew intensity most closely and achieves effective error evaluation. In the urban ecosystem, it reduces MAPE by 76.06%, MSE by 92.82%, and MAE by 77.38% compared with the original prediction model, while increasing R² by 13.23%; the predicted values closely track the actual values. The results in Lishu County show superior performance across all error indicators compared with the original model, with R² reaching 0.987 despite complex field data. In the marshland of Fujin City, R² exceeds 0.94; there, the IHO-BiTCN-SVM model decreases MAPE by 63.80%, MSE by 67.25%, and MAE by 61.90%, while increasing R² by 3.77%.
Beyond the visual comparisons in Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, Table 3 provides point estimates of the error metrics for the three models across the three ecosystems, together with 95% confidence intervals and standard deviations. Note that IHO is a stochastic meta-heuristic, and different random initializations may lead to slightly different tuned hyperparameters; however, these differences do not significantly affect the overall model performance. In our implementation, the best fitness is tracked at each iteration, and the convergence curve is used to check whether the search stabilizes under the fixed iteration budget. Overall, IHO-BiTCN-SVM achieves the lowest MAE, MAPE, and RMSE and the highest R2 in all three ecosystems, and the numerical results are consistent with the visual trends.
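The per-iteration tracking of the best fitness mentioned above can be illustrated with a toy stochastic search. This sketch only mirrors how a best-so-far convergence curve is logged under a fixed iteration budget; it does not implement the actual IHO update rules, and the sphere objective is a placeholder:

```python
import random

def sphere(x):
    """Toy objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def toy_search(n_iter=15, pop_size=20, dim=3, seed=1):
    """Record the best fitness found so far at the end of each iteration."""
    rng = random.Random(seed)
    curve = []
    best = float("inf")
    for _ in range(n_iter):
        for _ in range(pop_size):
            cand = [rng.uniform(-100, 100) for _ in range(dim)]
            best = min(best, sphere(cand))
        curve.append(best)  # best-so-far value => curve is non-increasing
    return curve

curve = toy_search()
```

Plotting `curve` against the iteration index gives the convergence curve used to check whether the search stabilizes before the iteration budget is exhausted.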
A comparison of optimization methods shows that the original model yields the poorest results, with large error values and a low R². Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 illustrate the scattered points of the original prediction model, indicating poor prediction accuracy; the use of fixed parameters limits the model's performance. The swarm intelligence algorithm iteratively searches for optimal parameters to better train the prediction model, and applying the original HO algorithm for parameter optimization significantly enhances prediction quality. For example, in Changchun City, the HO-BiTCN-SVM model reduces MAPE by 33.06%, MSE by 66.07%, and MAE by 38.47%, while increasing R² by 9.28% compared with the BiTCN-SVM model. This paper further refines the hippopotamus optimization algorithm for parameter tuning of the prediction model. The scatter plots in Figure 9, Figure 11 and Figure 13 show that, after tuning via the swarm intelligence algorithm, the data fit the target curve more closely. The bar charts indicate that automatic parameter tuning substantially reduces prediction errors, and the IHO algorithm consistently outperforms the HO algorithm across all ecosystems, improving both the accuracy and the stability of the model.
The gain of IHO can be understood as a better balance between fit and robustness. The learning rate and regularization mainly affect training stability, while the number of filters and dropout jointly control the capacity and noise tolerance of the BiTCN feature extractor. With IHO-tuned settings, the latent features become more consistent over time, and the downstream SVM regression is more stable on the test segment, which aligns with the lower errors and higher R2 reported in Table 3.
Given the chronological nature of the test data, all metrics were computed on the same test period within each ecosystem, and the original temporal order was preserved. Based on the time-ordered test error series, we then carried out paired significance tests: compared with BiTCN-SVM and HO-BiTCN-SVM, the improvements achieved by IHO-BiTCN-SVM are statistically significant across all three ecosystems. This suggests that the observed gains are not driven by a few occasional fluctuations but reflect a more consistent reduction in prediction errors over time.
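The paired Wilcoxon signed-rank test on day-wise absolute errors can be sketched as follows. This is a minimal Python implementation using the normal approximation (zero differences dropped, average ranks for ties); it is an illustration of the test's mechanics, and a production analysis would typically call a library routine such as `scipy.stats.wilcoxon`:

```python
import math

def wilcoxon_signed_rank_z(a, b):
    """Paired Wilcoxon signed-rank test: returns (W+, z) for series a vs. b."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # Extend over a group of tied absolute differences.
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return w_plus, (w_plus - mu) / sigma
```

Here `a` and `b` would be the day-wise absolute errors of two models on the same test sequence; a large |z| indicates a systematic difference rather than occasional fluctuations.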
In conclusion, the analysis demonstrates that the IHO-BiTCN-SVM hybrid prediction model offers superior performance, effectively predicting dew intensity across various ecosystems.

5. Conclusions

Dew is vital for agriculture: it supplies essential water to plants, helps mitigate drought effects, and forms a protective layer on leaves that reduces evaporative water loss and shields plants from high temperatures. In Northeast China, a key grain-producing region, accurately predicting dew intensity is crucial for guiding farmers in irrigation management, conserving water resources, and reducing costs. Such predictions also help agricultural managers respond efficiently to meteorological risks, reducing potential crop damage from extreme weather events such as frost. This paper introduces an IHO-BiTCN-SVM hybrid model designed to predict next-day dew intensity in Northeast China using historical meteorological data. The study compares historical data across three ecosystems (wetlands, urban areas, and agricultural land) along with the prediction outcomes of the BiTCN-SVM and HO-BiTCN-SVM models. The findings are as follows:
1. The hybrid model presented in this study outperforms both the original model and HO-BiTCN-SVM in effectiveness, accuracy, and stability.
2. Using swarm intelligence algorithms to optimize model parameters significantly boosts predictive accuracy. The IHO algorithm simplifies hyperparameter adjustment and enables automatic settings, saving time while improving overall model performance. The enhancements to the hippopotamus optimization algorithm further improve convergence speed and accuracy, yielding a more robust algorithm and better predictions.
For future research, one direction could involve refining the machine learning-based dew prediction models to incorporate real-time monitoring of meteorological indicators such as temperature, humidity, wind speed, and pressure. This could enhance the model's adaptability and accuracy in dynamic agricultural environments.
Another avenue of exploration could involve investigating dew evaporation timing and categorizing dew intensity more precisely, integrating local environmental conditions to provide more accurate warnings regarding factors that may affect agricultural production based on the predicted results.

Author Contributions

Conceptualization, Y.X., Z.L. and K.W.; Software, Z.L. and Y.C.; Validation, Z.L. and Y.C.; Investigation, Y.C.; Resources, Y.C.; Writing—original draft, Y.X.; Writing—review & editing, K.W.; Visualization, K.W.; Supervision, K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jilin Provincial Key Research and Development Fund, grant number 20250203082SF.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Structure of the BITCN residual block.
Figure 2. Structure of the BITCN-SVM Model.
Figure 3. Workflow of the IHO-BITCN-SVM Model.
Figure 4. Convergence curves of all algorithms on the test function.
Figure 5. Correlation coefficient chart of wetland characteristics in Fujin.
Figure 6. Correlation coefficient chart of farmland characteristics in Lishu.
Figure 7. Correlation coefficient chart of urban characteristics in Changchun.
Figure 8. Results of HO-BITCN-SVM on Fujin Dataset.
Figure 9. Results of IHO-BITCN-SVM on Fujin Dataset.
Figure 10. Results of HO-BITCN-SVM on Changchun Dataset.
Figure 11. Results of IHO-BITCN-SVM on Changchun Dataset.
Figure 12. Results of HO-BITCN-SVM on Lishu Dataset.
Figure 13. Results of IHO-BITCN-SVM on Lishu Dataset.
Table 1. CEC2022 test function.

| Function | Dim | Range | Opt |
|---|---|---|---|
| $f_1(x)=\sum_{i=1}^{D} x_i^2$ | 20 | [−100, 100] | 0 |
| $f_2(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{i} x_j\right)^2$ | 20 | [−100, 100] | 0 |
| $f_4(x)=\max_{i=1,\dots,D}\lvert x_i\rvert$ | 20 | [−100, 100] | 0 |
| $f_6(x)=\sum_{i=1}^{D} a^{\frac{i-1}{D-1}} x_i^2$ | 20 | [−100, 100] | 0 |
| $f_9(x)=\sum_{i=1}^{D-1}\left(100\,(x_{i+1}-x_i^2)^2+(x_i-1)^2\right)$ | 20 | [−2, 2] | 0 |
| $f_{10}(x)=\sum_{i=1}^{D} i\,x_i^4$ | 20 | [−1.5, 3] | 0 |
| $f_{11}(x)=1+\sum_{i=1}^{D}\frac{x_i^2}{4000}-\prod_{i=1}^{D}\cos\!\left(\frac{x_i}{\sqrt{i}}\right)$ | 20 | [−600, 600] | 0 |
| $f_{12}(x)=-20\exp\!\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right)-\exp\!\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$ | 20 | [−32, 32] | 0 |
Table 2. Algorithm performance comparison.

| Function | Metric | IHO | HO | DBO | GWO | WOA | HHO |
|---|---|---|---|---|---|---|---|
| F1 | Opt | 300.0003 | 14,674.1813 | 17,065.0359 | 5704.7938 | 14,009.4628 | 10,379.2519 |
|  | Avg | 0.015895 | 6658.2186 | 15,002.9954 | 5794.294 | 13,531.736 | 9559.7657 |
|  | STD | 300.0117 | 25,065.3648 | 39,511.1938 | 17,363.7032 | 38,241.5772 | 29,328.5674 |
| F2 | Opt | 423.1608 | 474.1927 | 430.0393 | 450.043 | 438.2264 | 455.3944 |
|  | Avg | 12.6693 | 53.4249 | 62.8642 | 31.7129 | 36.9393 | 64.7055 |
|  | STD | 451.9874 | 545.9341 | 493.386 | 497.92 | 493.9931 | 577.7704 |
| F4 | Opt | 840.7932 | 839.1858 | 849.7495 | 831.4505 | 862.682 | 849.952 |
|  | Avg | 16.3966 | 16.8229 | 35.8453 | 31.8864 | 30.4548 | 16.881 |
|  | STD | 866.7284 | 878.8585 | 911.0093 | 862.1692 | 904.1752 | 887.3759 |
| F6 | Opt | 1897.4768 | 4391.5691 | 2104.0054 | 7837.5115 | 2202.1276 | 75,175.782 |
|  | Avg | 3748.2863 | 6943.0727 | 1,971,037.8591 | 6,704,493.8724 | 1,341,328.8689 | 230,614.1647 |
|  | STD | 5581.2987 | 13,450.9802 | 909,276.8586 | 2,443,106.9994 | 763,141.0697 | 384,357.7429 |
| F9 | Opt | 2480.7813 | 2506.7075 | 2480.7826 | 2482.3317 | 2480.9125 | 2495.448 |
|  | Avg | 5.4728 × 10−5 | 38.1253 | 43.0554 | 30.7833 | 29.9273 | 52.2929 |
|  | STD | 2480.7813 | 2571.0009 | 2516.9304 | 2523.6164 | 2512.3306 | 2579.3012 |
| F10 | Opt | 2500.7923 | 2500.9014 | 2500.815 | 2500.7251 | 2500.7589 | 2501.042 |
|  | Avg | 833.8923 | 1058.3411 | 1154.0982 | 1084.5293 | 1107.6847 | 796.9543 |
|  | STD | 3310.8554 | 3817.0979 | 3884.1084 | 3651.0394 | 3399.4675 | 4506.4753 |
| F11 | Opt | 2600 | 2766.9009 | 2615.7954 | 3009.5342 | 2900.4264 | 3256.9768 |
|  | Avg | 457.4246 | 166.7022 | 128.3479 | 553.2393 | 122.9593 | 941.0172 |
|  | STD | 2992.842 | 3163.0632 | 2948.4574 | 3691.3074 | 2958.9795 | 4449.6523 |
| F12 | Opt | 2900.0046 | 2992.8917 | 2943.3352 | 2948.6414 | 2958.6462 | 3037.0401 |
|  | Avg | 6.8779 × 10−5 | 106.627 | 43.2588 | 23.8777 | 63.8079 | 136.3294 |
|  | STD | 2900.0048 | 3135.5688 | 3007.0698 | 2979.6848 | 3027.9224 | 3238.525 |
Table 3. Performance comparison of BiTCN-SVM, HO-BiTCN-SVM, and IHO-BiTCN-SVM across three ecosystems. Each cell reports the value [95% CI] and SD; the last two columns give the paired Wilcoxon p-values on absolute errors (AE) for IHO-BiTCN-SVM versus the listed baseline.

| Ecosystem | Model | MAE (value [95% CI], SD) | MAPE (value [95% CI], SD) | RMSE (value [95% CI], SD) | R2 (value [95% CI], SD) | Wilcoxon p on AE (vs. BiTCN-SVM) | Wilcoxon p on AE (vs. HO-BiTCN-SVM) |
|---|---|---|---|---|---|---|---|
| Changchun | BiTCN-SVM | 0.00310626 [0.00292228, 0.00329708], SD = 9.72521 × 10−5 | 0.231914 [0.178210, 0.309033], SD = 0.0333948 | 0.00397627 [0.00372016, 0.00423760], SD = 0.000131729 | 0.857449 [0.837587, 0.875172], SD = 0.00963125 | 2.946 × 10−86 | 2.739 × 10−58 |
|  | HO-BiTCN-SVM | 0.00181392 [0.00170281, 0.00193058], SD = 5.63180 × 10−5 | 0.111918 [0.0943879, 0.132707], SD = 0.0101421 | 0.00229992 [0.00215875, 0.00244324], SD = 7.27716 × 10−5 | 0.952308 [0.945050, 0.959295], SD = 0.00362227 |  |  |
|  | IHO-BiTCN-SVM | 0.000702987 [0.000641149, 0.000770259], SD = 3.29489 × 10−5 | 0.0555203 [0.0353530, 0.0874853], SD = 0.0143989 | 0.00106578 [0.000907896, 0.00122972], SD = 8.08498 × 10−5 | 0.989759 [0.986561, 0.992451], SD = 0.00151393 |  |  |
| Fujin | BiTCN-SVM | 0.0118194 [0.0111033, 0.0126221], SD = 0.000380553 | 0.326593 [0.229847, 0.450317], SD = 0.0544735 | 0.0157189 [0.0143568, 0.0173723], SD = 0.00074893 | 0.943971 [0.931789, 0.953989], SD = 0.00568404 | 8.600 × 10−79 | 1.867 × 10−49 |
|  | HO-BiTCN-SVM | 0.00839582 [0.00778722, 0.00920516], SD = 0.000368336 | 0.195194 [0.138888, 0.267887], SD = 0.0330112 | 0.0125456 [0.0101879, 0.0159614], SD = 0.00160137 | 0.964310 [0.941703, 0.976373], SD = 0.00945269 |  |  |
|  | IHO-BiTCN-SVM | 0.00450361 [0.00398613, 0.00515785], SD = 0.000305752 | 0.118189 [0.0813906, 0.163492], SD = 0.0212075 | 0.00899532 [0.00567112, 0.0130497], SD = 0.00223411 | 0.981652 [0.959988, 0.992533], SD = 0.00955343 |  |  |
| Lishu | BiTCN-SVM | 0.00596586 [0.00548837, 0.00642831], SD = 0.000238125 | 0.165820 [0.138093, 0.197990], SD = 0.0151266 | 0.00827962 [0.00755203, 0.00898639], SD = 0.000366443 | 0.877930 [0.852607, 0.898738], SD = 0.0117938 | 7.928 × 10−74 | 2.017 × 10−24 |
|  | HO-BiTCN-SVM | 0.00270059 [0.00244973, 0.00296882], SD = 0.000127660 | 0.0829571 [0.0657636, 0.103773], SD = 0.00981467 | 0.00415707 [0.00345495, 0.00496513], SD = 0.000391919 | 0.969228 [0.955681, 0.978498], SD = 0.00598233 |  |  |
|  | IHO-BiTCN-SVM | 0.00165724 [0.00151084, 0.00180867], SD = 7.74804 × 10−5 | 0.0494104 [0.0387786, 0.0620156], SD = 0.00588297 | 0.00260526 [0.00213148, 0.00309131], SD = 0.000253654 | 0.987914 [0.982600, 0.991984], SD = 0.00246774 |  |  |

Share and Cite

MDPI and ACS Style

Xu, Y.; Lv, Z.; Cai, Y.; Wang, K. A Next-Day Dew Intensity Prediction Model Based on the Improved Hippopotamus Optimization. Sustainability 2026, 18, 1445. https://doi.org/10.3390/su18031445
