Using the existing literature and official government documents as references, a set of original indicators affecting climate change was selected [26,27]. However, because the quality of the information varies greatly, information from the academic literature was treated as the primary source and supplemented by information obtained from other channels. First, when acquiring indicators from a range of information sources, the indicators should be sufficiently independent, and their corresponding data need to be available. Then, the frequency of each indicator should be counted. The initial index selection results are shown in
Table 3. Owing to the large workload, only some of the official and literature sources are shown below.
It should be mentioned that although 17 original indicators were determined, the significance of each indicator was ambiguous. Therefore, rigorous mathematical methods were needed to identify the important indicators, and only indicators with a significant impact on the overall result were retained in the climate change model. First, Stata 15.0 and MATLAB (R2019a) were used to extract the corresponding data. Then, LM was used to select the indicators with significant contributions. In addition, the climate was viewed as a grey system with multivariate interaction; by considering the uncertainty and greyness of the system, the RBF neural network was used to determine the influence weight coefficient of each indicator, and the CC model was finally obtained.
3.2.1. Determination of the Climate Change Model
LM was applied to identify the significant indicators. Once the 17 original indicators were confirmed, the adjacency relationships of the construction diagram were established; the conceptual diagram is shown in
Figure 6.
MATLAB was used to perform the feature mapping: the points with an eigenvalue of zero were removed, and the final projection matrix was constructed from the eigenvectors of the remaining points. The dimension reduction of the indicators was completed once the data were mapped into a low-dimensional space. The results are shown in
Figure 7: the original data were scattered randomly in three-dimensional space, and ordered data in two-dimensional space were obtained after the dimensionality reduction. Finally, the identified important factors are listed in
Table 4.
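The mapping procedure above (build an adjacency graph, remove the zero-eigenvalue components, and project with the remaining eigenvectors) can be sketched in Python. This is a minimal Laplacian-eigenmap-style sketch assuming a k-nearest-neighbour graph with 0/1 weights; the paper's actual adjacency construction (Figure 6) and MATLAB implementation may differ.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmap(X, k=6, out_dim=2):
    """Project X (n_samples x n_features) to out_dim dimensions via a
    Laplacian eigenmap on a k-nearest-neighbour graph."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0           # k nearest, skip self
    W = np.maximum(W, W.T)                               # symmetric adjacency
    D = np.diag(W.sum(axis=1))
    L = D - W                                            # graph Laplacian
    vals, vecs = eigh(L, D)                              # solve L v = lambda D v
    keep = vals > 1e-10                                  # drop (near-)zero eigenvalues
    return vecs[:, keep][:, :out_dim]

embedding = laplacian_eigenmap(np.random.rand(60, 3))    # 3-D scatter -> ordered 2-D
```

The zero-eigenvalue eigenvectors carry no discriminative structure, which is why they are discarded before building the projection matrix.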
The RBF neural network was applied after all the indicators were determined. The process of adopting the RBF neural network was as follows.
1. Preparation of data
MATLAB was used to implement the RBF neural network. The input and output variables of the network were the 11 selected indicators, and the eligible data for these indicators were loaded into MATLAB. Because of the large amount of data and the limits of computer processing efficiency, the data were first subjected to fast Fourier transform (FFT) processing. Then, the data were standardized [54] so that each indicator fell in the interval [−1, 1]. The feature vector of the FFT-processed sequence was then used as the training sequence. In accordance with previous successful experience [55] and multiple attempts, 60% of the data was used for training, 20% for testing, and 20% for validation; this proportion gave the best training performance for the subsequent processes. Furthermore, the maximum number of training times was set to 1000, the learning precision was 0.01, and the number of hidden nodes was generated automatically by the model. The maximum and minimum thresholds of the indicators were 1 and −1, respectively. First, a pre-experiment was undertaken to determine the appropriate number of neurons. Through several experiments, the number of neurons in the hidden layer was set to 10, at which the performance of the model reached 0.96. When the number of neurons was greater than 10, the training effect improved somewhat, but the testing effect was poor. Therefore, it was reasonable to set the number of neurons to 10. The residual and the average residual were required to be less than 0.05, and the number of iterations was 12. The specific structure of the RBF neural network is shown in
Figure 8.
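The training setup described above can be illustrated with a minimal numpy sketch. All data here are synthetic, the FFT step is omitted, and the least-squares training below is a simplification of MATLAB's incremental RBF training; only the [−1, 1] scaling, the 60/20/20 split, and the 10 Gaussian hidden units follow the text.

```python
import numpy as np

def scale_to_unit(X):
    """Min-max scale each column into [-1, 1], matching the thresholds above."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2 * (X - lo) / (hi - lo) - 1

class RBFNet:
    """Minimal Gaussian RBF network: fixed centers, least-squares output weights."""
    def __init__(self, n_hidden=10, sigma=1.0):
        self.n_hidden, self.sigma = n_hidden, sigma

    def _phi(self, X):
        # hidden-layer activations from distances to the centers
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def fit(self, X, y, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = X[rng.choice(len(X), self.n_hidden, replace=False)]
        # output weights fitted by least squares
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# synthetic stand-in for the 11 indicators, with a 60/20/20 split
X = scale_to_unit(np.random.rand(200, 11))
y = X @ np.random.rand(11)
train, test = slice(0, 120), slice(120, 160)   # remaining 40 rows: validation
model = RBFNet(n_hidden=10).fit(X[train], y[train])
pred = model.predict(X[test])
```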
2. Analysis of the error
The root mean square error (RMSE) is a primary index for evaluating the level of error [56]. As indicated in Figure 9, the RMSE output by MATLAB decreased quickly and then tended toward zero after the second iteration, which suggests that the model worked well; the model performance was optimal at the sixth iteration (1.0406). The reliability was then output as R ≥ 0.85912, indicating that the model was highly reliable. Finally, comparing the training data with the pre-retained verification set showed that the two data sets were quite close; hence, the training results were confirmed to be stable.
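The two error measures discussed above can be computed as follows. Interpreting the reliability R as the correlation between targets and network outputs is an assumption, since the paper does not define it explicitly.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, tracked per training iteration."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def reliability(y_true, y_pred):
    """Correlation between targets and outputs (one reading of the R above)."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# toy target/output pair for illustration
y_true = np.array([0.1, 0.2, 0.3, 0.4])
y_pred = np.array([0.12, 0.19, 0.33, 0.38])
print(round(rmse(y_true, y_pred), 4))  # prints 0.0212
```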
3. End training
After the RBF neural network with the preset parameters passed the error test, the model's results could be output. Furthermore, there was a "weight connection one" between the input layer and the hidden layer, and a "weight connection two" between the hidden layer and the output layer; these two connections therefore required further handling. To verify the performance of the RBF neural network, the responses of the 11 output variables needed to be confirmed. As shown in
Figure 10, the training, test, and validation sets were close to each other, and the consistency of the three data sets was indicated by a low level of error. Therefore, the results of the RBF neural network were acceptable, and the training could be terminated.
The weight connection one and weight connection two were collated and analyzed. After calculating the relevant parameters according to Equation (10), the CC model was obtained as follows:
where ε_v is the vth error term: ε_1 represents the error caused by the Pacific decadal oscillation, ε_2 represents the instability error caused by the El Niño phenomenon, ε_3 represents the instability error caused by the La Niña phenomenon, and ε_4 represents the sum of other possible instability errors.
Since there is no general standard for measuring the magnitude of climate change, the level of climate change needed to be defined after obtaining the CC model. This paper selected 189,742 sample data from different regions of the world from 1800 to 2019 to calculate the climate change value. Then, through statistical analysis, the climate change values were assigned corresponding climate levels, as defined in
Table 5.
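The level assignment can be sketched as a simple threshold lookup. The cut-off values below are hypothetical placeholders, not the actual boundaries of Table 5; only the fact that 0.2241 falls in the "normal" band follows the text.

```python
import numpy as np

# Hypothetical level boundaries -- the actual cut-offs are those of Table 5,
# derived statistically from the 189,742 sample values.
LEVELS = [(-np.inf, 0.0, "good"),
          (0.0, 0.3, "normal"),
          (0.3, 0.6, "a little bad"),
          (0.6, np.inf, "bad")]

def climate_level(value):
    """Map a climate change value to its qualitative level."""
    for lo, hi, name in LEVELS:
        if lo <= value < hi:
            return name

print(climate_level(0.2241))  # prints "normal", matching Canada's calculated level
```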
The known values of all the indicators at the 13 stations were averaged and put into the established climate change model. Consequently, the current climate change level of Canada was calculated to be 0.2241, which corresponds to a "normal" level and intuitively indicates that climate change in Canada is still acceptable.
Through analysis of the CC model, it was found that the amount of ice cover had a negative effect on climate change, while "the average temperature" and "global carbon dioxide concentration" were significant factors. Their relationship was interpreted in the context of the global climate, as indicated in
Figure 11, where global temperature and carbon dioxide concentration both show a significant upward trend from 1880 to 2018. The average temperature of Canada from 1981 to 2019 was then used to obtain the temperature change rate (temperature load) and the average temperature of each region. When the temperature load was greater than zero, the temperature had an upward trend [
57,
58]. All temperature loads and average temperatures are visualized on the map of Canada shown in
Figure 12.
In
Figure 12, the temperature loads of all the research areas were greater than zero. The normal state of Canada's climate change is a microcosm of global climate change, reflecting the positive correlation between temperature and carbon dioxide. Furthermore, the area with the most significant temperature load was NU (0.0037), and the temperature loads of the southeast coastal areas were all greater than zero, indicating that the region will continue to warm in the future. On the western coast, the temperature of BC was abnormally high, which may be related to the occurrence of the El Niño and La Niña phenomena in recent years. Furthermore, the fluctuation of the temperature load was significant and showed a clear upward trend in three-dimensional space.
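The paper does not state how the temperature load was computed; reading it as the least-squares trend slope of the 1981–2019 annual series is one plausible interpretation of "temperature change rate," sketched here on synthetic data.

```python
import numpy as np

def temperature_load(years, temps):
    """Least-squares trend slope in deg C/year; positive means a warming trend."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope

# synthetic 1981-2019 annual series with a mild warming trend plus noise
years = np.arange(1981, 2020)
temps = 0.003 * (years - 1981) + np.random.default_rng(0).normal(0.0, 0.05, len(years))
load = temperature_load(years, temps)
```

A load of the magnitude reported for NU (0.0037) would correspond to warming of roughly 0.37 °C per century under this reading.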
In addition, the contribution of sea surface temperature (SST) to climate change was quite large [
59]. After obtaining the data of SST (data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their website at
https://www.esrl.noaa.gov/psd/), the temperature profile of the SST worldwide from 1800 to 2019 was produced. As indicated in
Figure 13, the larger the yellow area in the figure, the wider the coverage of the SST high-temperature range. Before 1880, the high-temperature center of the SST was concentrated around the area of the eastern Pacific Ocean, and then the central area of the high temperature continuously expanded [
34]. The area with the highest SST was concentrated in the central Pacific Ocean and the Indian Ocean. In eastern Africa, southern Asia, and the Indian Ocean, the average SST increased from the range [22, 27] °C to [27, 30] °C. The increase in SST near the west coast of Canada was consistent with the rise in average temperature in the coastal area; thus, the contribution of SST to climate change cannot be neglected.
3.2.2. Predictions of the CC Model
A prediction of the possible trend of climate change can provide useful guiding information and can help the public, government, and policymakers form a holistic view of climate change. Meanwhile, to verify the predictive performance of the established model, two different predictions were conducted, and their differences were compared in detail.
First, the CC model was used to obtain the climate change values from 1800 to 2019, and these values were used as the input variable of the RBF neural network. The maximum number of training times was set to 500, the learning precision was 0.01, and the number of hidden nodes was generated automatically by the model. The prediction period was from 2019 to 2044 (the next 25 years). The maximum and minimum thresholds of the indicator were 1 and −1, respectively. After several pre-experiments, the number of neurons was set to 7 and the number of iterations to 9. When the residual and the average residual were less than 0.05, the training was terminated; the resulting prediction is called "Prediction 1."
Second, the original data set was divided into a test set, a training set, and a verification set, and the test set was processed by the RBF neural network directly. The maximum number of training times was set to 1000, the learning precision was 0.01, the number of hidden nodes was generated automatically, and the forecast period was the next 25 years. The maximum and minimum thresholds of the indicator were 1 and −1, respectively. The numbers of hidden-layer neurons and iterations were both set to 10, under which the performance of the model was best (1.002). When the average residual was less than 0.05, the training was terminated. The RBF neural network output the initial predicted values, and the CC model was then applied to these values to output the final predicted results ("Prediction 2"). The results of the two predictions are shown in
Figure 14.
It can be seen that the climate level will gradually shift from "normal" to "a little bad." In 2025, the global climate level will break through the normal level, and in the long term climate change will develop in the direction of "a little bad." Additionally, the forecast of "Prediction 1" was close to that of "Prediction 2," indicating that the CC model was reliable and had good applicability. In 2053, "Prediction 1" shows a downward trend, while "Prediction 2" shows a sudden upward trend; the likely reason for this divergence is a scale effect caused by the data. The CC model's prediction therefore remained stable for 34 years. Since climate change is a highly complex phenomenon, predicting changes over decades from a limited amount of data involves numerous uncertainties. Subsequent optimization of the model could focus on improving its long-term prediction stability.
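The recursive use of a one-step model to forecast the next 25 years can be sketched as follows; the window length and the toy stand-in model below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def forecast(model, history, steps=25, window=5):
    """Recursive multi-step forecast: each prediction is fed back as input."""
    seq = list(history)
    for _ in range(steps):
        x = np.array(seq[-window:])   # last `window` climate change values
        seq.append(float(model(x)))
    return seq[len(history):]

# toy one-step model (moving average with mild drift) standing in for the RBF net
toy = lambda x: x.mean() + 0.01
hist = [0.20, 0.21, 0.22, 0.22, 0.2241]
print(len(forecast(toy, hist)))  # prints 25: one value per forecast year
```

Feeding predictions back as inputs is also why long-horizon forecasts accumulate uncertainty, consistent with the divergence of the two predictions after 34 years.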