1. Introduction
Rising industrialization and intensified human activities have significantly exacerbated air pollution, adversely affecting global environmental quality and public health. Pollutants such as PM2.5, PM10, sulfur dioxide (SO2), and nitrogen dioxide (NO2) are known to contribute to respiratory and cardiovascular diseases, acid rain formation, and elevated cancer risks [1,2,3]. Accurate monitoring and forecasting of air quality using the Air Quality Index (AQI) [4] are essential for supporting effective environmental management and informed policy-making [5]. While time-series forecasting techniques have been extensively applied across diverse domains such as electricity generation [6,7], traffic management [8,9], meteorology [10,11], and finance [12], their application in air quality monitoring presents unique opportunities to explore the temporal dependencies and structural dynamics of pollutant behavior.
Transformer-based architectures have dominated the landscape of air quality prediction in recent years [13,14]. However, recent studies [15,16] have revealed a paradigm shift toward simpler linear models that achieve competitive or even superior performance across a variety of forecasting tasks. In particular, single-layer linear models have demonstrated notable advantages over many complex Transformer-based approaches in both predictive accuracy and computational efficiency. These findings highlight the underexplored potential of symmetrically structured linear frameworks in air quality forecasting. Unlike self-attention mechanisms, which exhibit permutation invariance and anti-ordering tendencies, linear analytical approaches inherently preserve temporal sequence symmetry. This alignment with the intrinsic structure of air quality time series makes linear models naturally compatible with such data. Their structural simplicity, training efficiency, and strong performance on standard evaluation metrics have contributed to growing attention in the recent literature.
Despite their high interpretability and low computational complexity, linear approaches encounter significant challenges in air quality forecasting. They struggle to disentangle complex symmetrical temporal structures in pollution data, frequently overlooking dynamic trends and long-range dependencies among pollution sources. Moreover, they often fail to capture periodic seasonal variations in air quality indicators and exhibit high sensitivity to random fluctuations and sensor noise, which limits their generalizability across diverse environmental monitoring contexts. Furthermore, non-decomposition forecasting models make it nearly impossible to independently evaluate the predictive performance of symmetrically related components within air quality time series. Pollution trends and periodic patterns are entangled with random fluctuations and measurement noise, hindering targeted improvements. In addition, conventional evaluation metrics typically operate in a point-wise manner. They treat each forecasted point independently and average the results, which obscures temporal correlations and makes it difficult to assess the influence of the sequential structure on overall prediction quality.
To address these limitations, we propose a symmetric decomposition–modeling–evaluation framework specifically designed for air quality prediction, as illustrated in Figure 1. Our approach first decomposes the air quality time series into three symmetrically balanced components: a macroscopic trend term, a regular periodic term, and an undulating fluctuation term. This decomposition strategy effectively disentangles complex temporal patterns in pollution data and enables the modeling of reliable dependencies through structurally tailored designs for each component. Such structured separation is difficult to achieve using traditional linear models. Our study investigates linear modeling strategies that are aligned with the intrinsic symmetrical properties of the three decomposed components in air quality data. For the trend component, we adopt a multiscale fusion approach that integrates both macro and micro perspectives to uncover long-term patterns and underlying structures in pollution concentration sequences. Periodic features are modeled using complex-valued neural networks in the frequency domain, which effectively capture cyclical behaviors and their symmetric interactions within urban and industrial pollution systems. For the fluctuation component, we employ a patch-mixing strategy that models localized regions, enhancing robustness against random disturbances and measurement noise present in air quality sensor data. Through tri-decomposition and component-specific modeling strategies that respect inherent symmetrical relationships, our model captures temporal structures in pollution data with improved precision, thereby enhancing the overall prediction accuracy for air quality time series. Furthermore, we introduce an evaluation framework that corresponds to the triple-decomposition modeling approach. This framework includes customized metrics for each symmetrically related component, ensuring accurate and consistent assessment of prediction quality across all decomposition elements.
Our approach evaluates the predicted air quality series in a holistic manner, with a focus on the predictive accuracy, interpretability, and improvability of the entire time sequence. It captures the model’s ability to preserve temporal-order relationships among symmetrically related components, which traditional evaluation metrics fail to reflect, as they independently assess each forecasted point and average the results. To assess the trend component, we introduce the Residual Distribution Symmetry (RDS) metric, constructed using the Kolmogorov–Smirnov test [17]. RDS evaluates the distribution of the prediction residuals to determine whether the long-term trend in the predicted series consistently follows the correct direction. For the periodic component, we propose the Temporal Symmetry Correlation (TSC) metric, based on autocorrelation [18], which measures the remaining periodicity in the residuals to assess how well periodic structures are captured. The fluctuation component is evaluated using the classical Mean Absolute Error (MAE), which is less sensitive to outliers caused by unpredictable and irreducible noise inherent in air quality sensor data. The strength of this indicator system lies in its ability to provide comprehensive and component-specific evaluations while explicitly respecting the symmetrical structure of decomposed time series. It supports targeted refinement of forecasting models and promotes the development of customized approaches that align with the natural symmetries of environmental data patterns.
Our contributions are summarized as follows:
We propose a symmetric decomposition–modeling–evaluation framework for air quality forecasting. This framework separates time series into trend, periodic, and fluctuation components, enabling our linear approach to achieve higher accuracy and improved interpretability compared to Transformer-based architectures.
We design component-specific modeling strategies that leverage the inherent symmetrical properties of air quality data. These include multiscale fusion for trends, complex-valued neural networks in the frequency domain for periodicities, and patch-mixing techniques for local fluctuation modeling. Our method achieves competitive performance on multiple real-world datasets.
We develop symmetry-aware evaluation metrics tailored to each component: Residual Distribution Symmetry (RDS) for trends, Temporal Symmetry Correlation (TSC) for periodic patterns, and MAE for fluctuations. These metrics support targeted assessment and refinement across the full spectrum of decomposition components.
2. Related Works
2.1. Sequential Prediction Methods
Time-series prediction methods for air quality forecasting have evolved substantially, progressing from traditional statistical models to advanced deep learning architectures capable of modeling the inherent symmetrical properties of environmental data. Early deep learning approaches employed Recurrent Neural Networks (RNNs) to capture temporal state transitions [19,20,21], and Convolutional Neural Networks (CNNs) applied convolution kernels along the time axis to extract local temporal features [22,23,24,25]. However, both RNN- and CNN-based models suffer from limited receptive fields, which restrict their ability to capture long-range symmetrical dependencies within pollution sequences. The introduction of Transformer architectures [26] represents a paradigm shift in air quality forecasting [27,28,29] owing to their superior capacity for global dependency modeling. Several notable adaptations have emerged, such as iTransformer [30], which restructures the embedding scheme to emphasize multivariate feature correlations over time-step progression, and PatchTST [31], which segments time series into local patches to preserve fine-grained semantic information. Despite these advancements, the permutation-invariant nature of self-attention mechanisms fundamentally contradicts the sequential and ordered structure of air quality time series, potentially compromising temporal coherence and prediction reliability. This limitation has been substantiated by recent findings, which show that linear-based models can match or exceed Transformer-based approaches in both accuracy and efficiency [32].
The resurgence of linear modeling has led to the development of several promising methods for environmental time-series forecasting. For instance, DLinear [15] treats each multivariate input series as independent univariate components, while RLinear [33] introduces reversible normalization to address distributional constraints in linear mappings. TiDE [34] leverages multilayer perceptron-based encoder–decoder structures to model both dynamic and static environmental variables. In parallel, N-HiTS [35] applies multirate sampling and hierarchical interpolation to facilitate smooth, multi-step forecasting of air quality indicators. Although these models contribute to improved accuracy and efficiency from various design perspectives, most lack systematic frameworks for component-wise prediction and evaluation. In contrast, our proposed tri-decomposition framework explicitly leverages the natural symmetry among the trend, periodicity, and fluctuation components in air quality data. This enables targeted modeling strategies that respect the unique symmetrical structures and temporal relationships of each component.
2.2. Decomposition-Based Approaches
Decomposition methods have attracted considerable attention in air quality forecasting, as they enable the separation of complex time series into interpretable components with distinct symmetrical structures. Traditional decomposition techniques typically divide air quality data into trend, seasonal, and irregular components, allowing for more focused and modular modeling strategies. Recent deep learning–based methods further incorporate decomposition to enhance generalization and robustness. For example, Autoformer [36] and FEDformer [37] apply moving average operations to extract seasonal and trend components, while DFNet [38] adopts a three-branch structure to simultaneously learn trend, seasonal, and residual components from pollution data. These decomposition strategies have been shown to improve performance on various air quality time-series tasks. Liu et al. [39] extended this paradigm by combining coarse-grained and fine-grained decomposition to capture temporal modes at different resolutions, highlighting the benefits of multiscale representations. The Multi-Granularity Decomposition (MGD) approach proposed by Teng et al. [40] further enhances this idea by integrating LSTM with attention mechanisms across multiple time granularities. However, these approaches still face structural limitations. Li et al. [41] reported that MGD struggles to accommodate the heterogeneity present in decomposed subsequences within complex air quality data. Additionally, most decomposition-based models treat each component independently, making it difficult to model the interactions and symmetrical dependencies between trend, seasonal, and residual structures. Such limitations may lead to degraded predictive performance, especially in highly dynamic pollution environments. To mitigate these issues, some studies have explored hybrid frameworks that combine decomposition with the Mixture-of-Experts (MoE) technique. For instance, Ji et al. [42] combined LSTM and Extreme Learning Machine to model decomposed components at multiple resolutions. Wang et al. further improved prediction accuracy by integrating MoE with deep neural architectures, offering adaptive specialization for heterogeneous sequence components.
2.3. Evaluation Metrics for Time-Series Forecasting
Conventional evaluation metrics for time-series forecasting are primarily derived from regression loss functions [43,44,45]. These metrics are designed to quantify point-wise discrepancies between predicted and observed values. Common examples include Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Symmetric Mean Absolute Percentage Error (SMAPE), and Normalized Mean Squared Error (NMSE). Although these metrics offer useful summaries of overall prediction accuracy, they assess each time point independently and ignore the sequential dependencies within time series. As a result, they fail to capture the structural coherence and symmetry properties that are essential in air quality forecasting. Specifically, point-wise metrics cannot assess how well models preserve the symmetrical characteristics of decomposed components such as trend, periodicity, and fluctuation. Recognizing this limitation, recent research has explored alternative evaluation methods that account for the temporal structure and component-specific symmetry in time-series predictions.
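This order-blindness is easy to demonstrate. In the sketch below (a NumPy illustration of ours; the function name `pointwise_metrics` is not from any library), applying the same random shuffle to both the observed and predicted series leaves every point-wise metric unchanged, even though the temporal structure is destroyed:

```python
import numpy as np

def pointwise_metrics(y, yhat):
    """Point-wise metrics average per-timestep errors, so they are
    invariant to any re-ordering applied identically to both series."""
    e = y - yhat
    return {
        "MSE": float(np.mean(e ** 2)),
        "MAE": float(np.mean(np.abs(e))),
        "MAPE": float(np.mean(np.abs(e / y))),  # assumes y has no zeros
    }

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 4 * np.pi, 100)) + 2.0       # strictly positive series
yhat = y + rng.normal(0, 0.1, size=100)                 # noisy "forecast"
perm = rng.permutation(100)
m_ordered = pointwise_metrics(y, yhat)
m_shuffled = pointwise_metrics(y[perm], yhat[perm])     # identical metric values
```

Because `m_ordered` and `m_shuffled` coincide, these metrics by construction cannot penalize a model for violating temporal order, which motivates the component-specific metrics introduced next.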
To advance this direction, we propose a set of specialized metrics that directly evaluate the symmetry of decomposed components in air quality time series. For the periodic component, we introduce the Temporal Symmetry Correlation (TSC) metric, which employs autocorrelation [18] to detect residual periodicity and assess how effectively cyclical patterns are captured. For the trend component, we design the Residual Distribution Symmetry (RDS) metric, using the Kolmogorov–Smirnov test [17] to determine whether the distribution of residuals aligns with the expected long-term trend direction. Together, these symmetry-aware metrics form a component-specific evaluation framework that enables more nuanced assessment and supports the targeted refinement of prediction models. This framework emphasizes not only accuracy but also the preservation of structural balance in symmetrically decomposed time series.
3. Methodology
3.1. Component-Wise Linear Model
By decomposing the time series into three symmetrically distinct components (trend, periodicity, and fluctuation), we enable targeted modeling strategies that capture the unique symmetrical characteristics of each component while maintaining computational efficiency, as illustrated in Figure 2. This section details our symmetry-aware modeling methods for each decomposed component.
3.1.1. Multiscale Mixing for the Trend Component
The trend component in our model is extracted using a recursive average pooling strategy rather than conventional statistical decomposition methods such as STL (seasonal-trend decomposition using LOESS). While STL is a popular approach for extracting trend and seasonal components via local regression, it is non-differentiable and not readily adaptable for integration with neural networks. Its iterative fitting process and sensitivity to parameter settings also limit its robustness when dealing with high-frequency or multivariate environmental data. Recursive average pooling, in contrast, offers a deterministic, parameter-free decomposition that preserves scale-wise symmetry and supports differentiability. This makes it especially suitable for incorporation into end-to-end learnable forecasting frameworks. Moreover, its hierarchical structure naturally enables multi-resolution trend representation, which aligns with the design goals of our component-wise linear modeling strategy.
Trend components in air quality data exhibit natural stability across different temporal scales, with macroscopic patterns often mirroring microscopic variations in a hierarchically balanced structure. Our multiscale fusion approach exploits this symmetrical property by capturing trend patterns at different resolutions, enabling a comprehensive understanding of both long-term pollution trajectories and short-term directional shifts.
We begin by extracting the trend component $T$ from the original air quality time series $X$ of length $n$ through a symmetric sliding window of size $S$:

$$T_t = \frac{1}{S} \sum_{i=t-\lfloor S/2 \rfloor}^{t+\lfloor S/2 \rfloor} X_i, \quad t = 1, \dots, n.$$

The detrended series $D$ is then obtained by subtracting this trend from the original series:

$$D = X - T.$$

To capture the symmetrical relationships across different temporal scales, we decompose the trend component $T$ into $M$ symmetrically balanced scales through recursive average pooling. This creates a multi-resolution hierarchy $\{T^{(0)}, T^{(1)}, \dots, T^{(M-1)}\}$, where $T^{(0)} = T$ represents the original fine-grained trend and $T^{(M-1)}$ captures the most macroscopic pollution patterns:

$$T^{(m)} = \mathrm{AvgPool}\big(T^{(m-1)}\big), \quad m = 1, \dots, M-1.$$

Our modeling approach respects the symmetrical structure of this multiscale decomposition through a bilateral prediction strategy. First, we apply scale-specific linear mappings $W^{(m)}$ to each resolution level, capturing the unique symmetrical patterns at each scale. Then, we implement a symmetrical interaction pathway that bidirectionally connects adjacent scales through linear mappings $V_{\downarrow}^{(m)}$ and $V_{\uparrow}^{(m)}$:

$$\hat{T} = \bigoplus_{m=0}^{M-1} W^{(m)}\Big(T^{(m)} + V_{\downarrow}^{(m)}\big(T^{(m-1)}\big) + V_{\uparrow}^{(m)}\big(T^{(m+1)}\big)\Big),$$

where the corresponding neighbor term is omitted at the boundary scales $m = 0$ and $m = M-1$, and $\bigoplus$ denotes the symmetric fusion of scaled predictions into a unified sequence. This bidirectional approach ensures that symmetrical information flows both from coarse to fine scales and vice versa, creating a balanced representation of air quality trends across all temporal resolutions. The trend component prediction $\hat{T}$ is evaluated through our Residual Distribution Symmetry metric, which specifically assesses how well the model preserves the symmetrical distributional properties of air quality trends.
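The trend extraction and recursive average pooling can be sketched as follows. This is a minimal NumPy illustration of ours: the function names, the edge-padding choice for the sliding window, and the factor-of-two pooling are illustrative assumptions, and the learnable scale-specific mappings and bidirectional fusion are omitted for brevity.

```python
import numpy as np

def extract_trend(x, S):
    """Centered moving average over a symmetric window of size S,
    with edge padding so the output keeps the input length."""
    pad = S // 2
    xp = np.pad(x, (pad, S - 1 - pad), mode="edge")
    return np.convolve(xp, np.ones(S) / S, mode="valid")

def multiscale_hierarchy(trend, num_scales):
    """Recursive average pooling: scale 0 is the fine-grained trend,
    and each subsequent scale halves the resolution of the previous one."""
    scales = [trend]
    for _ in range(num_scales - 1):
        t = scales[-1]
        if len(t) % 2:              # drop a trailing point if the length is odd
            t = t[:-1]
        scales.append(t.reshape(-1, 2).mean(axis=1))
    return scales

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)
trend = extract_trend(x, S=25)
hier = multiscale_hierarchy(trend, num_scales=4)
print([len(t) for t in hier])   # → [256, 128, 64, 32]
```

In the full model, each level of `hier` would be fed through its own linear mapping, with adjacent levels exchanging information before fusion.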
3.1.2. Frequency Reconstruction Modeling for the Periodicity Component
Periodic patterns in air quality data exhibit rotational symmetry in the frequency domain, with cyclical pollution variations repeating at regular intervals. Our frequency-domain modeling approach leverages this natural symmetry by operating directly on the spectral representation of air quality data, enabling precise capture of cyclical patterns while preserving their phase relationships.
We first transform the detrended series $D$ to the frequency domain using the Fast Fourier Transform (FFT):

$$F = \mathrm{FFT}(D).$$

To isolate the symmetrically periodic components, we apply a low-pass filter with a cutoff frequency $C$, separating the low-frequency periodic patterns from higher-frequency fluctuations:

$$F_{\mathrm{low}}[k] = \begin{cases} F[k], & |k| \le C, \\ 0, & \text{otherwise}. \end{cases}$$

Traditional frequency-domain filtering approaches typically apply fixed-band filtering to isolate low-frequency components, but they lack learnable flexibility and often discard valuable phase information. Moreover, classical seasonal decomposition techniques like STL are non-differentiable and do not support gradient-based optimization, making them incompatible with end-to-end neural architectures.

Our symmetry-preserving modeling approach employs complex-valued linear transformations that maintain both amplitude and phase relationships in the frequency domain. This ensures that the rotational symmetry of periodic patterns is preserved during prediction:

$$\hat{P} = \mathrm{IFFT}\big(W_{\mathbb{C}}\, F_{\mathrm{low}}\big),$$

where $W_{\mathbb{C}}$ represents our complex-valued linear transformation with frequency interpolation ratio $\eta$ and $O$ is the prediction horizon. This approach preserves the rotational symmetry of periodic patterns by maintaining their phase relationships during prediction, a critical property for accurately forecasting cyclical pollution behaviors.
The periodicity component prediction is evaluated through our Temporal Symmetry Correlation metric, which specifically assesses how well the model preserves the symmetrical temporal correlations in periodic air quality patterns.
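A minimal sketch of this frequency-domain pipeline is given below. It is an illustration of ours, not the paper's exact implementation: an identity matrix stands in for the learned complex-valued weights, and the amplitude rescaling by `horizon / n` is one plausible way to realize frequency interpolation when the output length differs from the input length.

```python
import numpy as np

def periodic_forecast(detrended, cutoff, horizon):
    """Low-pass the one-sided spectrum at `cutoff` bins, apply a
    complex-valued linear map, and invert to a series of length `horizon`."""
    n = len(detrended)
    low = np.fft.rfft(detrended)[:cutoff]        # keep the first `cutoff` bins
    W = np.eye(cutoff, dtype=complex)            # stand-in for learned complex weights
    mapped = W @ low
    out_spec = np.zeros(horizon // 2 + 1, dtype=complex)
    out_spec[:cutoff] = mapped * (horizon / n)   # rescale amplitude for the new length
    return np.fft.irfft(out_spec, n=horizon)

t = np.arange(128)
d = np.cos(2 * np.pi * 4 * t / 128)              # a clean low-frequency cycle
p_hat = periodic_forecast(d, cutoff=8, horizon=128)
```

With a purely low-frequency input and identity weights, the reconstruction matches the input exactly, confirming that both amplitude and phase survive the round trip; a trained complex matrix would reshape the spectrum rather than pass it through.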
3.1.3. Patch Fusion for the Fluctuation Component
Fluctuation components in air quality data exhibit local reflection symmetry, with random variations often displaying self-similar patterns within confined temporal neighborhoods. Our patch-mixing approach exploits this local symmetry by processing air quality fluctuations through overlapping patches that capture localized self-similar patterns while filtering out asymmetric noise.
We first isolate the fluctuation component by removing both the trend and periodicity elements from the original series:

$$U = X - T - P.$$

To leverage local symmetrical patterns, we divide the fluctuation sequence $U$ into overlapping patches of length $L$ with 50% overlap, ensuring reflective symmetry at patch boundaries:

$$U_j = U\Big[(j-1)\tfrac{L}{2} + 1 : (j-1)\tfrac{L}{2} + L\Big], \quad j = 1, \dots, J.$$

Our symmetry-preserving modeling approach applies a set of patch-specific linear transformations $W_j$ that respect the local reflection symmetry within each patch:

$$\hat{U} = G\Big(\bigoplus_{j=1}^{J} W_j(U_j)\Big),$$

where $G$ is a global linear transformation with an activation function and $\bigoplus$ represents the symmetric fusion of patch-level features into a unified sequence. This approach preserves the local symmetrical properties of fluctuation patterns while effectively filtering out asymmetric noise elements. The patch-based modeling approach assumes that short-term fluctuations in air quality data often display approximate local reflection patterns, particularly around local extrema. This is an empirically observed characteristic in AQI residuals, where mirrored deviations before and after central peaks or valleys are common. While we do not explicitly model or quantify this symmetry, our patch fusion strategy is designed to preserve such localized structures during the reconstruction process. Related behavior has been discussed in classical residual modeling literature [21].
The fluctuation component prediction is evaluated using the Mean Absolute Error (MAE) metric, which provides a balanced assessment of prediction accuracy without being overly sensitive to outliers in the inherently noisy fluctuation component.
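The patch splitting and fusion can be sketched as below. This is a toy NumPy version of ours: identity matrices stand in for the learned per-patch maps W_j, the global transformation G is omitted, and overlap-averaging is used as a simple symmetric fusion rule.

```python
import numpy as np

def make_patches(u, L):
    """Overlapping patches of length L with 50% overlap (stride L/2)."""
    stride = L // 2
    return np.stack([u[s:s + L] for s in range(0, len(u) - L + 1, stride)])

def patch_fusion(u, L):
    """Apply a per-patch linear map (identity stand-in for learned W_j),
    then fuse by overlap-add, averaging positions covered by two patches."""
    stride = L // 2
    out = np.zeros(len(u))
    cnt = np.zeros(len(u))
    for j, p in enumerate(make_patches(u, L)):
        W = np.eye(L)                  # hypothetical learned map for patch j
        s = j * stride
        out[s:s + L] += W @ p
        cnt[s:s + L] += 1
    return out / cnt

u = np.random.default_rng(1).normal(size=64)   # synthetic fluctuation series
fused = patch_fusion(u, L=8)                    # reconstructs u under identity maps
```

Because each interior position is covered by exactly two patches, the averaging at overlaps gives the "reflective" treatment of patch boundaries; trained `W` matrices would additionally smooth asymmetric noise within each patch.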
By employing these symmetry-aware modeling strategies tailored to each decomposed component, our framework achieves superior air quality prediction performance while maintaining computational efficiency and respecting the natural symmetrical properties inherent in environmental time-series data.
3.2. Symmetry-Aware Evaluation Metrics
3.2.1. Residual Distribution Symmetry
The symmetry-aware evaluation of air quality prediction models requires metrics that assess not only prediction accuracy but also how well models preserve the inherent symmetrical properties of environmental time-series data. When evaluating trend components specifically, the concept of distributional symmetry becomes particularly important. A perfectly captured trend should result in residuals that exhibit symmetrical distribution around zero, reflecting the balanced nature of random variations around the true trend line. Our Residual Distribution Symmetry (RDS) metric quantifies this crucial symmetrical property by analyzing how closely the residual distribution adheres to bilateral symmetry around its central axis. This approach fundamentally differs from traditional point-wise metrics by focusing on the symmetrical balance of the entire residual distribution rather than individual error magnitudes.
For each sample $i$, time point $t$, and feature $f$, we first calculate the residuals $r_{i,t,f}$ as the difference between the actual pollution value $x_{i,t,f}$ and the predicted value $\hat{x}_{i,t,f}$:

$$r_{i,t,f} = x_{i,t,f} - \hat{x}_{i,t,f}.$$

To evaluate distributional symmetry specifically, we examine how these residuals are distributed around their median value rather than the mean, as the median is more robust to outliers and better represents the central axis of symmetry:

$$m_f = \mathrm{median}\big(\{r_{i,t,f}\}_{i,t}\big).$$

The key insight of our symmetry-aware approach lies in comparing the distribution of residuals above this central axis with those below it. In a perfectly symmetrical distribution, the pattern of deviations above the median should mirror those below it. To quantify this bilateral symmetry, we first calculate the distance of each residual from the median:

$$d_{i,t,f} = r_{i,t,f} - m_f.$$

We then separate these distances into two groups based on whether they lie above or below the median:

$$A_f = \{\, d_{i,t,f} : d_{i,t,f} > 0 \,\}, \qquad B_f = \{\, -d_{i,t,f} : d_{i,t,f} < 0 \,\}.$$

The empirical cumulative distribution functions (ECDFs) for these two groups are computed as

$$F_{A_f}(x) = \frac{1}{|A_f|} \sum_{a \in A_f} \mathbb{1}[a \le x], \qquad F_{B_f}(x) = \frac{1}{|B_f|} \sum_{b \in B_f} \mathbb{1}[b \le x].$$

In a perfectly symmetrical distribution, these two ECDFs would be identical. The maximum difference between them provides a measure of the deviation from perfect bilateral symmetry:

$$D_f = \sup_x \big| F_{A_f}(x) - F_{B_f}(x) \big|.$$

This symmetry deviation score $D_f$ ranges from 0 to 1, where 0 indicates perfect bilateral symmetry (the distributions above and below the median are identical) and 1 indicates complete asymmetry. Our final Residual Distribution Symmetry (RDS) metric aggregates these scores across all features and converts them to a symmetry measure:

$$\mathrm{RDS} = 1 - \frac{1}{F} \sum_{f=1}^{F} D_f.$$

The RDS metric ranges from 0 to 1, where values closer to 1 indicate higher bilateral symmetry in the residual distribution around the central axis. This symmetry-aware evaluation directly addresses whether the model has captured the balanced nature of the true trend in air quality data, without making assumptions about the specific distribution type (unlike normality tests).
In practical terms, a difference of 0.05 in RDS typically corresponds to a noticeable shift in residual symmetry, indicating reduced prediction bias and more balanced trend estimation—especially valuable in environmental forecasting applications where directional error matters.
By employing this bilateral symmetry analysis, our framework provides a truly symmetry-aware methodology for evaluating trend modeling in air quality prediction systems. This approach recognizes that effective trend capture should preserve the natural symmetrical balance of random variations around the true trend line, a critical property that traditional error metrics overlook when aggregating point-wise deviations without consideration for their balanced distribution.
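The RDS computation follows directly from its definition. The sketch below is a single-feature NumPy illustration of ours (the function name `rds` is not from any library): it splits residual distances at the median and computes the two-sample Kolmogorov–Smirnov statistic between the upper and lower groups.

```python
import numpy as np

def rds(actual, pred):
    """Residual Distribution Symmetry for one feature: 1 minus the
    two-sample KS statistic between residual distances above and
    below the residual median."""
    r = (actual - pred).ravel()
    m = np.median(r)
    above = np.sort(r[r > m] - m)          # distances above the central axis
    below = np.sort(m - r[r < m])          # mirrored distances below it
    if len(above) == 0 or len(below) == 0:
        return 0.0
    grid = np.sort(np.concatenate([above, below]))
    F_above = np.searchsorted(above, grid, side="right") / len(above)
    F_below = np.searchsorted(below, grid, side="right") / len(below)
    return 1.0 - float(np.max(np.abs(F_above - F_below)))

rng = np.random.default_rng(0)
rds_sym = rds(rng.normal(size=5000), np.zeros(5000))       # symmetric residuals
rds_skew = rds(rng.exponential(size=5000), np.zeros(5000)) # one-sided residuals
```

Symmetric (Gaussian-like) residuals score near 1, while heavily skewed residuals score markedly lower, matching the intended reading of the metric.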
3.2.2. Temporal Symmetry Correlation
Cyclical patterns in air quality data exhibit fundamental symmetrical properties across time, with pollution concentrations often displaying mirror-like repetitions across days, weeks, or seasons. Effectively capturing these temporal symmetries is crucial for accurate air quality prediction. Our Temporal Symmetry Correlation (TSC) metric evaluates how well a forecasting model preserves these inherent symmetrical patterns in the periodicity components of air quality time series. The core insight of our approach lies in analyzing the residual autocorrelation structure: when a model successfully captures the symmetrical periodic patterns in air quality data, the residuals should exhibit minimal temporal correlation at any lag. Conversely, remaining autocorrelation in residuals indicates symmetrical temporal patterns that the model failed to capture, revealing specific deficiencies in modeling periodic behaviors.
For each air quality sample $i$, timestamp $t$, and pollution indicator $f$, we first calculate the prediction residuals:

$$r_{i,t,f} = x_{i,t,f} - \hat{x}_{i,t,f}.$$

To properly assess temporal symmetry, we need to remove any potential trend biases from these residuals. We achieve this by centering the residuals around their temporal means:

$$\tilde{r}_{i,t,f} = r_{i,t,f} - \frac{1}{T} \sum_{t'=1}^{T} r_{i,t',f}.$$

This centering operation ensures that we focus solely on the temporal symmetry properties without contamination from potential trend components. The symmetrical autocorrelation function reveals how residual values at different time points mirror each other across various time lags. For a given lag $k$, we compute the symmetrical autocorrelation coefficient $\rho_k$ as

$$\rho_k = \frac{\sum_{t=1}^{T-k} \tilde{r}_{t}\, \tilde{r}_{t+k}}{\sum_{t=1}^{T} \tilde{r}_{t}^{\,2}},$$

where the sample and feature indices are omitted for clarity and the final score averages over samples and features. This formulation captures the degree to which the residuals at time $t$ mirror those at time $t+k$, normalized to ensure comparability across different scales. The symmetry in this correlation reveals whether temporal patterns remain in the residuals that should have been captured by the model. To comprehensively evaluate symmetrical temporal relationships, we examine the correlations across all possible time lags. While the autocorrelation at lag zero ($\rho_0 = 1$) is always unity and trivially symmetrical, we focus on non-zero lags to assess genuine temporal symmetry patterns. Our Temporal Symmetry Correlation (TSC) metric aggregates the absolute values of these correlation coefficients:

$$\mathrm{TSC} = 1 - \frac{1}{T-1} \sum_{k=1}^{T-1} |\rho_k|.$$
The TSC metric ranges from 0 to 1, where values closer to 1 indicate minimal remaining autocorrelation in the residuals, suggesting the model has successfully captured the symmetrical temporal patterns in the air quality data. Lower TSC values reveal that significant temporal symmetries remain in the residuals, pointing to specific periodic patterns the model failed to address.
This symmetry-aware evaluation approach offers several advantages over traditional metrics. First, it specifically targets the periodicity component of air quality time series, allowing focused assessment of how well models capture cyclical pollution patterns like daily traffic-related peaks or seasonal variations. Second, by analyzing the symmetrical structure of temporal correlations rather than point-wise errors, TSC provides deeper insights into the model’s ability to preserve the inherent temporal symmetries in environmental data.
Moreover, the TSC metric facilitates precise model refinement by revealing exactly which temporal symmetry patterns remain uncaptured. For instance, a strong autocorrelation at specific lags might indicate daily or weekly pollution cycles that require additional modeling attention. By directly measuring how well models preserve these crucial symmetrical relationships, our approach enables more targeted improvements in air quality forecasting systems.
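A compact sketch of the TSC computation is given below. It is an illustration of ours: the lag range is truncated to `max_lag` (an implementation choice for numerical stability at long lags, where the definition in principle sums over all non-zero lags).

```python
import numpy as np

def tsc(residuals, max_lag=48):
    """Temporal Symmetry Correlation: 1 minus the mean absolute
    autocorrelation of the centered residuals over lags 1..max_lag."""
    r = residuals - residuals.mean()       # remove residual trend bias
    denom = np.sum(r * r)
    rho = [np.sum(r[:-k] * r[k:]) / denom for k in range(1, max_lag + 1)]
    return 1.0 - float(np.mean(np.abs(rho)))

rng = np.random.default_rng(0)
white = rng.normal(size=2000)                        # no structure left in residuals
daily = np.sin(2 * np.pi * np.arange(2000) / 24)     # an uncaptured 24-step cycle
```

White-noise residuals score close to 1, while residuals that still contain a daily cycle score much lower, with the large autocorrelations at lags near multiples of 24 pinpointing the uncaptured periodicity.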
4. Experiments
To rigorously evaluate the performance, generalizability, and domain specificity of our proposed framework, we designed a two-stage experimental setup. In the first stage (Section 4.1), we validate our method on a collection of standard public multivariate time-series datasets, covering domains such as electricity, traffic, finance, and meteorology. This enables direct comparison with existing state-of-the-art forecasting models under unified benchmarking conditions. In the second stage (Section 4.2), we shift the focus to real-world air quality datasets to assess our model’s effectiveness in capturing symmetrical temporal structures and component-level interactions specific to air pollution data. This dual-phase evaluation not only ensures empirical robustness but also highlights the adaptability of our method across both general and domain-specific forecasting tasks.
4.1. Time-Series Forecasting and Model Analysis
4.1.1. Comparison Method and Dataset Descriptions
To evaluate the general forecasting capabilities of our proposed framework, we conducted a comprehensive comparison with a broad range of state-of-the-art time-series models across multiple public benchmark datasets. Our baseline selection included Transformer-based models, such as Autoformer [36], FEDformer [37], iTransformer [30], Crossformer [46], and PatchTST [31], as well as recent linear models, including DLinear [15], TiDE [34], and RLinear [33]. In addition, we included temporal convolutional network (TCN)-based models, such as SCINet [47] and TimesNet [48], to ensure diverse architectural coverage. All models were evaluated on a set of standard multivariate time-series datasets, spanning various real-world domains such as energy, the economy, transportation, and meteorology. These datasets included ETTh1, ETTh2, ETTm1, ETTm2, Exchange, Weather, ECL, and Traffic, each varying in data scale, temporal resolution, and domain specificity. The detailed statistics for each dataset are summarized in Table 1, including dimensionality, prediction length, total size, and sampling frequency.
This benchmark evaluation serves to establish the overall effectiveness and robustness of our method under general forecasting conditions. It complements the domain-specific analysis in
Section 4.2, where we further assess performance on real-world air quality datasets with structural decomposition and component-level evaluation.
The ETTh1 and ETTh2 datasets [27], which are related to electrical power transformers, span from July 2016 to July 2018 and include load and oil temperature metrics recorded at hourly intervals; the ETTm1 and ETTm2 variants provide the same variables at fifteen-minute resolution. These datasets were selected for their relevance to energy system forecasting, particularly for capturing short-term fluctuations and long-term trends. Similarly, the Electricity (ECL) dataset [28], which tracks the hourly electricity consumption of 321 consumers from 2012 to 2014, provides a detailed view of energy demand and consumption patterns; it is crucial for understanding seasonal and daily usage variations, offering valuable insights for energy forecasting. In addition, the Exchange dataset [20], covering the daily exchange rates of eight nations from 1990 to 2016, was included to test our methodology’s applicability to economic forecasting; it provides long-term financial data and serves as a benchmark for assessing the impact of economic factors on forecasting performance. For transportation forecasting, the Traffic dataset [36], sourced from the California Department of Transportation, provides hourly traffic flow and occupancy data from sensors on major highways, particularly in the San Francisco Bay Area; it helps analyze the relationship between traffic volume and air quality, further testing the versatility of our model. Finally, the Weather dataset [27], consisting of twenty-one meteorological variables recorded every ten minutes during the year 2020, was used to account for environmental variables such as temperature and humidity in air quality forecasting. Each dataset was split into training, validation, and testing subsets according to a predefined chronological scheme, ensuring the integrity and applicability of the evaluation. The diversity of these datasets demonstrates the effectiveness and generalizability of our framework across a wide range of real-world forecasting tasks.
4.1.2. Full Multivariate Time-Series Forecasting Experiments
As shown in
Table 2, our method achieved the best performance among all the models on multivariate time-series forecasting tasks in most cases. Notably, on the ETTm2 dataset, our method achieved improvements of 40.9% in MSE (0.288 to 0.170) and 16.8% in MAE (0.332 to 0.276) compared to the previous SoTA method [
30]. On the ETTh2 dataset, our method obtained improvements of 37.9% in MSE (0.374 to 0.232) and 16.3% in MAE (0.398 to 0.333) compared to the second-ranked RLinear model [
33]. On the Exchange dataset, our method obtained improvements of 21.1% in MSE (0.354 to 0.279) and 8.6% in MAE (0.414 to 0.378) compared to the second-ranked DLinear model [
15]. The larger improvement in MSE relative to MAE indicates that our predictions incur fewer large, quadratically penalized errors, with noticeably fewer outliers and finer predictive detail. Similarly, on the Weather dataset, our method achieved the best MSE, and compared to iTransformer [
30], which had the best MAE among the baselines, it improved MAE by 9.6% (0.258 to 0.233). On the ETTm1 and ETTh1 datasets, a large distributional shift likely pushes the forecast range well beyond the lookback range; this is consistent with the strong performance on these datasets of RLinear [
33], which is designed to handle such shifts. On the Electricity and Traffic datasets, our approach fell slightly short of the global best. This may stem from the complex multivariate inter-relationships in these two datasets (321 variables for Electricity and 862 for Traffic): linear models are, by their structure, ill-suited to modeling interactions among many variables, whereas Transformer-based models excel at this, as evidenced by iTransformer consistently outperforming linear models on both datasets. Nevertheless, our method significantly outperformed the other linear models, achieving the best MSE and MAE among them.
4.1.3. Decomposition Mechanism
As shown in
Figure 3A–C, the scores for the three metrics decreased with deeper decompositions across the four datasets, suggesting that prediction performance gradually improved with the number of decompositions. The improvement from no decomposition to two decompositions was relatively significant, especially for the Exchange dataset, where all three metrics showed obvious improvements. The improvement from two to three decompositions, on the other hand, was minimal and consistent across datasets and metrics. However, the improvement from the triple-decomposition linear method to our method was also significant, even surpassing the improvement from no decomposition to two decompositions. This is due to our unique modeling approach for each series component, which captures component-level features as much as possible, resulting in favorable prediction performance. It also suggests that exploring decomposition methods and component-wise modules is a critical path to improving predictive performance.
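To make the decomposition idea above concrete, the following NumPy sketch shows one way a sliding-average trend, a frequency-domain periodicity, and a residual fluctuation can be separated. The window size, the number of retained frequencies, and the toy series are illustrative assumptions, not the paper’s configuration:

```python
import numpy as np

def tri_decompose(x, window=25, top_k=3):
    """Split a 1-D series into trend, periodicity, and fluctuation.

    Trend: centered moving average (edges padded by replication).
    Periodicity: top-k FFT components of the detrended series.
    Fluctuation: the remaining residual.
    """
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    trend = np.convolve(padded, kernel, mode="valid")

    detrended = x - trend
    spec = np.fft.rfft(detrended)
    # keep only the k strongest frequency bins as the periodic part
    keep = np.argsort(np.abs(spec))[-top_k:]
    mask = np.zeros_like(spec)
    mask[keep] = spec[keep]
    periodicity = np.fft.irfft(mask, n=len(x))

    fluctuation = detrended - periodicity
    return trend, periodicity, fluctuation

t = np.arange(400)
series = 0.01 * t + np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(400)
trend, per, fluc = tri_decompose(series)
# the three components sum back to the original series
assert np.allclose(trend + per + fluc, series)
```

Because the three components sum exactly to the input, each can be forecast and evaluated separately, which is what enables the component-level visualization discussed in this section.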
Figure 3D shows a forecast visualization without applying any decomposition mechanism. In contrast,
Figure 3E shows that with our proposed tri-decomposition mechanism, the prediction of each series component can be accurately evaluated. For example, the trend component performed poorly in the first half of the series, with a directional deviation; the periodicity component showed a perfect fit; and the fluctuation component ignored points that may be outliers. Our proposed tri-decomposition approach greatly enhances interpretability and forecast visualization, paving the way for targeted model improvements.
4.1.4. Quantitative Series Components
To ensure that the performance gain was attributable to our symmetry-aware component modeling rather than to model complexity, we intentionally used a two-layer linear network as the baseline for each component. This provided a clean and fair comparison, allowing us to isolate modeling effectiveness under the same decomposition structure.
Table 3 shows the prediction performance for the three components: trend, periodicity, and fluctuation. “Ours” represents our method, and “Linear” represents a version in which our method was replaced with a two-layer linear network for the given component. The difference between the two methods can be regarded as an ablation experiment on our tri-decomposition linear model.
In our approach, multiscale mixing is used for trend modeling, while RDS, which measures the symmetry of the residual distribution, serves as the primary evaluation metric for the trend component. Our trend-term modeling approach achieved average improvements of 10.5% in RDS (0.076 to 0.068), 5.9% in MSE (0.319 to 0.300), and 10.9% in TSC compared to the linear base model across the eight datasets. This suggests that, from a series-level perspective, the trend term yields superior forecasting performance, with improvements exceeding those observed in point-by-point evaluations.
For the periodicity component, frequency-domain reconstruction is used as the modeling approach, while TSC, which captures residual unpredicted periodic patterns, serves as the primary metric. Compared to the linear approach, our modeling scheme achieved improvements of 1% in MSE, 6.8% in TSC, and 12.8% in RDS. Similar to the trend term, although the improvements in classical metrics were small, the gains in series-wise metrics indicate a significant enhancement in overall forecasting accuracy.
For the fluctuation component, modeled via patch fusion, we use the MAE indicator instead of the MSE indicator. This is because the fluctuation term tends to contain many non-reproducible surprises and outliers, and the MSE, which penalizes outliers quadratically, is more susceptible to such interference. For this component, our method achieved improvements of 1% in MAE, 2% in TSC, and 3% in RDS compared to the linear model. Given the low cost-effectiveness of predicting this component, the result reflects an acceptable balance between computational cost and evaluation gain.
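The quadratic outlier sensitivity of MSE that motivates this metric choice can be verified with a small numeric example (the error values are illustrative):

```python
import numpy as np

# Identical forecasts except one outlier point: MSE inflates
# quadratically while MAE grows only linearly, which is why the
# fluctuation component is scored with MAE here.
truth_len = 100
clean_err = np.full(truth_len, 0.1)     # uniform small error
outlier_err = clean_err.copy()
outlier_err[0] = 5.0                     # one non-reproducible spike

def mse(e):
    return np.mean(e ** 2)

def mae(e):
    return np.mean(np.abs(e))

print(mse(clean_err), mse(outlier_err))  # 0.01 vs ~0.26 (about 26x)
print(mae(clean_err), mae(outlier_err))  # 0.1 vs ~0.149 (about 1.5x)
```

A single spike multiplies the MSE by roughly 26 but the MAE by only about 1.5, so MAE better reflects typical accuracy on an outlier-heavy component.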
4.2. Urban Air Quality Prediction
To evaluate the practical applicability of our proposed framework, we applied it to the task of urban air quality prediction. This task represents a critical real-world challenge that involves complex, multivariate time-series modeling under high temporal and spatial variability. In the remainder of this section, we present the experimental details, including the data-preprocessing procedures, model configurations, and training strategies used in the urban air quality forecasting setting.
4.2.1. Implementation Details
This section outlines the specific procedures, hyperparameters, and computational resources employed in our experiments, ensuring reproducibility and providing a clear framework for comparison with existing methodologies.
We employed a standard data-processing pipeline to prepare our dataset for model training and evaluation. This pipeline included several key steps. First, we converted timestamp information into cyclical features using sine and cosine transformations, capturing both time-of-day and day-of-year periodicities:
$$t_{\sin} = \sin\!\left(\frac{2\pi h}{24}\right),\quad t_{\cos} = \cos\!\left(\frac{2\pi h}{24}\right),\qquad d_{\sin} = \sin\!\left(\frac{2\pi d}{365}\right),\quad d_{\cos} = \cos\!\left(\frac{2\pi d}{365}\right),$$
where $h$ is the hour of day and $d$ is the day of year.
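A minimal sketch of this cyclical encoding, assuming hourly timestamps; the column names are illustrative, not the paper’s:

```python
import numpy as np
import pandas as pd

# Encode each timestamp on the unit circle so that, e.g., 23:00 and
# 00:00 are adjacent rather than maximally distant.
idx = pd.date_range("2023-01-01", periods=48, freq="h")
feats = pd.DataFrame(index=idx)
feats["hour_sin"] = np.sin(2 * np.pi * idx.hour / 24)
feats["hour_cos"] = np.cos(2 * np.pi * idx.hour / 24)
feats["doy_sin"] = np.sin(2 * np.pi * idx.dayofyear / 365)
feats["doy_cos"] = np.cos(2 * np.pi * idx.dayofyear / 365)

# at midnight the hour angle is zero: sin = 0, cos = 1
assert np.isclose(feats["hour_sin"].iloc[0], 0.0)
```

The sine/cosine pair is needed because either function alone maps two different times to the same value.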
The second step involved imputation of missing values. Gaps in the data were addressed using linear interpolation for short-term missing values and forward-filling for longer gaps, ensuring continuity in the time series.
The third step involved normalization. All features were normalized using z-score normalization to ensure comparable scales across different pollutants and feature parameters:
$$x' = \frac{x - \mu}{\sigma},$$
where $\mu$ is the mean and $\sigma$ is the standard deviation of the feature.
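The imputation and normalization steps can be sketched with pandas as follows; the gap-length threshold distinguishing "short" from "long" gaps is an illustrative assumption:

```python
import numpy as np
import pandas as pd

def impute(series: pd.Series, max_short_gap: int = 3) -> pd.Series:
    """Linear interpolation for short gaps, forward-fill for the rest."""
    # `limit` caps how many consecutive NaNs interpolation may fill,
    # so longer gaps are only partially filled here...
    filled = series.interpolate(method="linear", limit=max_short_gap)
    # ...and whatever NaNs remain are carried forward
    return filled.ffill()

def zscore(df: pd.DataFrame) -> pd.DataFrame:
    """Z-score each feature column: (x - mean) / std."""
    return (df - df.mean()) / df.std()

s = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, np.nan, np.nan, 8.0])
print(impute(s).tolist())
```

After these steps every series is gap-free and every feature has zero mean and unit variance, which keeps pollutants on different measurement scales comparable during training.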
To rigorously evaluate our model’s performance and generalization capabilities, we partitioned the dataset into training, validation, and test sets using a 6:2:2 ratio. This split was performed chronologically to maintain the temporal structure of the data.
We set the batch size to 32 to balance computational efficiency and model stability. Training was performed over 100 epochs, chosen empirically to ensure convergence while avoiding overfitting. For time-series forecasting, we used a 168-day input sequence to capture long-term temporal trends and seasonal patterns. The model generated forecasts for four target horizons (24, 48, 96, and 168 days), which are particularly relevant for strategic air quality monitoring and long-term policy planning. To prevent overfitting, we applied a 0.1 dropout rate across all layers of the model. The initial learning rate was set to 0.001 to ensure stable convergence given the model’s complexity and the variability of air quality data across different cities. These hyperparameters were selected based on empirical testing and best practices in environmental time-series forecasting. In the following sections, we present comparisons and ablation studies to assess the impact of these hyperparameter choices on model performance, shedding light on the sensitivity of our approach to different configurations.
4.2.2. Dataset Description
Figure 4 presents the Air Quality Index (AQI) trajectories for four major cities during the three-year period from 2022 to 2024. This three-year snapshot reveals both short-term fluctuations and longer-term trends in air quality. The figure highlights clear seasonal patterns across all cities, with higher AQI values typically observed during the winter months, likely due to increased heating demand and unfavorable meteorological conditions for pollutant dispersion. Notably, while Beijing and Tianjin exhibit similar AQI profiles with distinct peaks, Guangzhou shows a more stable trajectory, reflecting its southern geographic location and relatively consistent climatic conditions.
To further explore the dynamics of air pollution in a specific urban context, we focus on Beijing’s air quality data for the year 2023.
Figure 5a presents the temporal evolution of major pollutants throughout the year. This figure reveals the complex interplay between different pollutants over time. Notably, we observe inverse relationships between some pollutants, such as O3 and PM2.5, which can be attributed to the photochemical processes governing their formation and depletion.
To quantify the relationships between different pollutants, we employed the Pearson Correlation Coefficient (PCC). The PCC is a statistical measure that assesses the strength and direction of the linear relationship between two variables. It is defined as
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}},$$
where $x_i$ and $y_i$ are the individual sample points indexed by $i$, $\bar{x}$ and $\bar{y}$ are the sample means, and $n$ is the number of paired samples.
Figure 5b presents a heatmap of the PCC values between different pollutants in Beijing for the year 2023.
The PCC analysis reveals strong correlations between key pollutants. For example, PM2.5 and PM10 are positively correlated (PCC = 0.86), suggesting shared sources or similar formation processes. In contrast, O3 is negatively correlated with most pollutants (with a PCC as low as −0.83), reflecting complex photochemical interactions. These correlations inform our modeling strategy. Positive correlations indicate that certain pollutants can serve as proxies for others, simplifying the model. Negative correlations emphasize the need for the model to capture antagonistic relationships, particularly in photochemical smog formation. The data analysis highlights the complexity of urban air quality dynamics, reinforcing the need for a sophisticated modeling approach. The temporal patterns, inter-city differences, and pollutant correlations guide the design of our model, ensuring it captures both long-term trends and short-term fluctuations across diverse urban settings.
4.2.3. Multivariate Prediction Performance
As presented in
Table 4, our proposed method demonstrated superior forecasting accuracy across most experimental settings, consistently ranking among the top performers. These results provide valuable insight into the model’s effectiveness across different urban environments and prediction horizons.
On the Beijing dataset, our method achieved the lowest MAE (0.436) and MSE (0.494) at the 24-day prediction horizon. At 48 days, it matched TimesNet in MAE performance and still yielded the best MSE. The proposed model continued to outperform other baselines in terms of MAE at the 96-day and 168-day horizons, although TimesNet exhibited slightly better MSE at 168 days. These results indicate that our method is particularly effective for short-term and medium-term forecasting, maintaining consistent predictive accuracy. On the Tianjin dataset, our model consistently achieved the lowest MAE across all forecast lengths. While TimesNet obtained slightly lower MSE values in some cases, our method remained within the top two for both metrics across most horizons. This observation underscores the strength of our component-wise modeling strategy in capturing complex industrial pollution dynamics with localized variability. For Guangzhou, our model achieved the best MAE and MSE at the 48-day horizon and remained highly competitive across all other time spans. TimesNet performed well in short-term settings; however, our method exhibited more stable performance at longer forecasting horizons, particularly at 168 days, where it achieved the best MAE (0.499) and ranked second in MSE. On the Jinan dataset, our method attained the lowest MAE at the 24-day horizon and performed comparably with strong baselines at 48 and 96 days. At the 168-day horizon, although AutoFormer slightly outperformed our model in both MAE and MSE, the performance gap remained minimal. These results demonstrate the robustness of our method in inland urban regions with more heterogeneous pollution sources.
Overall, our method achieved top-two performance in 15 out of all evaluated MAE and MSE cases, including 6 first-place and 9 second-place results. These outcomes indicate a superior generalization ability compared to existing baseline models. While TimesNet attained strong MSE values across multiple horizons, our method exhibited more consistent MAE performance, suggesting better robustness in capturing the central tendencies of the data. In contrast, Transformer-based models such as AutoFormer and Informer performed less favorably across most scenarios, further demonstrating the practical advantages of our symmetry-aware and computationally efficient linear modeling framework in urban air quality prediction tasks.
5. Conclusions
In this study, we proposed a symmetry-aware framework for air quality prediction that addresses key limitations in conventional forecasting methodologies. Our approach improves accuracy, interpretability, and evaluation consistency by combining time-series decomposition, structured modeling, and component-wise evaluation in a unified design.
First, we introduced a tri-component decomposition strategy that separates trend, periodicity, and fluctuation components from air quality sequences. This design allows for targeted linear modeling of each component, resulting in interpretable and efficient predictions. Second, we implemented specialized modeling schemes that explicitly leverage the structural and statistical symmetries of each component, including multiscale fusion for trends, frequency-domain reconstruction for periodicities, and patch fusion for fluctuations. These lightweight structures enhance the learning of temporal dynamics while maintaining low computational complexity. Third, we designed component-specific evaluation metrics that capture temporal and distributional symmetries that traditional point-wise error metrics fail to reflect, including Residual Distribution Symmetry (RDS) and Temporal Symmetry Correlation (TSC). Through extensive experiments on real-world AQI datasets (e.g., Beijing, Tianjin, Guangzhou, and Jinan) and other public time-series benchmarks, our method consistently delivered competitive or superior results. For example, on the Beijing dataset, our method achieved an MAE of 0.445 at the 96-day prediction horizon, outperforming the best baseline model, PatchFormer, which achieved MAE = 0.464. On the Tianjin dataset, our approach achieved an MAE of 0.466, again outperforming PatchFormer. These results show that our method not only outperforms traditional models in terms of MAE and MSE but also provides interpretable predictions by preserving the symmetry inherent in the data.
The results affirm the growing potential of linear forecasting frameworks, especially when enhanced with domain-aligned inductive biases such as symmetry. The proposed framework demonstrates its ability to handle complex, multivariate time-series data while maintaining both accuracy and interpretability. This work paves the way for the future development of more transparent, adaptive, and theoretically grounded forecasting systems for air quality monitoring and beyond, particularly when extended to non-environmental domains.
Author Contributions
Conceptualization, C.W.; methodology, C.W.; software, Y.J.; validation, Y.J.; formal analysis, C.W.; investigation, P.Q.; resources, P.Q.; data curation, Y.J.; writing—original draft preparation, Y.J.; writing—review and editing, P.Q.; visualization, Y.J.; supervision, P.Q.; project administration, P.Q.; funding acquisition, P.Q. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the project Research on the Impact of Artificial Intelligence Technology on Legal Regulation in Environmental Law Control (Grant 24SKGH373).
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Wang, Y.; Ali, M.A.; Bilal, M.; Qiu, Z.; Mhawish, A.; Almazroui, M.; Shahid, S.; Islam, M.N.; Zhang, Y.; Haque, M.N. Identification of NO2 and SO2 pollution hotspots and sources in Jiangsu Province of China. Remote Sens. 2021, 13, 3742. [Google Scholar] [CrossRef]
- Xing, Q.; Sun, M. Characteristics of PM2.5 and PM10 Spatio-Temporal Distribution and Influencing Meteorological Conditions in Beijing. Atmosphere 2022, 13, 1120. [Google Scholar] [CrossRef]
- Wang, J.; Xie, X.; Fang, C. Temporal and spatial distribution characteristics of atmospheric particulate matter (PM10 and PM2.5) in Changchun and analysis of its influencing factors. Atmosphere 2019, 10, 651. [Google Scholar] [CrossRef]
- Zaib, S.; Lu, J.; Bilal, M. Spatio-temporal characteristics of air quality index (AQI) over Northwest China. Atmosphere 2022, 13, 375. [Google Scholar] [CrossRef]
- Huang, F.; Lu, Z.; Li, L.; Wu, X.; Liu, S.; Yang, Y. Numerical simulation for European and American option of risks in climate change of Three Gorges Reservoir Area. J. Numer. Math. 2022, 30, 23–42. [Google Scholar] [CrossRef]
- Martínez-Álvarez, F.; Troncoso, A.; Asencio-Cortés, G.; Riquelme, J.C. A survey on data mining techniques applied to electricity-related time series forecasting. Energies 2015, 8, 13162–13193. [Google Scholar] [CrossRef]
- Kalhori, M.R.N.; Emami, I.T.; Fallahi, F.; Tabarzadi, M. A data-driven knowledge-based system with reasoning under uncertain evidence for regional long-term hourly load forecasting. Appl. Energy 2022, 314, 118975. [Google Scholar] [CrossRef]
- Li, L.; Su, X.; Zhang, Y.; Lin, Y.; Li, Z. Trend modeling for traffic time series analysis: An integrated study. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3430–3439. [Google Scholar] [CrossRef]
- Afrin, T.; Yodo, N. A Long Short-Term Memory-based correlated traffic data prediction framework. Knowl.-Based Syst. 2022, 237, 107755. [Google Scholar] [CrossRef]
- Duchon, C.; Hale, R. Time Series Analysis in Meteorology and Climatology: An Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
- Nandi, A.; De, A.; Mallick, A.; Middya, A.I.; Roy, S. Attention based long-term air temperature forecasting network: ALTF Net. Knowl.-Based Syst. 2022, 252, 109442. [Google Scholar] [CrossRef]
- Chan, N.H. Time Series: Applications to Finance; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
- Su, L.; Zuo, X.; Li, R.; Wang, X.; Zhao, H.; Huang, B. A Systematic Review for Transformer-based Long-term Series Forecasting. arXiv 2023, arXiv:2310.20218. [Google Scholar] [CrossRef]
- Ye, J.; Li, W.; Zhang, Z.; Zhu, T.; Sun, L.; Du, B. MvTS-library: An open library for deep multivariate time series forecasting. Knowl.-Based Syst. 2024, 283, 111170. [Google Scholar] [CrossRef]
- Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? Proc. AAAI Conf. Artif. Intell. 2023, 37, 11121–11128. [Google Scholar] [CrossRef]
- Wang, Z.; Ruan, S.; Huang, T.; Zhou, H.; Zhang, S.; Wang, Y.; Wang, L.; Huang, Z.; Liu, Y. A lightweight multi-layer perceptron for efficient multivariate time series forecasting. Knowl.-Based Syst. 2024, 288, 111463. [Google Scholar] [CrossRef]
- Berger, V.W.; Zhou, Y. Kolmogorov–smirnov test: Overview. In Wiley Statsref: Statistics Reference Online; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
- Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
- Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.; Liu, J. LSTM network: A deep learning approach for short-term traffic forecast. IET Intell. Transp. Syst. 2017, 11, 68–75. [Google Scholar] [CrossRef]
- Lai, G.; Chang, W.C.; Yang, Y.; Liu, H. Modeling long-and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104. [Google Scholar]
- Wan, H.; Guo, S.; Yin, K.; Liang, X.; Lin, Y. CTS-LSTM: LSTM-based neural networks for correlated time series prediction. Knowl.-Based Syst. 2020, 191, 105239. [Google Scholar] [CrossRef]
- Wang, H.; Peng, J.; Huang, F.; Wang, J.; Chen, J.; Xiao, Y. Micn: Multi-scale local and global context modeling for long-term series forecasting. In Proceedings of the Eleventh International Conference on Learning Representations, Online, 25–29 April 2022. [Google Scholar]
- Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar] [CrossRef]
- Shen, L.; Wang, Y. TCCT: Tightly-coupled convolutional transformer on time series forecasting. Neurocomputing 2022, 480, 131–145. [Google Scholar] [CrossRef]
- Liu, P.; Liu, J.; Wu, K. CNN-FCM: System modeling promotes stability of deep learning in time series prediction. Knowl.-Based Syst. 2020, 203, 106081. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. Proc. AAAI Conf. Artif. Intell. 2021, 35, 11106–11115. [Google Scholar] [CrossRef]
- Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Adv. Neural Inf. Process. Syst. 2019, 32, 1–11. [Google Scholar]
- Kitaev, N.; Kaiser, Ł.; Levskaya, A. Reformer: The efficient transformer. arXiv 2020, arXiv:2001.04451. [Google Scholar] [CrossRef]
- Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. itransformer: Inverted transformers are effective for time series forecasting. arXiv 2023, arXiv:2310.06625. [Google Scholar]
- Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. arXiv 2022, arXiv:2211.14730. [Google Scholar]
- Chen, S.A.; Li, C.L.; Yoder, N.; Arik, S.O.; Pfister, T. Tsmixer: An all-mlp architecture for time series forecasting. arXiv 2023, arXiv:2303.06053. [Google Scholar] [CrossRef]
- Li, Z.; Qi, S.; Li, Y.; Xu, Z. Revisiting Long-term Time Series Forecasting: An Investigation on Affine Mapping. 2023. Available online: https://openreview.net/forum?id=T97kxctihq (accessed on 20 September 2023).
- Das, A.; Kong, W.; Leach, A.; Sen, R.; Yu, R. Long-term forecasting with tide: Time-series dense encoder. arXiv 2023, arXiv:2304.08424. [Google Scholar]
- Challu, C.; Olivares, K.G.; Oreshkin, B.N.; Ramirez, F.G.; Canseco, M.M.; Dubrawski, A. Nhits: Neural hierarchical interpolation for time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 6989–6997. [Google Scholar]
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430. [Google Scholar]
- Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the International Conference on Machine Learning (ICML2022), Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286. [Google Scholar]
- Zhang, F.; Guo, T.; Wang, H. DFNet: Decomposition fusion model for long sequence time-series forecasting. Knowl.-Based Syst. 2023, 277, 110794. [Google Scholar] [CrossRef]
- Ni, R.; Lin, Z.; Wang, S.; Fanti, G. Mixture-of-Linear-Experts for Long-term Time Series Forecasting. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 2–4 May 2024; pp. 4672–4680. [Google Scholar]
- Teng, X.; Zhang, X.; Luo, Z. Multi-scale local cues and hierarchical attention-based LSTM for stock price trend prediction. Neurocomputing 2022, 505, 92–100. [Google Scholar] [CrossRef]
- Li, J.; Shao, X.; Sun, R. A DBN-Based Deep Neural Network Model with Multitask Learning for Online Air Quality Prediction. J. Control. Sci. Eng. 2019, 2019, 5304535. [Google Scholar] [CrossRef]
- Ji, C.; Zhang, C.; Hua, L.; Ma, H.; Nazir, M.S.; Peng, T. A multi-scale evolutionary deep learning model based on CEEMDAN, improved whale optimization algorithm, regularized extreme learning machine and LSTM for AQI prediction. Environ. Res. 2022, 215, 114228. [Google Scholar] [CrossRef]
- Jadon, A.; Patil, A.; Jadon, S. A comprehensive survey of regression based loss functions for time series forecasting. arXiv 2022, arXiv:2211.02989. [Google Scholar] [CrossRef]
- Mahalakshmi, G.; Sridevi, S.; Rajaram, S. A survey on forecasting of time series data. In Proceedings of the 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE’16), Kovilpatti, India, 7–9 January 2016; IEEE: New York, NY, USA, 2016; pp. 1–8. [Google Scholar]
- Rjoob, K.; Bond, R.; Finlay, D.; McGilligan, V.; Leslie, S.J.; Rababah, A.; Iftikhar, A.; Guldenring, D.; Knoery, C.; McShane, A.; et al. Machine learning and the electrocardiogram over two decades: Time series and meta-analysis of the algorithms, evaluation metrics and applications. Artif. Intell. Med. 2022, 132, 102381. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Yan, J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In Proceedings of the Eleventh International Conference on Learning Representations, Online, 25–29 April 2022. [Google Scholar]
- Liu, M.; Zeng, A.; Chen, M.; Xu, Z.; Lai, Q.; Ma, L.; Xu, Q. Scinet: Time series modeling and forecasting with sample convolution and interaction. Adv. Neural Inf. Process. Syst. 2022, 35, 5816–5828. [Google Scholar]
- Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; Long, M. Timesnet: Temporal 2d-variation modeling for general time series analysis. In Proceedings of the Eleventh International Conference on Learning Representations, Online, 25–29 April 2022. [Google Scholar]
Figure 1.
Decomposition–modeling–evaluation framework for air quality prediction.
Figure 2.
Schematic of our proposed component-wise linear model. The trend decomposition is based on sliding averaging, while the periodicity decomposition is based on frequency-domain segmentation. (a) The trend term is predicted using multiscale mixing. (b) The periodicity term is reconstructed using frequency-domain reconstruction. (c) The fluctuation term is modeled using patch fusion.
Figure 3.
(A–C) Metrics for different decomposition methods on four datasets using an input length of 96 and an output length of 336. MSE is shown in blue, TSC in green, and RDS in red. The colors from light to dark represent the no-decomposition linear method, double-decomposition linear method, tri-decomposition linear method, and our method. (D,E) Forecast visualizations on the Electricity dataset.
Figure 4.
AQI trends for Beijing, Tianjin, Guangzhou, and Jinan (2022–2024).
Figure 5.
(a) Pollutant concentration trends in Beijing (2023). (b) Pearson Correlation Coefficient heatmap for pollutants in Beijing (2023).
Table 1.
Details of time-series experimental datasets.
Dataset | Dim. | Prediction Length | Dataset Size | Frequency | Information |
---|---|---|---|---|---|
ETTh1, ETTh2 | 7 | {96; 192; 336; 720} | (8545; 2881; 2881) | Hourly | Transformer |
ETTm1, ETTm2 | 7 | {96; 192; 336; 720} | (34,465; 11,521; 11,521) | 15 min | Transformer |
Exchange | 8 | {96; 192; 336; 720} | (5120; 665; 1422) | Daily | Economy |
Weather | 21 | {96; 192; 336; 720} | (36,792; 5271; 10,540) | 10 min | Weather |
ECL | 321 | {96; 192; 336; 720} | (18,317; 2633; 5261) | Hourly | Electricity |
Traffic | 862 | {96; 192; 336; 720} | (12,185; 1757; 3509) | Hourly | Transportation |
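The triples in the Dataset Size column above are (train; validation; test) lengths produced by a strictly chronological split, e.g. (8545; 2881; 2881) for ETTh1. A hedged sketch of such an order-preserving cut (the window-overlap conventions of the published splits are a detail this does not model):

```python
def chrono_split(series, sizes):
    """Chronological train/val/test split, no shuffling.

    `sizes` is a (train, val, test) tuple of lengths, matching the
    Dataset Size column of Table 1. Evaluation data always comes
    strictly after the training data in time.
    """
    n_train, n_val, n_test = sizes
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:n_train + n_val + n_test]
    return train, val, test
```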
Table 2.
Multivariate time-series forecasting results. Avg denotes the average over output lengths of 96, 192, 336, and 720, with an input length of 96. Scores are standard MSE (Mean Squared Error) and MAE (Mean Absolute Error); lower is better. Red blocks mark the best result among all methods, and green blocks the best result among the linear methods. Settings follow the mainstream experimental setup, except for “Ours”, which follows iTransformer.
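Tables 2 and 4 score forecasts with the standard MSE and MAE, where lower is better. For reference, minimal NumPy implementations of the two metrics:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of squared residuals.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    # Mean Absolute Error: average of absolute residuals.
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```

MSE penalizes large errors quadratically, so it is more sensitive to occasional pollution spikes, while MAE weights all errors linearly; reporting both gives a more balanced picture.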
Dataset | Horizon | Ours MSE | Ours MAE | RLinear MSE | RLinear MAE | DLinear MSE | DLinear MAE | TiDE MSE | TiDE MAE | iTransformer MSE | iTransformer MAE | TimesNet MSE | TimesNet MAE | PatchTST MSE | PatchTST MAE | Crossformer MSE | Crossformer MAE | SCINet MSE | SCINet MAE | FEDformer MSE | FEDformer MAE | Autoformer MSE | Autoformer MAE |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ETTm1 | 96 | 0.369 | 0.397 | 0.355 | 0.376 | 0.345 | 0.372 | 0.364 | 0.387 | 0.334 | 0.368 | 0.338 | 0.375 | 0.329 | 0.367 | 0.404 | 0.426 | 0.418 | 0.438 | 0.379 | 0.419 | 0.505 | 0.475 |
192 | 0.422 | 0.423 | 0.391 | 0.392 | 0.380 | 0.389 | 0.398 | 0.404 | 0.377 | 0.391 | 0.374 | 0.387 | 0.367 | 0.385 | 0.450 | 0.451 | 0.439 | 0.450 | 0.426 | 0.441 | 0.553 | 0.496 |
336 | 0.484 | 0.458 | 0.424 | 0.415 | 0.413 | 0.413 | 0.428 | 0.425 | 0.426 | 0.420 | 0.410 | 0.411 | 0.399 | 0.410 | 0.532 | 0.515 | 0.490 | 0.485 | 0.445 | 0.459 | 0.621 | 0.537 |
720 | 0.554 | 0.504 | 0.487 | 0.450 | 0.474 | 0.453 | 0.487 | 0.461 | 0.491 | 0.459 | 0.478 | 0.450 | 0.454 | 0.439 | 0.666 | 0.589 | 0.595 | 0.550 | 0.543 | 0.490 | 0.671 | 0.561 |
Avg | 0.457 | 0.445 | 0.414 | 0.407 | 0.403 | 0.407 | 0.419 | 0.419 | 0.407 | 0.410 | 0.400 | 0.406 | 0.387 | 0.400 | 0.513 | 0.496 | 0.485 | 0.481 | 0.448 | 0.452 | 0.588 | 0.517 |
ETTm2 | 96 | 0.116 | 0.227 | 0.182 | 0.265 | 0.193 | 0.292 | 0.207 | 0.305 | 0.180 | 0.264 | 0.187 | 0.267 | 0.175 | 0.259 | 0.287 | 0.366 | 0.286 | 0.377 | 0.203 | 0.287 | 0.255 | 0.339 |
192 | 0.146 | 0.258 | 0.246 | 0.304 | 0.284 | 0.362 | 0.290 | 0.364 | 0.250 | 0.309 | 0.249 | 0.309 | 0.241 | 0.302 | 0.414 | 0.492 | 0.399 | 0.445 | 0.269 | 0.328 | 0.281 | 0.340 |
336 | 0.183 | 0.292 | 0.307 | 0.342 | 0.369 | 0.427 | 0.377 | 0.422 | 0.311 | 0.348 | 0.321 | 0.351 | 0.305 | 0.343 | 0.597 | 0.542 | 0.637 | 0.591 | 0.325 | 0.366 | 0.339 | 0.372 |
720 | 0.237 | 0.331 | 0.407 | 0.398 | 0.554 | 0.522 | 0.558 | 0.524 | 0.412 | 0.407 | 0.408 | 0.403 | 0.402 | 0.400 | 1.730 | 1.042 | 0.960 | 0.735 | 0.421 | 0.415 | 0.433 | 0.432 |
Avg | 0.170 | 0.276 | 0.286 | 0.327 | 0.350 | 0.401 | 0.358 | 0.404 | 0.288 | 0.332 | 0.291 | 0.333 | 0.281 | 0.326 | 0.757 | 0.610 | 0.571 | 0.537 | 0.305 | 0.349 | 0.327 | 0.371 |
ETTh1 | 96 | 0.424 | 0.435 | 0.386 | 0.395 | 0.386 | 0.400 | 0.479 | 0.464 | 0.386 | 0.405 | 0.384 | 0.402 | 0.414 | 0.419 | 0.423 | 0.448 | 0.654 | 0.599 | 0.376 | 0.419 | 0.449 | 0.459 |
192 | 0.477 | 0.473 | 0.437 | 0.424 | 0.437 | 0.432 | 0.525 | 0.492 | 0.441 | 0.436 | 0.436 | 0.429 | 0.460 | 0.445 | 0.471 | 0.474 | 0.719 | 0.631 | 0.420 | 0.448 | 0.500 | 0.482 |
336 | 0.530 | 0.511 | 0.479 | 0.446 | 0.481 | 0.459 | 0.565 | 0.515 | 0.487 | 0.458 | 0.491 | 0.469 | 0.501 | 0.466 | 0.570 | 0.546 | 0.778 | 0.659 | 0.459 | 0.465 | 0.521 | 0.496 |
720 | 0.662 | 0.593 | 0.481 | 0.470 | 0.519 | 0.516 | 0.594 | 0.558 | 0.503 | 0.491 | 0.521 | 0.500 | 0.500 | 0.488 | 0.653 | 0.621 | 0.836 | 0.699 | 0.506 | 0.507 | 0.514 | 0.512 |
Avg | 0.523 | 0.503 | 0.446 | 0.434 | 0.456 | 0.452 | 0.541 | 0.507 | 0.454 | 0.447 | 0.458 | 0.450 | 0.469 | 0.454 | 0.529 | 0.522 | 0.747 | 0.647 | 0.440 | 0.460 | 0.496 | 0.487 |
ETTh2 | 96 | 0.177 | 0.284 | 0.288 | 0.338 | 0.333 | 0.387 | 0.400 | 0.440 | 0.297 | 0.349 | 0.340 | 0.374 | 0.302 | 0.348 | 0.745 | 0.584 | 0.707 | 0.621 | 0.358 | 0.397 | 0.346 | 0.388 |
192 | 0.217 | 0.319 | 0.374 | 0.390 | 0.477 | 0.476 | 0.528 | 0.509 | 0.380 | 0.400 | 0.402 | 0.414 | 0.388 | 0.400 | 0.877 | 0.656 | 0.860 | 0.689 | 0.429 | 0.439 | 0.456 | 0.452 |
336 | 0.243 | 0.343 | 0.415 | 0.426 | 0.594 | 0.541 | 0.643 | 0.571 | 0.428 | 0.432 | 0.452 | 0.452 | 0.426 | 0.433 | 1.043 | 0.731 | 1.000 | 0.744 | 0.496 | 0.487 | 0.482 | 0.486 |
720 | 0.294 | 0.389 | 0.420 | 0.440 | 0.831 | 0.657 | 0.874 | 0.679 | 0.427 | 0.445 | 0.462 | 0.468 | 0.431 | 0.446 | 1.104 | 0.763 | 1.249 | 0.838 | 0.463 | 0.474 | 0.515 | 0.511 |
Avg | 0.232 | 0.333 | 0.374 | 0.398 | 0.559 | 0.515 | 0.611 | 0.550 | 0.383 | 0.407 | 0.414 | 0.427 | 0.387 | 0.407 | 0.942 | 0.684 | 0.954 | 0.723 | 0.437 | 0.449 | 0.450 | 0.459 |
ECL | 96 | 0.182 | 0.265 | 0.201 | 0.281 | 0.197 | 0.282 | 0.237 | 0.329 | 0.148 | 0.240 | 0.168 | 0.272 | 0.181 | 0.270 | 0.219 | 0.314 | 0.247 | 0.345 | 0.193 | 0.308 | 0.201 | 0.317 |
192 | 0.184 | 0.269 | 0.201 | 0.283 | 0.196 | 0.285 | 0.236 | 0.330 | 0.162 | 0.253 | 0.184 | 0.289 | 0.188 | 0.274 | 0.231 | 0.322 | 0.257 | 0.355 | 0.201 | 0.315 | 0.222 | 0.334 |
336 | 0.196 | 0.284 | 0.215 | 0.298 | 0.209 | 0.301 | 0.249 | 0.344 | 0.178 | 0.269 | 0.198 | 0.300 | 0.204 | 0.293 | 0.246 | 0.337 | 0.269 | 0.369 | 0.214 | 0.329 | 0.231 | 0.338 |
720 | 0.231 | 0.316 | 0.257 | 0.331 | 0.245 | 0.333 | 0.284 | 0.373 | 0.225 | 0.317 | 0.220 | 0.320 | 0.246 | 0.324 | 0.280 | 0.363 | 0.299 | 0.390 | 0.246 | 0.355 | 0.254 | 0.361 |
Avg | 0.198 | 0.283 | 0.219 | 0.298 | 0.212 | 0.300 | 0.251 | 0.344 | 0.178 | 0.270 | 0.192 | 0.295 | 0.205 | 0.290 | 0.244 | 0.334 | 0.268 | 0.365 | 0.214 | 0.327 | 0.227 | 0.338 |
Exchange | 96 | 0.106 | 0.233 | 0.093 | 0.217 | 0.088 | 0.218 | 0.094 | 0.218 | 0.086 | 0.206 | 0.107 | 0.234 | 0.088 | 0.205 | 0.256 | 0.367 | 0.267 | 0.396 | 0.148 | 0.278 | 0.197 | 0.323 |
192 | 0.152 | 0.285 | 0.184 | 0.307 | 0.176 | 0.315 | 0.184 | 0.307 | 0.177 | 0.299 | 0.226 | 0.344 | 0.176 | 0.299 | 0.470 | 0.509 | 0.351 | 0.459 | 0.271 | 0.315 | 0.300 | 0.369 |
336 | 0.260 | 0.390 | 0.351 | 0.432 | 0.313 | 0.427 | 0.349 | 0.431 | 0.331 | 0.417 | 0.367 | 0.448 | 0.301 | 0.397 | 1.268 | 0.883 | 1.324 | 0.853 | 0.460 | 0.427 | 0.509 | 0.524 |
720 | 0.601 | 0.606 | 0.886 | 0.714 | 0.839 | 0.695 | 0.852 | 0.698 | 0.847 | 0.691 | 0.964 | 0.746 | 0.901 | 0.714 | 1.767 | 1.068 | 1.058 | 0.797 | 1.195 | 0.695 | 1.447 | 0.941 |
Avg | 0.279 | 0.378 | 0.378 | 0.417 | 0.354 | 0.414 | 0.370 | 0.413 | 0.360 | 0.403 | 0.416 | 0.443 | 0.367 | 0.404 | 0.940 | 0.707 | 0.750 | 0.626 | 0.519 | 0.429 | 0.613 | 0.539 |
Traffic | 96 | 0.638 | 0.378 | 0.649 | 0.389 | 0.650 | 0.396 | 0.805 | 0.493 | 0.395 | 0.268 | 0.593 | 0.321 | 0.462 | 0.295 | 0.522 | 0.290 | 0.788 | 0.499 | 0.587 | 0.366 | 0.613 | 0.388 |
192 | 0.595 | 0.354 | 0.601 | 0.366 | 0.598 | 0.370 | 0.756 | 0.474 | 0.417 | 0.276 | 0.617 | 0.336 | 0.466 | 0.296 | 0.530 | 0.293 | 0.789 | 0.505 | 0.604 | 0.373 | 0.616 | 0.382 |
336 | 0.602 | 0.357 | 0.609 | 0.369 | 0.605 | 0.373 | 0.762 | 0.477 | 0.433 | 0.283 | 0.629 | 0.336 | 0.482 | 0.304 | 0.558 | 0.305 | 0.797 | 0.508 | 0.621 | 0.383 | 0.622 | 0.337 |
720 | 0.641 | 0.378 | 0.647 | 0.647 | 0.645 | 0.394 | 0.719 | 0.449 | 0.467 | 0.302 | 0.640 | 0.350 | 0.514 | 0.322 | 0.589 | 0.328 | 0.841 | 0.523 | 0.626 | 0.382 | 0.660 | 0.408 |
Avg | 0.619 | 0.366 | 0.626 | 0.378 | 0.625 | 0.383 | 0.760 | 0.473 | 0.428 | 0.282 | 0.620 | 0.336 | 0.481 | 0.304 | 0.550 | 0.304 | 0.804 | 0.509 | 0.610 | 0.376 | 0.628 | 0.379 |
Weather | 96 | 0.158 | 0.214 | 0.192 | 0.232 | 0.196 | 0.255 | 0.202 | 0.261 | 0.174 | 0.214 | 0.172 | 0.220 | 0.177 | 0.218 | 0.158 | 0.230 | 0.221 | 0.306 | 0.217 | 0.296 | 0.266 | 0.336 |
192 | 0.201 | 0.259 | 0.240 | 0.271 | 0.237 | 0.296 | 0.242 | 0.298 | 0.221 | 0.254 | 0.219 | 0.261 | 0.225 | 0.259 | 0.206 | 0.277 | 0.261 | 0.340 | 0.276 | 0.336 | 0.307 | 0.367 |
336 | 0.250 | 0.298 | 0.292 | 0.307 | 0.283 | 0.335 | 0.287 | 0.335 | 0.278 | 0.296 | 0.280 | 0.306 | 0.278 | 0.297 | 0.272 | 0.335 | 0.309 | 0.378 | 0.339 | 0.380 | 0.359 | 0.359 |
720 | 0.325 | 0.351 | 0.364 | 0.353 | 0.345 | 0.381 | 0.351 | 0.386 | 0.358 | 0.347 | 0.365 | 0.359 | 0.354 | 0.348 | 0.398 | 0.418 | 0.377 | 0.427 | 0.403 | 0.428 | 0.419 | 0.428 |
Avg | 0.233 | 0.280 | 0.272 | 0.291 | 0.265 | 0.317 | 0.271 | 0.320 | 0.258 | 0.278 | 0.259 | 0.287 | 0.259 | 0.281 | 0.259 | 0.315 | 0.292 | 0.363 | 0.309 | 0.360 | 0.338 | 0.382 |
1st Count | 19 | 15 | 4 | 5 | 0 | 0 | 0 | 0 | 11 | 15 | 2 | 0 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Table 3.
Comparison of results between the triple-decomposition linear model and our model, showing the performance of the models across the trend, periodicity, and fluctuation components. MAE (Mean Absolute Error), TSC (Temporal Symmetry Correlation), and RDS (Residual Distribution Symmetry) are used to evaluate model accuracy, symmetry, and residual distribution. Values in bold indicate the best performance for each component or method.
Dataset | Trend—Ours MSE | TSC | RDS | Trend—Linear MSE | TSC | RDS | Periodicity—Ours MSE | TSC | RDS | Periodicity—Linear MSE | TSC | RDS | Fluctuation—Ours MAE | TSC | RDS | Fluctuation—Linear MAE | TSC | RDS |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ETTm1 | 0.418 | 0.154 | 0.083 | 0.419 | 0.156 | 0.086 | 0.042 | 0.106 | 0.066 | 0.040 | 0.114 | 0.071 | 0.200 | 0.048 | 0.085 | 0.199 | 0.050 | 0.087 |
ETTm2 | 0.194 | 0.152 | 0.093 | 0.196 | 0.159 | 0.095 | 0.008 | 0.062 | 0.023 | 0.007 | 0.068 | 0.032 | 0.112 | 0.034 | 0.076 | 0.114 | 0.035 | 0.080 |
ETTh1 | 0.330 | 0.158 | 0.007 | 0.316 | 0.163 | 0.009 | 0.390 | 0.201 | 0.114 | 0.404 | 0.218 | 0.116 | 0.586 | 0.142 | 0.091 | 0.589 | 0.143 | 0.091 |
ETTh2 | 0.209 | 0.160 | 0.073 | 0.212 | 0.162 | 0.079 | 0.031 | 0.173 | 0.035 | 0.030 | 0.189 | 0.037 | 0.203 | 0.099 | 0.048 | 0.206 | 0.100 | 0.049 |
Exchange | 0.646 | 0.273 | 0.052 | 0.793 | 0.283 | 0.068 | 0.004 | 0.038 | 0.005 | 0.004 | 0.046 | 0.010 | 0.055 | 0.029 | 0.052 | 0.058 | 0.031 | 0.055 |
Weather | 0.291 | 0.173 | 0.121 | 0.294 | 0.177 | 0.129 | 0.007 | 0.049 | 0.070 | 0.007 | 0.054 | 0.092 | 0.075 | 0.026 | 0.236 | 0.076 | 0.027 | 0.245 |
ECL | 0.132 | 0.134 | 0.057 | 0.135 | 0.150 | 0.064 | 0.618 | 0.240 | 0.054 | 0.615 | 0.248 | 0.059 | 0.718 | 0.221 | 0.050 | 0.720 | 0.222 | 0.050 |
Traffic | 0.183 | 0.101 | 0.065 | 0.189 | 0.106 | 0.078 | 0.857 | 0.211 | 0.038 | 0.867 | 0.223 | 0.044 | 0.801 | 0.168 | 0.077 | 0.805 | 0.171 | 0.076 |
Avg | 0.300 | 0.163 | 0.068 | 0.319 | 0.183 | 0.076 | 0.244 | 0.135 | 0.050 | 0.246 | 0.145 | 0.057 | 0.343 | 0.095 | 0.089 | 0.345 | 0.097 | 0.092 |
Table 4.
Multivariate prediction performance. Bold text indicates the best performance among the compared models.
City | Horizon | Ours MAE | Ours MSE | PatchFormer MAE | PatchFormer MSE | TimesNet MAE | TimesNet MSE | Autoformer MAE | Autoformer MSE | N-BEATS MAE | N-BEATS MSE | Informer MAE | Informer MSE | LSTNet MAE | LSTNet MSE |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Beijing | 24 | 0.436 | 0.494 | 0.436 | 0.497 | 0.433 | 0.494 | 0.445 | 0.507 | 0.453 | 0.519 | 0.487 | 0.512 | 0.517 | 0.545 |
48 | 0.447 | 0.496 | 0.452 | 0.503 | 0.447 | 0.509 | 0.459 | 0.537 | 0.469 | 0.546 | 0.487 | 0.511 | 0.513 | 0.546 |
96 | 0.443 | 0.505 | 0.464 | 0.511 | 0.456 | 0.509 | 0.472 | 0.527 | 0.478 | 0.551 | 0.489 | 0.515 | 0.532 | 0.581 |
168 | 0.452 | 0.513 | 0.485 | 0.531 | 0.469 | 0.510 | 0.462 | 0.521 | 0.498 | 0.562 | 0.499 | 0.517 | 0.541 | 0.609 |
Tianjin | 24 | 0.462 | 0.512 | 0.445 | 0.509 | 0.435 | 0.504 | 0.446 | 0.521 | 0.468 | 0.529 | 0.463 | 0.534 | 0.491 | 0.529 |
48 | 0.457 | 0.512 | 0.459 | 0.530 | 0.449 | 0.522 | 0.462 | 0.531 | 0.461 | 0.521 | 0.469 | 0.531 | 0.492 | 0.524 |
96 | 0.466 | 0.523 | 0.469 | 0.529 | 0.457 | 0.539 | 0.472 | 0.534 | 0.478 | 0.544 | 0.474 | 0.535 | 0.504 | 0.532 |
168 | 0.471 | 0.516 | 0.481 | 0.523 | 0.473 | 0.537 | 0.492 | 0.545 | 0.496 | 0.545 | 0.476 | 0.533 | 0.519 | 0.535 |
Guangzhou | 24 | 0.521 | 0.495 | 0.507 | 0.498 | 0.510 | 0.494 | 0.542 | 0.516 | 0.547 | 0.517 | 0.561 | 0.523 | 0.564 | 0.542 |
48 | 0.502 | 0.523 | 0.497 | 0.524 | 0.502 | 0.523 | 0.555 | 0.545 | 0.564 | 0.548 | 0.564 | 0.561 | 0.568 | 0.573 |
96 | 0.510 | 0.528 | 0.497 | 0.526 | 0.509 | 0.533 | 0.541 | 0.541 | 0.561 | 0.553 | 0.585 | 0.566 | 0.596 | 0.578 |
168 | 0.499 | 0.521 | 0.519 | 0.537 | 0.523 | 0.548 | 0.566 | 0.580 | 0.556 | 0.551 | 0.577 | 0.599 | 0.615 | 0.616 |
Jinan | 24 | 0.471 | 0.508 | 0.473 | 0.518 | 0.473 | 0.508 | 0.471 | 0.521 | 0.475 | 0.527 | 0.471 | 0.523 | 0.472 | 0.551 |
48 | 0.474 | 0.516 | 0.477 | 0.516 | 0.472 | 0.507 | 0.471 | 0.510 | 0.479 | 0.532 | 0.474 | 0.529 | 0.482 | 0.556 |
96 | 0.502 | 0.545 | 0.498 | 0.543 | 0.497 | 0.505 | 0.477 | 0.523 | 0.488 | 0.546 | 0.481 | 0.525 | 0.502 | 0.583 |
168 | 0.501 | 0.554 | 0.508 | 0.522 | 0.509 | 0.511 | 0.485 | 0.535 | 0.517 | 0.546 | 0.497 | 0.545 | 0.521 | 0.605 |
1st Count | 6 | 9 | 3 | 1 | 5 | 9 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |