TFHA: A Time–Frequency Harmonic Attention Framework for Analyzing Digital Management Strategy Impact Mechanisms
Abstract
1. Introduction
- Fourier Frequency Analysis (FFA): This module partitions brand strategies into semantically coherent clusters (e.g., product-focused, relationship-oriented, and communication-based) and models their influence on performance indicators across different temporal windows.
- Temporal Feature Embedding Mechanism (TFE): This mechanism incorporates both static calendar features (e.g., holidays and seasons) and dynamic event signals (e.g., crisis response and viral trends) to contextualize the temporal relevance of each strategy.
- Contrastive Time–Frequency Representation Enhancement (CTF-RE): By aligning textual descriptions, customer sentiment, and investment levels of brand strategies, this module learns domain-aware strategy embeddings that improve interpretability and predictive accuracy.
2. Related Work
2.1. Brand Management Strategies and Performance Metrics
2.2. Deep Learning Approaches for Time-Series Forecasting
2.3. Temporal Dynamics in Brand Performance Evaluation
2.4. Representation Learning for Strategic Behavior Modeling
2.5. Multi-View and Contrastive Learning in Management Research
3. Method
3.1. Multi-Scale Periodicity Extraction via Fourier Frequency Analysis
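As a rough illustration of the FFA idea described in the Introduction (extracting dominant periodicities from a performance series with the discrete Fourier transform), the sketch below picks the top-k frequency bins of a batch of series. This is a minimal sketch, assuming a PyTorch implementation; the function name, the top-k heuristic, and the batch-averaged amplitude spectrum are illustrative choices, not the authors' code.

```python
import torch

def dominant_periods(x: torch.Tensor, k: int = 3):
    """Return the k dominant periods of a batch of series.

    x: tensor of shape (batch, time). The top-k selection over the
    amplitude spectrum is an illustrative heuristic.
    """
    spec = torch.fft.rfft(x, dim=-1)           # real FFT over the time axis
    amp = spec.abs().mean(dim=0)               # amplitude, averaged over batch
    amp[0] = 0.0                               # ignore the DC (zero-frequency) bin
    top = torch.topk(amp, k).indices           # k strongest frequency bins
    periods = x.shape[-1] // top.clamp(min=1)  # convert bins to period lengths
    return periods, amp[top]

# Example: a daily series with a weekly (period-7) cycle over 364 steps.
t = torch.arange(0, 364, dtype=torch.float32)
x = torch.sin(2 * torch.pi * t / 7).unsqueeze(0)
print(dominant_periods(x))  # the strongest recovered period should be 7
```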
3.2. Temporal Feature Embedding Mechanism
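To make the TFE mechanism concrete, the following sketch embeds the static calendar features (month, weekday, holiday flag) and a dynamic event-intensity signal described in the Introduction, and sums them into one temporal context vector per timestep. A minimal sketch, assuming PyTorch; the feature set, vocabulary sizes, and additive fusion are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class TemporalFeatureEmbedding(nn.Module):
    """Sum static calendar embeddings with a dynamic event signal."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.month = nn.Embedding(12, d_model)   # static: month of year
        self.weekday = nn.Embedding(7, d_model)  # static: day of week
        self.holiday = nn.Embedding(2, d_model)  # static: holiday indicator
        self.event = nn.Linear(1, d_model)       # dynamic: event intensity

    def forward(self, month, weekday, holiday, event_intensity):
        # Each categorical input has shape (batch, time); intensity is real-valued.
        return (self.month(month) + self.weekday(weekday)
                + self.holiday(holiday)
                + self.event(event_intensity.unsqueeze(-1)))

# Example: embed one week of daily timestamps for a single series.
emb = TemporalFeatureEmbedding(d_model=512)
month = torch.zeros(1, 7, dtype=torch.long)          # January
weekday = torch.arange(7).unsqueeze(0)               # Mon..Sun
holiday = torch.tensor([[0, 0, 0, 0, 0, 1, 1]])      # weekend flagged
intensity = torch.rand(1, 7)                         # e.g. viral-trend score
print(emb(month, weekday, holiday, intensity).shape)  # (1, 7, 512)
```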
3.3. Contrastive Time–Frequency Representation Enhancement (CTF-RE)
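CTF-RE aligns multiple views of the same strategy. A common way to realize such alignment is an InfoNCE-style contrastive loss; the sketch below applies it to paired time-domain and frequency-domain embeddings. The pairing scheme and temperature are assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(z_time: torch.Tensor, z_freq: torch.Tensor, tau: float = 0.1):
    """Pull together time- and frequency-view embeddings of the same
    strategy; push apart embeddings of different strategies.

    z_time, z_freq: (batch, dim) embeddings of the two views.
    """
    z_time = F.normalize(z_time, dim=-1)
    z_freq = F.normalize(z_freq, dim=-1)
    logits = z_time @ z_freq.t() / tau        # pairwise cosine similarities
    labels = torch.arange(z_time.shape[0])    # the i-th pair is the positive
    return F.cross_entropy(logits, labels)

# Example: 64 strategies, 512-dimensional embeddings from the two views.
loss = info_nce(torch.randn(64, 512), torch.randn(64, 512))
print(loss.item())
```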
4. Experimental Setup
4.1. Dataset Description
- TravelBrandOps: contains 2300 records of daily operational metrics including booking volume, campaign frequency, and engagement indices.
- SocialEngageBuzz: contains 1800 samples of brand-level social media activity, user interactions, and sentiment dynamics.
- BookingTrendDaily: contains 2100 time-series instances of daily bookings, cancellations, and promotion responses.
- CustomerSentiment: contains 1500 samples of customer reviews, satisfaction ratings, and PR-related responses.
4.2. Experimental Settings and Evaluation Metrics
- DLinear [22]: A linear decomposition-based model that separates trend and seasonal components.
- Informer [23]: An efficient Transformer variant designed for long-sequence forecasting.
- Autoformer [24]: An auto-correlation mechanism-based model capturing seasonal-trend decomposition.
- ETSformer [25]: A model integrating exponential smoothing into Transformers for robust temporal forecasting.
- PatchTST [26]: A patch-based Transformer utilizing sub-series segmentation to improve local temporal representation.
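The experiments report MSE and MAE for each model and horizon (see the result tables below); both follow their standard definitions, shown here as a minimal NumPy sketch for reference.

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error: average of squared residuals."""
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error: average of absolute residuals."""
    return float(np.mean(np.abs(y_true - y_pred)))

y = np.array([0.8, 1.1, 0.9])
p = np.array([0.7, 1.0, 1.1])
print(mse(y, p), mae(y, p))  # 0.02, ~0.1333
```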
4.3. Results and Discussion
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Smith, M.J.; Buckton, C.; Patterson, C.; Hilton, S. User-generated content and influencer marketing involving e-cigarettes on social media: A scoping review and content analysis of YouTube and Instagram. BMC Public Health 2023, 23, 530.
2. Qi, X.; Hou, K.; Liu, T.; Yu, Z.; Hu, S.; Ou, W. From known to unknown: Knowledge-guided transformer for time-series sales forecasting in Alibaba. arXiv 2021, arXiv:2109.08381.
3. Cheng, Y.; Liu, Z.; Morimoto, Y. Attention-based SeriesNet: An attention-based hybrid neural network model for conditional time series forecasting. Information 2020, 11, 305.
4. Liu, D.; Wang, T.; Liu, S.; Wang, R.; Yao, S.; Abdelzaher, T. Contrastive self-supervised representation learning for sensing signals from the time-frequency perspective. In Proceedings of the 2021 International Conference on Computer Communications and Networks (ICCCN), Athens, Greece, 19–22 July 2021; IEEE: New York, NY, USA, 2021; pp. 1–10.
5. Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. CoST: Contrastive learning of disentangled seasonal-trend representations for time series forecasting. arXiv 2022, arXiv:2202.01575.
6. Keller, K.L. Conceptualizing, measuring, and managing customer-based brand equity. J. Mark. 1993, 57, 1–22.
7. Wang, H.; Wei, Y.; Yu, C. Global brand equity model: Combining customer-based with product-market outcome approaches. J. Prod. Brand Manag. 2008, 17, 305–316.
8. Romanelli, E.; Tushman, M.L. Inertia, environments, and strategic choice: A quasi-experimental design for comparative-longitudinal research. Manag. Sci. 1986, 32, 608–621.
9. Weisberg, S. Applied Linear Regression; John Wiley & Sons: Hoboken, NJ, USA, 2005; Volume 528.
10. Stock, J.H.; Watson, M.W. Vector autoregressions. J. Econ. Perspect. 2001, 15, 101–115.
11. Contreras, J.; Espinola, R.; Nogales, F.J.; Conejo, A.J. ARIMA models to predict next-day electricity prices. IEEE Trans. Power Syst. 2003, 18, 1014–1020.
12. Miller, J.A.; Aldosari, M.; Saeed, F.; Barna, N.H.; Rana, S.; Arpinar, I.B.; Liu, N. A survey of deep learning and foundation models for time series forecasting. arXiv 2024, arXiv:2401.13912.
13. Kamola, M.; Arabas, P. Improving time-series demand modeling in hospitality business by analytics of public event datasets. IEEE Access 2020, 8, 53666–53677.
14. Wang, H.; Zhong, P.-A.; Zsoter, E.; Prudhomme, C.; Pappenberger, F.; Xu, B. Regional adaptability of global and regional hydrological forecast system. Water 2023, 15, 347.
15. Ma, Y.; Ye, C.; Wu, Z.; Wang, X.; Cao, Y.; Chua, T.S. Context-aware event forecasting via graph disentanglement. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 1643–1652.
16. Pei, J.; Omar, M.; Al Dabel, M.M.; Mumtaz, S.; Liu, W. Federated few-shot learning with intelligent transportation cross-regional adaptation. IEEE Trans. Intell. Transp. Syst. 2025.
17. Kang, Y.; Cai, Z.; Tan, C.W.; Huang, Q.; Liu, H. Natural language processing (NLP) in management research: A literature review. J. Manag. Anal. 2020, 7, 139–172.
18. Liapis, C.M.; Kotsiantis, S. Temporal convolutional networks and BERT-based multi-label emotion analysis for financial forecasting. Information 2023, 14, 596.
19. Ma, F.; Wang, S.; Xie, T.; Sun, C. Regional logistics express demand forecasting based on improved GA-BP neural network with indicator data characteristics. Appl. Sci. 2024, 14, 6766.
20. Hu, H.; Wang, X.; Zhang, Y.; Chen, Q.; Guan, Q. A comprehensive survey on contrastive learning. Neurocomputing 2024, 610, 128645.
21. Wei, T.; Yang, C.; Zheng, Y. Prototypical graph contrastive learning for recommendation. Appl. Sci. 2025, 15, 1961.
22. Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 11121–11128.
23. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 11106–11115.
24. Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430.
25. Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. ETSformer: Exponential smoothing transformers for time-series forecasting. arXiv 2022, arXiv:2202.01381.
26. Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. arXiv 2022, arXiv:2211.14730.
| Hyperparameter | Value |
|---|---|
| Optimizer | Adam |
| Initial learning rate | 0.001 |
| Batch size | 64 |
| Number of epochs | 100 |
| Early stopping patience | 10 epochs |
| Number of attention heads | 8 |
| Hidden layer dimension | 512 |
| Dropout rate | 0.1 |
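A minimal sketch of how the hyperparameters above could be wired into a training loop, assuming PyTorch. Only the values drawn from the table (Adam, learning rate 0.001, batch size 64, at most 100 epochs, patience 10, 8 heads, hidden dimension 512, dropout 0.1) come from the paper; the stand-in encoder, its two-layer depth, and the toy reconstruction objective are assumptions.

```python
import torch

# Stand-in encoder: hidden size, head count, and dropout come from the table;
# the two-layer depth is an assumption, not the TFHA architecture.
model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, dropout=0.1,
                                     batch_first=True),
    num_layers=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

x = torch.randn(64, 48, 512)          # one mini-batch of size 64 (per the table)
best, patience, bad = float("inf"), 10, 0
for epoch in range(100):               # at most 100 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)        # toy reconstruction objective
    loss.backward()
    optimizer.step()
    val = loss.item()                  # a real run would use held-out data
    if val < best - 1e-6:
        best, bad = val, 0
    else:
        bad += 1
        if bad >= patience:            # early stopping after 10 stale epochs
            break
```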
MSE and MAE for each dataset and forecasting horizon (lower is better):

| Dataset | Horizon | DLinear [22] MSE | DLinear [22] MAE | Informer [23] MSE | Informer [23] MAE | Autoformer [24] MSE | Autoformer [24] MAE | ETSformer [25] MSE | ETSformer [25] MAE | PatchTST [26] MSE | PatchTST [26] MAE | TFHA MSE | TFHA MAE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TravelBrandOps | 48 | 0.070 | 0.086 | 0.062 | 0.241 | 0.058 | 0.045 | 0.077 | 0.205 | 0.046 | 0.028 | 0.040 | 0.026 |
| TravelBrandOps | 96 | 0.092 | 0.120 | 0.090 | 0.320 | 0.085 | 0.060 | 0.079 | 0.260 | 0.070 | 0.080 | 0.055 | 0.062 |
| TravelBrandOps | 192 | 0.130 | 0.180 | 0.120 | 0.350 | 0.104 | 0.170 | 0.180 | 0.290 | 0.120 | 0.140 | 0.095 | 0.110 |
| SocialEngageBuzz | 48 | 0.069 | 0.089 | 0.065 | 0.244 | 0.055 | 0.048 | 0.051 | 0.210 | 0.049 | 0.031 | 0.042 | 0.029 |
| SocialEngageBuzz | 96 | 0.114 | 0.125 | 0.094 | 0.325 | 0.105 | 0.066 | 0.155 | 0.267 | 0.074 | 0.087 | 0.058 | 0.065 |
| SocialEngageBuzz | 192 | 0.134 | 0.186 | 0.123 | 0.355 | 0.115 | 0.174 | 0.186 | 0.293 | 0.122 | 0.146 | 0.101 | 0.115 |
| BookingTrendDaily | 48 | 0.081 | 0.082 | 0.055 | 0.230 | 0.044 | 0.042 | 0.065 | 0.215 | 0.035 | 0.025 | 0.030 | 0.022 |
| BookingTrendDaily | 96 | 0.095 | 0.115 | 0.085 | 0.315 | 0.063 | 0.158 | 0.045 | 0.255 | 0.060 | 0.075 | 0.052 | 0.055 |
| BookingTrendDaily | 192 | 0.125 | 0.170 | 0.115 | 0.345 | 0.112 | 0.065 | 0.105 | 0.285 | 0.110 | 0.135 | 0.088 | 0.102 |
| CustomerSentiment | 48 | 0.062 | 0.088 | 0.058 | 0.242 | 0.041 | 0.047 | 0.078 | 0.208 | 0.042 | 0.030 | 0.037 | 0.028 |
| CustomerSentiment | 96 | 0.103 | 0.118 | 0.092 | 0.328 | 0.084 | 0.068 | 0.148 | 0.070 | 0.067 | 0.065 | 0.053 | 0.062 |
| CustomerSentiment | 192 | 0.145 | 0.182 | 0.126 | 0.358 | 0.118 | 0.278 | 0.188 | 0.298 | 0.125 | 0.152 | 0.098 | 0.117 |
| Method | Metric | TravelBrandOps | SocialEngageBuzz | BookingTrendDaily | CustomerSentiment |
|---|---|---|---|---|---|
| Full Model | MSE | 0.46 | 0.49 | 0.35 | 0.42 |
| Full Model | MAE | 0.28 | 0.31 | 0.25 | 0.30 |
| w/o FFA | MSE | 0.52 | 0.55 | 0.40 | 0.48 |
| w/o FFA | MAE | 0.32 | 0.35 | 0.28 | 0.33 |
| w/o TFE | MSE | 0.60 | 0.63 | 0.48 | 0.55 |
| w/o TFE | MAE | 0.35 | 0.38 | 0.32 | 0.37 |
| w/o CTF-RE | MSE | 0.58 | 0.61 | 0.46 | 0.53 |
| w/o CTF-RE | MAE | 0.33 | 0.36 | 0.30 | 0.35 |
Robustness under strategy-input perturbations (higher MAE/MAPE indicates stronger degradation):

| Perturbation Type | MAE | MAPE (%) | Performance Change vs. Original |
|---|---|---|---|
| No Perturbation | 0.069 | 7.18 | — |
| Drop 10% of Strategies | 0.082 | 9.32 | −18.8% |
| Drop 20% of Strategies | 0.088 | 10.45 | −28.2% |
| Semantic Noise Injection | 0.077 | 8.65 | −13.4% |
| Shuffle Strategy Time Order | 0.085 | 9.98 | −26.3% |
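A minimal sketch of how the three perturbations in the table above might be implemented, assuming each strategy is stored as a (timestamp, embedding) record; the data layout, noise scale, and random seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_strategies(records: list, frac: float) -> list:
    """Remove a random fraction of strategy records (10% or 20% above)."""
    keep = rng.random(len(records)) >= frac
    return [r for r, k in zip(records, keep) if k]

def inject_semantic_noise(embeds: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Add Gaussian noise to strategy text embeddings."""
    return embeds + rng.normal(0.0, scale, size=embeds.shape)

def shuffle_time_order(records: list) -> list:
    """Permute the temporal order of strategies, keeping their content."""
    return [records[i] for i in rng.permutation(len(records))]

# Example: ten toy (timestamp, 4-dim embedding) strategy records.
records = [(t, rng.normal(size=4)) for t in range(10)]
print(len(drop_strategies(records, 0.2)))  # roughly 8 of 10 survive
```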