Article

A Hybrid Temporal Recommender System Based on Sliding-Window Weighted Popularity and Elite Evolutionary Discrete Particle Swarm Optimization

Graduate School of Technology, Industrial and Social Sciences, Tokushima University, Tokushima 770-8506, Japan
* Authors to whom correspondence should be addressed.
Electronics 2026, 15(8), 1544; https://doi.org/10.3390/electronics15081544
Submission received: 13 February 2026 / Revised: 29 March 2026 / Accepted: 5 April 2026 / Published: 8 April 2026

Abstract

This paper proposes a hybrid non-personalized temporal recommendation framework integrating Sliding-Window Weighted Popularity (SWWP) with Elite Evolutionary Discrete Particle Swarm Optimization (EEDPSO) to address the challenges of extreme data sparsity and temporal dynamics in global popularity-based recommendation. We first formally prove the NP-hardness of the temporal-constrained recommendation problem, justifying the adoption of a metaheuristic approach. The proposed SWWP model employs a dual-scale sliding-window mechanism to balance short-term trend adaptation with long-term periodicity capture. A novel deep integration mechanism couples SWWP with EEDPSO through a “purchase heat” indicator, which guides temporal-aware particle initialization, position updates, and fitness evaluation. Extensive experiments on the Amazon Reviews dataset with extreme sparsity (density < 0.0005%) demonstrate that SWWP achieves an NDCG@20 of 0.245, outperforming nine temporal baselines by at least 13%. Furthermore, under a unified fitness function incorporating temporal prediction accuracy, the SWWP-EEDPSO framework achieves 5.95% higher fitness compared to vanilla EEDPSO, while significantly outperforming Differential Evolution and Genetic Algorithms. The temporally informed search strategy enables SWWP-EEDPSO to discover recommendations that better align with future user behavior, while maintaining sub-millisecond online query latency (0.52 ms) through offline precomputation and caching, demonstrating practical feasibility for deployment scenarios where periodic offline updates are acceptable.

1. Introduction

Recommender systems in real-world e-commerce scenarios face two fundamental challenges: extreme data sparsity and temporal dynamics [1]. While massive catalogs create interaction matrices with densities often below 0.01%, user preferences and item popularity are highly volatile, driven by short-term trends and long-term seasonality [2]. Traditional collaborative filtering and deep learning models struggle in this environment; they either overfit due to data scarcity or fail to capture rapid popularity shifts due to static modeling assumptions [3].
Before detailing our approach, we clarify the scope and rationale of this work. Our framework targets the non-personalized temporal recommendation setting, where the goal is to generate a single, globally optimal top-K list that serves the entire user population at a given time τ . This setting is both practically important and theoretically well-motivated for the following reasons. First, in environments with extreme sparsity (density < 0.0005%), over 95% of users have fewer than two recorded interactions, rendering user-specific preference modeling statistically unreliable [1]. Under such conditions, personalized methods (e.g., matrix factorization, sequential models) suffer from severe cold-start degradation, as demonstrated empirically in [2]. Second, non-personalized popularity-aware lists serve as the primary recommendation surface in many real-world scenarios—including e-commerce homepage “trending” sections, app store featured lists, and news portal highlights—where serving millions of users with individualized lists is either computationally prohibitive or operationally unnecessary [4]. Third, global temporal popularity rankings naturally function as high-quality candidate generators for downstream personalized re-ranking stages, making our framework complementary to, rather than a replacement for, user-level models.
Formally, we address the following research questions:
  • RQ1: Can a dual-scale sliding-window model effectively capture both short-term trends and long-term periodicities under extreme data sparsity?
  • RQ2: Does deep integration of temporal modeling with evolutionary optimization (via the purchase heat indicator) yield better solutions than applying either technique independently?
  • RQ3: What are the individual contributions of each framework component (SWWP-guided initialization, SWWP-guided position updates, temporal fitness)?
To address data sparsity, meta-heuristic algorithms like Elite Evolutionary Discrete Particle Swarm Optimization (EEDPSO) [5] have shown promise. By optimizing set-based metrics (e.g., Jaccard distance) without relying on gradient descent, EEDPSO avoids the cold-start failures common in neural networks. However, standard EEDPSO has a critical limitation: it is time-agnostic. It treats all historical interactions equally, recommending “all-time bestsellers” even when they are out of season or no longer trending. This static nature compromises recommendation timeliness and fails to align with the dynamic purchasing intent of users.
To bridge this gap, we propose the Sliding-Window Weighted Popularity (SWWP) model, a lightweight temporal modeling mechanism designed explicitly for sparse environments. Unlike rigid time-decay functions, SWWP employs a dual-scale window strategy: it combines a Short-Term Trend Window to capture immediate popularity drifts (e.g., viral products) with a Long-Term Periodic Window to identify recurring seasonal patterns (e.g., holiday or weekend effects). This allows the system to distinguish between fading fads and enduring habits, generating a highly relevant candidate pool even when user-specific interaction history is minimal.
Furthermore, we present a hybrid framework that deeply integrates SWWP with EEDPSO. Rather than a simple weighted combination, we introduce a novel purchase heat indicator H(τ). This indicator quantifies the current temporal activity level based on time segments, weekdays, and seasonal factors. It acts as a dynamic bridge, guiding the evolutionary search by adjusting particle initialization and fitness evaluation. This ensures that the global optimization capability of EEDPSO is directed towards temporally relevant regions of the search space.
The main contributions of this paper are summarized as follows:
1. Hybrid Optimization Framework: We propose the first framework integrating SWWP with EEDPSO. We formally prove the NP-hardness of the temporal-constrained recommendation problem, establishing the theoretical necessity of our metaheuristic approach.
2. Deep Integration Mechanism: We design a purchase heat indicator H(τ) that enables algorithm-level fusion. This mechanism dynamically balances temporal relevance with optimization diversity through time-aware initialization, differentiated position updates, and temporal fitness bonuses.
3. Robust Temporal Modeling for Extreme Sparsity: We develop a dual-scale SWWP model that leverages hierarchical features (segments, weekdays, months). This ensures robust trend capture even in datasets with densities < 0.0005%, where traditional sequential models often fail.
4. Extensive Empirical Evaluation: Experiments on Amazon Reviews data demonstrate that SWWP achieves an NDCG@20 of 0.245, outperforming nine temporal baselines by at least 13%. The hybrid framework significantly surpasses Differential Evolution (DE) and Genetic Algorithms (GAs) in temporal prediction quality. A systematic ablation study isolates the contribution of each integration mechanism, revealing that SWWP-guided position updates and temporal fitness are jointly critical, improving temporal prediction (Mass@K) by over 7× compared to unguided searches.
The remainder of this paper is organized as follows: Section 2 provides a comprehensive literature review of temporal recommendation systems and meta-heuristic optimization. Section 3 outlines the preliminaries, detailing the standard EEDPSO algorithm, which serves as our base optimizer. Section 4 presents our proposed SWWP-EEDPSO hybrid framework, including the formal problem definition, the SWWP temporal modeling, and the deep integration mechanism. Section 5 describes our experimental evaluation, covering the experimental setup, comparative analysis, and ablation studies. Finally, Section 6 concludes the paper with key findings and future research directions.

2. Literature Review

This section reviews three foundational areas: temporal modeling in recommender systems, sliding-window techniques, and meta-heuristic optimization for recommendation.
Temporal signals capture how user preferences and item popularity evolve over time, and they form a critical dimension in recommender systems [6]. In the Netflix Prize, Koren [7] was the first to show the sizable impact of temporal dynamics on accuracy; his TimeSVD++ model introduced time-dependent bias terms and reduced RMSE by 3.7%.
A large body of work since then confirms the value of temporal modeling at multiple granularities. Micro-level (hourly) patterns reflect users’ immediate intent shifts. In Airbnb’s production setting, Grbovic and Cheng [8] observed that searches at 8–10 a.m. skew toward business stays, whereas those at 8–10 p.m. lean toward leisure trips. Liu et al. [9] further showed that incorporating hourly features can raise CTR by 15%. Meso-level (daily/weekly) patterns capture weekday–weekend differences. Using Reddit data, Pálovics et al. [10] found that views of technology content on Mondays are 40% higher than on weekends. Amazon’s study [11] reported an 8–12% lift in conversion when day-of-week signals are added to the model. Macro-level (monthly/seasonal) seasonality and holiday effects drive long-term popularity shifts. In eBay’s practice, Zimdars et al. [12] recorded a 300% surge in searches for gift items in the two weeks before Christmas.
Temporal context is also valuable under data sparsity. Zhang et al. [13] proposed a time-aware collaborative filtering framework that leverages group behavior in similar time periods to produce effective recommendations in cold-start settings. A survey by Campos et al. [2] noted that when users have fewer than five interactions, adding temporal features can improve accuracy by 25%.
The sliding window is a classic technique in time-series analysis, with roots in signal processing [14]. Ding and Li [15] first brought sliding windows to collaborative filtering and reduced MAE by 6.5%. Vinagre et al. [16] introduced adaptive windows that resize dynamically to track concept drift. Matuszyk et al. [17] compared five decay functions and found exponential decay to be the most robust in most scenarios, consistent with the Ebbinghaus forgetting curve [18].
EEDPSO [5] addresses the challenges of applying traditional PSO to recommendation by redefining velocity and position updates in discrete spaces. In experiments by Lin et al., EEDPSO outperformed Genetic Algorithms by 3% and Differential Evolution by 27% on sparse datasets. Its advantage is pronounced in cold-start scenarios because the optimization does not rely on gradients from historical data. However, a key limitation of EEDPSO is its static optimization assumption, which leaves it ill-suited to temporal dynamics.
Burke’s classic taxonomy classifies hybrid strategies into five types, with deep integration offering the strongest synergy [19]. A current trend is to tightly couple optimization with feature extraction. Zhou et al. [20] demonstrated the effectiveness of such coupling within a deep reinforcement learning framework. Recent advances in personalized recommendation have explored multi-interest learning to better capture diverse user preferences. Xie et al. [21] proposed rethinking multi-interest candidate matching by improving interest representation diversity, demonstrating the importance of capturing heterogeneous user intents. Chen et al. [22] introduced joint factual and counterfactual explanations for GNN-based recommendations, highlighting the role of explainability in modern recommender systems. While these works focus on personalized settings with sufficient interaction data, they underscore the broader goal of understanding temporal and contextual factors in recommendation quality—a goal our framework pursues through population-level temporal modeling under extreme sparsity. The proposed SWWP-EEDPSO framework follows this trajectory by achieving an algorithm-level deep integration that unifies temporal feature modeling with global optimization.

3. Elite Evolutionary Discrete Particle Swarm Optimization

This section briefly reviews the Elite Evolutionary Discrete Particle Swarm Optimization (EEDPSO) algorithm [5], which serves as the optimization backbone of our hybrid framework. We focus on the key design choices relevant to our temporal extension; readers are referred to [5] for full derivations, ablation studies, and hyperparameter analysis.

3.1. Overview of EEDPSO

EEDPSO adapts Particle Swarm Optimization to the discrete space of recommendation by modeling each particle’s position as a length-K recommendation list X_i(t) = (x_{i,1}(t), …, x_{i,K}(t)), where each x_{i,j}(t) ∈ P is a distinct item. The velocity v_i(t) ∈ {1, 2, …, K} represents the number of dimensions to modify in each update. Dissimilarity between lists is measured by a perturbed Jaccard distance D(A, B) = 1 − |A ∩ B| / (|A ∪ B| + c), where the constant offset c maintains exploration pressure and mitigates premature convergence [5].
The velocity update follows the standard PSO formulation with inertia weight ω, cognitive coefficient c_1, and social coefficient c_2:
v_i(t+1) = ω · v_i(t) + c_1 r_1 · D(X_{p,i}(t), X_i(t)) + c_2 r_2 · D(X_g(t), X_i(t))
where r_1, r_2 ∼ U(0, 1), X_{p,i}(t) is the personal best, and X_g(t) is the global best. The real-valued velocity is discretized to v_i^discrete = min(max(v_i(t+1), 1), K). Position updates are performed via roulette wheel selection among exploration, personal best, and global best sources, with duplicate avoidance ensuring list validity. Full details of the discretization, position update, and duplicate handling procedures are provided in [5].
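As a concrete illustration, the perturbed Jaccard distance and one velocity step can be sketched as follows (a minimal sketch; the offset c = 0.1 and the PSO coefficients here are illustrative placeholders, not the tuned values from [5]):

```python
import random

def perturbed_jaccard(a, b, c=0.1):
    """Perturbed Jaccard distance D(A, B) = 1 - |A ∩ B| / (|A ∪ B| + c).
    The offset c keeps even identical lists at a small positive distance,
    preserving exploration pressure."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / (len(a | b) + c)

def velocity_update(v, x, p_best, g_best, K, omega=0.7, c1=1.5, c2=1.5):
    """One velocity step: real-valued PSO update, then discretization by
    clamping to [1, K] (the number of list positions to modify)."""
    r1, r2 = random.random(), random.random()
    v_new = (omega * v
             + c1 * r1 * perturbed_jaccard(p_best, x)
             + c2 * r2 * perturbed_jaccard(g_best, x))
    return min(max(round(v_new), 1), K)
```

Because both distance terms are bounded by 1, the discretized velocity stays small unless the inertia term dominates, which matches the role of velocity as a modification budget rather than a displacement vector.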

3.2. Base Fitness Function

EEDPSO employs a multi-objective fitness function that balances recommendation quality across four dimensions [5]:
f_total(X_i) = w_1 f_pop + w_2 f_tag + w_3 f_div + w_4 f_cov
where f_pop is a Bayesian-adjusted popularity score, f_tag measures category-level tag heat, f_div = |⋃_{j=1}^{K} T_{i,j}| encourages tag diversity, and f_cov = O_str / K measures strategic item coverage. The weights are set to w_1 = w_2 = 1, w_3 = 3, w_4 = 100, following the Optuna-based hyperparameter optimization and sensitivity analysis reported in [5]. This configuration simultaneously achieves tag coverage above 0.7 and strategic item coverage of at least 0.1 with minimal impact on popularity.
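For concreteness, the scalarized fitness can be sketched as follows (a toy sketch; the score tables, item data, and the simple mean-based f_pop and f_tag are hypothetical stand-ins for the Bayesian-adjusted statistics of [5]):

```python
def fitness(items, tags, pop_score, tag_heat, strategic_items, K,
            w=(1.0, 1.0, 3.0, 100.0)):
    """f_total = w1*f_pop + w2*f_tag + w3*f_div + w4*f_cov.
    f_div counts distinct tags covered by the list; f_cov is the
    fraction of strategic items in the list (O_str / K)."""
    w1, w2, w3, w4 = w
    f_pop = sum(pop_score[p] for p in items) / K            # mean popularity
    f_tag = sum(tag_heat[t] for p in items for t in tags[p]) / K
    f_div = len(set().union(*(tags[p] for p in items)))     # |union of tags|
    f_cov = sum(p in strategic_items for p in items) / K
    return w1 * f_pop + w2 * f_tag + w3 * f_div + w4 * f_cov
```

With the default weights, the large w_4 makes strategic coverage dominate whenever two candidate lists are otherwise comparable, mirroring the hard coverage floor reported in [5].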
We adopt the EEDPSO fitness function, hyperparameters, and experimental environment directly from [5] to ensure reproducibility and fair baseline comparison. In Section 4.3.3, we extend f total with a temporal prediction component to form the unified hybrid fitness function used throughout our experiments.

4. SWWP–EEDPSO Hybrid Framework

In this section, we present the SWWP-EEDPSO hybrid framework that addresses temporal recommendation in sparse environments. We begin by formally defining the recommendation problem as a multi-objective optimization task with temporal constraints. We then detail the Sliding-Window Weighted Popularity (SWWP) model, which captures time-sensitive purchasing patterns through multi-dimensional temporal features and exponential decay mechanisms. Finally, we introduce the deep integration mechanism that leverages the purchase heat indicator to achieve algorithm-level fusion between temporal modeling and the EEDPSO algorithm introduced in Section 3, ensuring that temporal insights guide the entire optimization process rather than serving merely as preprocessing.
Figure 1 illustrates the overall architecture of the SWWP-EEDPSO hybrid framework, which consists of four interconnected components operating in a pipeline fashion. The temporal feature extraction module processes raw interaction data to extract multi-dimensional temporal patterns, including time segments, weekdays, and monthly variations. These features feed into the SWWP temporal modeling component, which computes sliding-window weighted popularity scores and generates the purchase heat indicator H ( τ ) . The EEDPSO Optimization module then performs discrete Particle Swarm Optimization (as described in Section 3) using the temporally filtered candidate pool. Finally, the Hybrid Integration mechanism orchestrates the deep coupling between temporal modeling and evolutionary optimization through three critical touchpoints: temporal-aware particle initialization, differentiated position updates, and fitness evaluation with temporal bonuses. This architecture enables the framework to balance temporal relevance with global optimization quality while maintaining sub-millisecond response times suitable for real-time deployment.
For ease of reference, Table 1 summarizes the principal symbols used throughout this paper.

4.1. Problem Definition

The recommendation problem in sparse, time-sensitive environments requires addressing both temporal dynamics and multi-objective optimization. We decompose this challenge into two interconnected components: temporal pattern extraction through SWWP and combinatorial optimization via EEDPSO (Section 3).

4.1.1. Theoretical Framework: Temporal-Constrained Optimization

We formalize the temporal recommendation problem as a scalarized multi-objective optimization that unifies set-based and position-dependent objectives.
Definition 1
(Temporal-Constrained Recommendation Problem). The temporal recommendation task is formulated as a scalarized multi-objective optimization problem. Unlike traditional constrained optimization, we incorporate temporal consistency as a soft objective to balance strict constraints with exploration flexibility. Given a time τ, the goal is to find an ordered list L that maximizes:
max_{L ∈ ℒ} f_total(L, τ) = ∑_{i=1}^{m} w_i · f_i(L) + λ · T(L, τ)
s.t. D(L) ≥ δ_div (diversity constraint)
L = [p_1, p_2, …, p_K], p_i ∈ P, p_i ≠ p_j for i ≠ j
where ℒ denotes the set of all valid ordered lists of size K; f_i(L) represents conventional objective components (e.g., popularity, relevance), as defined in the fitness function (Equation (2) in Section 3.2); w_i ≥ 0 are weights reflecting relative importance; and λ controls the strength of the temporal consistency reward T(L, τ). This formulation aligns with the “Temporal Bonus” mechanism implemented in the fitness evaluation.
Definition 2
(Context-Aware Temporal Consistency). To formally define temporal consistency, we first introduce the temporal context embedding at time τ as C(τ) = {C_S(τ), C_L(τ)}, where C_S(τ) represents the short-term trend context vector and C_L(τ) represents the long-term periodic context vector. The temporal consistency of a list L is defined as the scalarized similarity between the list items and these context embeddings:
T(L, τ) = η · f_trend(L, C_S(τ)) + (1 − η) · f_period(L, C_L(τ))
where η ∈ [0, 1] balances the attention between immediate trends and recurrent patterns. Here, f_trend measures the alignment of items in L with the recent popularity drift captured in C_S(τ), while f_period evaluates the alignment with historical seasonal patterns encoded in C_L(τ).
Remark 1
(Remark on objective balancing). We note that Equation (4) defines a scalarized multi-objective function rather than a gradient-based loss. Since our framework employs EEDPSO—a derivative-free metaheuristic optimizer—there is no gradient computation and hence no gradient conflict between the trend and periodicity terms. The balance between f trend and f period is controlled by the scalar weight η, which is set to 0.7 based on the empirical observation that short-term trends dominate under extreme sparsity (Section 4.2.3). This scalarization approach avoids the Pareto-front complexity of true multi-objective optimization while providing a principled trade-off mechanism.
Definition 3
(Multi-Level Periodic Modeling). To concretize the periodic alignment f_period, our framework introduces a hierarchical temporal feature extraction that captures patterns at multiple granularities:
f_period(L, C_L(τ)) = ∑_{g ∈ {s,d,m}} β_g · ϕ_g(L, τ)
where ϕ_s, ϕ_d, and ϕ_m represent the alignment scores at segment-level (intra-day), day-level (weekly), and month-level (seasonal), respectively, and β_g denotes the learned weight for each granularity.
Theorem 1
(Computational Hardness). The temporal-constrained recommendation problem defined in Equation (3) is NP-hard when the objective includes coverage maximization.
Proof. 
We provide a polynomial-time reduction from the Maximum Coverage Problem (MCP). Given an MCP instance with universe U = {e_1, …, e_m}, a collection of sets S = {S_1, …, S_n} with S_i ⊆ U, and a parameter k, the goal is to select an index set I with |I| = k maximizing |⋃_{i ∈ I} S_i|.
We construct a recommendation instance as follows:
1. Item construction: For each set S_i, create an item p_i with tags T_i = S_i (elements become tags).
2. Objective mapping: Define the fitness function based on the tag-based diversity f_div(L) = |⋃_{p ∈ L} T_p|. Mathematically, this diversity objective is equivalent to maximizing the tag coverage over the selected items. This mapping directly encodes the objective of the Maximum Coverage Problem (MCP), where tags represent elements to be covered.
3. Parameter simplification: Eliminate the temporal objective influence by setting λ = 0. Relax the diversity lower-bound constraint by setting δ_div = 0 (or −∞). This ensures the constraint D(L) ≥ δ_div is trivially satisfied, forcing the solver to focus solely on maximizing the diversity component f_div within the objective function.
4. List size: Set the recommendation list size K = k.
Note that while our full objective includes position-dependent terms that break submodularity, the reduction focuses on the set-based diversity component f div , which is submodular. The position-dependent components only make the problem harder, not easier.
A solution to MCP achieving coverage C corresponds to a recommendation list L with f div ( L ) = C . Conversely, an optimal recommendation list with diversity D yields an MCP solution with coverage D. Since MCP is NP-hard and our reduction is polynomial-time-computable, the temporal-constrained recommendation problem is NP-hard.
Practical Significance. We note that Theorem 1 establishes a worst-case complexity lower bound under a simplified objective. The actual hybrid objective used in our system (Equation (18)) includes additional position-dependent terms that only increase computational difficulty. The primary purpose of this result is twofold: (1) to provide formal justification for adopting metaheuristic approaches rather than exact solvers and (2) to motivate the search space reduction achieved by SWWP’s temporal filtering, which empirically narrows the candidate set from | P | to | P ( τ ) | before optimization begins. We do not claim approximation guarantees for our metaheuristic, relying instead on comprehensive empirical evaluation (including the ablation study in Section 5.4) to demonstrate practical effectiveness.
Empirical Observation (Search Space Reduction). Let P be the full item set and P(τ) = {p ∈ P : T({p}, τ) ≥ θ_min} be the temporally relevant subset. In our experiments with sparse interaction matrices (density ρ < 0.001), we empirically observe |P(τ)|/|P| = O(ρ). This suggests an effective search space reduction from O(|P|^K) to O(|P(τ)|^K), though this reduction factor is data-dependent and not theoretically guaranteed.

4.1.2. Temporal Recommendation Environment

Let U = {u_1, u_2, …, u_|U|} denote the set of users, P = {p_1, p_2, …, p_|P|} the set of items, and T = {t_1, t_2, …, t_|T|} the set of timestamps. Each observed user–item interaction is represented as a triplet (u, p, t). In our framework, these triplets serve as the foundational data for temporal modeling; specifically, interaction frequencies are aggregated from the historical data within the dual-scale windows and weighted by their temporal relevance to the evaluation time τ (as detailed in Section 4.2).
The temporal recommendation challenge involves generating a time-aware candidate pool L_SWWP(τ) at evaluation time τ. This pool is constructed by calculating the integrated popularity score Pop_SWWP(p, τ) for all items p ∈ P according to Equation (12). By sorting all items in descending order based on these scores, we obtain a ranked list and select the top N_cand = 2000 items. This candidate pool serves as the search space for the subsequent EEDPSO optimization phase (Section 3), providing high-quality, temporally relevant items while maintaining sufficient variety for effective global exploration.
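In code, pool construction reduces to a single ranking pass (a minimal sketch assuming the Pop_SWWP scores have already been computed into a dictionary; names are ours):

```python
def build_candidate_pool(pop_swwp, n_cand=2000):
    """Sort items by descending SWWP score and keep the top n_cand
    as the EEDPSO search space. pop_swwp maps item id -> score."""
    ranked = sorted(pop_swwp, key=pop_swwp.get, reverse=True)
    return ranked[:n_cand]
```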

4.1.3. Time-Series Recommendation Optimization Problem

Given the temporal candidate pool, the optimization task is to select and arrange K items (where K ≪ N_cand) to form an optimal recommendation list L* = [p_1, p_2, …, p_K] that maximizes the scalarized multi-objective function. This formulation is consistent with Definition 1, where:
L* = arg max_{L ∈ ℒ_SWWP(τ)} f_total(L, τ)
where ℒ_SWWP(τ) represents all valid ordered lists of size K drawn from the SWWP candidate pool, and f_total(L, τ) is the unified objective function that combines both set-based metrics (diversity, coverage) and position-dependent metrics (weighted popularity with rank decay), building upon the fitness function defined in Section 3.2.
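To make the size of this search space concrete, an exhaustive baseline over a toy pool can be written as follows (illustrative only; the factorial enumeration over ordered K-item lists is precisely what the NP-hardness result above rules out at realistic |P| and K, motivating the EEDPSO search):

```python
from itertools import permutations

def best_list_bruteforce(candidates, K, f_total):
    """Exhaustive arg max over all ordered K-item lists drawn from the
    candidate pool. Feasible only for toy sizes: there are
    |pool|! / (|pool| - K)! ordered lists to score."""
    return max(permutations(candidates, K), key=f_total)
```

Even with N_cand = 2000 and K = 20 the enumeration is astronomically large, whereas EEDPSO evaluates only a fixed budget of particle positions per generation.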

4.2. Sliding-Window Weighted Popularity (SWWP)

The Sliding-Window Weighted Popularity (SWWP) model captures temporal dynamics by recognizing that user behavior is driven by two distinct forces: short-term interest drift (trend) and long-term recurring habits (periodicity). To model these effectively under data sparsity, we propose a Dual-Scale Hybrid Window mechanism.

4.2.1. Dual-Scale Window Definition

We define two distinct temporal windows relative to the evaluation time τ:
1. Short-Term Trend Window (W_S): Captures immediate popularity shifts.
W_S(τ) = [τ − Δ_S, τ]
where Δ_S is set to 14 days. In this window, we assume user preferences are continuous and heavily dependent on recency.
2. Long-Term Periodic Window (W_L): Captures recurrent seasonal or weekly patterns.
W_L(τ) = [τ − Δ_L, τ − Δ_S)
where Δ_L covers the available history (up to 1 year). This window defines the temporal scope for the periodic context C_L(τ) introduced in Definition 2. Specifically, the context representation C_L(τ) is constructed by aggregating interaction data from W_L(τ) that share structural similarity (e.g., same day of the week) with the evaluation time τ.
Clarification on window scope. The dual-scale windows operate at the global population level, not at the individual user level. That is, W_S(τ) aggregates all user interactions within the recent 14-day period, and W_L(τ) aggregates interactions from the extended history that share the same temporal context as τ (e.g., same weekday and time segment). This global aggregation is a deliberate design choice for our non-personalized setting: under extreme sparsity, where most users have ≤2 interactions, per-user window construction would yield empty or near-empty windows for the vast majority of users. The window sizes (Δ_S = 14 days, Δ_L up to 1 year) were selected based on the trade-off between data sufficiency and temporal relevance.

4.2.2. Time-Aware Feature Extraction

To match historical contexts in the long-term window, we extract discrete temporal features from any timestamp t:
F(t) = {ψ_s(t), ψ_d(t)}
where:
  • ψ_s(t) = ⌊h(t)/4⌋ ∈ {0, 1, …, 5} represents the intra-day time segment (e.g., morning, evening), with h(t) the hour of timestamp t.
  • ψ_d(t) ∈ {0, 1, …, 6} denotes the day of the week.
Note that we exclude the monthly feature ψ_m from strict item-matching conditions to avoid excessive sparsity, using it instead only as a macro-weighting factor in the purchase heat estimation.
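The feature map has a direct implementation (a minimal sketch; the function name is ours, and the Monday = 0 weekday convention is an assumption matching Python's `datetime`):

```python
from datetime import datetime

def temporal_features(t):
    """F(t) = (ψ_s(t), ψ_d(t)): a 4-hour intra-day segment in {0..5}
    and the day of week in {0..6} (Monday = 0)."""
    return t.hour // 4, t.weekday()
```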

4.2.3. Dual-Channel Popularity Computation

The final popularity score for an item p at time τ is a weighted fusion of the trend score and the periodic score.
1. Trend Score (S_trend): Computed within W_S using strict exponential decay. No feature matching is applied here, as recent interactions are assumed relevant regardless of specific timing.
S_trend(p, τ) = ∑_{t ∈ W_S(τ)} c(p, t) · λ_fast^(τ − t)
where λ_fast is a decay factor (e.g., 0.9) prioritizing very recent days.
2. Periodic Score (S_period): Computed within W_L. Here, we only count interactions that match the current temporal context F(τ).
S_period(p, τ) = ∑_{t ∈ W_L(τ), F(t) = F(τ)} c(p, t) · λ_slow^(τ − t)
where W_L(τ) is the long-term window, c(p, t) is the interaction count, λ_slow is a mild decay factor, and the condition F(t) = F(τ) filters for historical interactions occurring in the same periodic context (e.g., the same weekday and time segment) to capture recurring user habits.
3. Hybrid Fusion: The final SWWP score is obtained by:
Pop_SWWP(p, τ) = α · S_trend(p, τ) / max(S_trend) + (1 − α) · S_period(p, τ) / max(S_period)
where α ∈ [0, 1] controls the trade-off. In sparse settings, we set α = 0.7 to rely primarily on recent trends, using periodicity to refine the ranking.
Unlike rigid fallback strategies that require complex conditional logic, the dual-scale architecture provides an implicit, smooth fallback. If the current time context is unique and lacks historical data (sparse S period ), the algorithm naturally degrades to relying on S trend (recent global popularity). Conversely, if recent data is noisy, the historical periodic signal stabilizes the recommendation. This mathematical formulation eliminates the need for hard thresholds ( η ) and discontinuous logic branches found in traditional methods.
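The dual-channel computation and its fusion can be sketched per item as follows (a simplified sketch: days are integer offsets, λ_slow = 0.99 is an assumed value since the text specifies only a "mild" decay, and all names are ours):

```python
def channel_scores(tau, interactions, features,
                   delta_s=14, lam_fast=0.9, lam_slow=0.99):
    """Return (S_trend, S_period) for one item.
    interactions: day t -> count c(p, t); features: day t -> context F(t)."""
    s_trend = sum(c * lam_fast ** (tau - t)
                  for t, c in interactions.items()
                  if tau - delta_s <= t <= tau)
    s_period = sum(c * lam_slow ** (tau - t)
                   for t, c in interactions.items()
                   if t < tau - delta_s and features[t] == features[tau])
    return s_trend, s_period

def pop_swwp(s_trend, s_period, max_trend, max_period, alpha=0.7):
    """Hybrid fusion with per-channel max-normalization. Either channel
    degrades gracefully to zero when it has no data, which is the smooth
    fallback behavior described above."""
    norm_t = s_trend / max_trend if max_trend > 0 else 0.0
    norm_p = s_period / max_period if max_period > 0 else 0.0
    return alpha * norm_t + (1 - alpha) * norm_p
```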

4.2.4. Purchase Heat Estimation

While the item scoring logic ( Pop SWWP ) focuses on ranking relative item importance, we introduce a global indicator, the purchase heat H ( τ ) , to quantify the absolute activity level of the current context. This score serves as a meta-parameter for the EEDPSO initialization (Section 4.3.1). Consistent with our multi-scale analysis, H ( τ ) integrates factors from all three granularities (Segment, Weekday, Month), as macro-level seasonality (Month) significantly impacts total traffic volume even if it is too sparse to guide individual item matching.
H(τ) = min(B_s(τ) · F_d(τ) · F_m(τ), H_max)
where the component factors are derived from historical transaction volume analysis:
Base Heat B_s(τ): Estimated from the empirical distribution of hourly transaction volumes. We analyzed 6 months of historical data and computed quartiles of transaction intensity across time segments:
B_s(τ) =
  0.5 if ψ_s(τ) = 0 (0:00–4:00, Q1: 25th percentile)
  0.6 if ψ_s(τ) ∈ {1, 2} (4:00–12:00, Q2: median)
  0.7 if ψ_s(τ) ∈ {3, 4} (12:00–20:00, Q3: 75th percentile)
  0.8 if ψ_s(τ) = 5 (20:00–24:00, 90th percentile)
Weekday Factor F d ( τ ) : Derived from day-of-week transaction patterns:
$F_d(\tau) = \begin{cases} 1.2 & \text{if } \psi_d(\tau) \in \{5, 6\} \quad (\text{weekend: 20\% lift observed}) \\ 1.1 & \text{if } \psi_d(\tau) = 4 \quad (\text{Friday: 10\% lift observed}) \\ 1.0 & \text{otherwise} \quad (\text{baseline weekday activity}) \end{cases}$
Monthly Seasonal Factor F m ( τ ) : Based on seasonal purchase patterns:
$F_m(\tau) = \begin{cases} 1.15 & \text{if } \psi_m(\tau) \in \{11, 12\} \quad (\text{holiday season}) \\ 1.1 & \text{if } \psi_m(\tau) \in \{6, 7\} \quad (\text{mid-year promotions}) \\ 1.0 & \text{otherwise} \quad (\text{normal months}) \end{cases}$
The maximum heat threshold H max = 0.9 prevents extreme values that could over-bias the particle initialization.
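Equations (13)–(16) amount to a small lookup function. The sketch below mirrors the published factor tables; the weekday convention (Monday = 0, so that $\psi_d \in \{5, 6\}$ is the weekend) is an assumption consistent with Equation (15).

```python
def purchase_heat(segment: int, weekday: int, month: int, h_max: float = 0.9) -> float:
    """Sketch of Equation (13): H(tau) = min(B_s * F_d * F_m, H_max).

    segment in {0..5} indexes the 4-hour blocks, weekday in {0..6}
    (assuming Monday = 0), month in {1..12}; the tables mirror
    Equations (14)-(16).
    """
    base = {0: 0.5, 1: 0.6, 2: 0.6, 3: 0.7, 4: 0.7, 5: 0.8}[segment]   # B_s
    day = 1.2 if weekday in (5, 6) else 1.1 if weekday == 4 else 1.0   # F_d
    season = 1.15 if month in (11, 12) else 1.1 if month in (6, 7) else 1.0  # F_m
    return min(base * day * season, h_max)
```

For instance, a Saturday evening in December would reach the raw product 0.8 × 1.2 × 1.15 ≈ 1.10 and be capped at $H_{\max} = 0.9$.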
Empirical Derivation and Validation. The discrete factor values in Equations (14)–(16) were derived via a two-stage process on the training portion of our dataset. First, we fitted a multiplicative regression model on 6 months of hourly transaction volumes with segment, weekday, and month as categorical predictors, obtaining an adjusted $R^2 = 0.68$ (compared to $R^2 = 0.51$ for an additive baseline). Second, we discretized the estimated regression coefficients into the piecewise factors shown above, rounding to one decimal place for interpretability and implementation simplicity. For example, the weekend coefficient of 1.18 was rounded to 1.2, and the evening-segment coefficient of 0.79 was rounded to 0.8. This discretization incurred a negligible loss in explained variance ($\Delta R^2 < 0.01$). While the factors shown are category-aggregated, category-specific heat models could be developed for specialized domains with sufficient data, and the regression-based derivation procedure is directly transferable to new datasets.
The purchase heat score serves as a key parameter for determining the proportion of SWWP-recommended items in the initial population of the EEDPSO algorithm (Section 3), providing a principled bridge between temporal patterns and evolutionary search.

4.2.5. Efficient Implementation Through Precomputation

To ensure real-time performance, SWWP precomputes and caches weighted popularity scores for recent time periods. The precomputation process is formalized in Algorithm 1.
Algorithm 1 Dual-Scale SWWP Precomputation
Require: Dataset D, short window Δ_S, long window Δ_L, weight α
Ensure: Cache structure M mapping time step τ to ranked item list
 1: Initialize M ← ∅
 2: Pre-calculate global feature map Map_context: {Item : Count} from D using W_L
 3: for all evaluation time steps τ do
 4:     S_trend ← compute decay sums over W_S(τ)
 5:     F(τ) ← extract features {ψ_s(τ), ψ_d(τ)}
 6:     S_period ← lookup Map_context[F(τ)]
 7:     Calculate scores: compute Pop_SWWP(p, τ) for all items using Equation (12) based on S_trend and S_period
 8:     Store: M[τ] ← select top-N_cand items ranked by Pop_SWWP
 9: return M    ▹ To be used in Algorithm 2 and Section 4.3
Algorithm 2 Temporal-Aware Hybrid Initialization
Require: Current time τ, purchase heat H(τ), list size K, item set P, SWWP cache M
Ensure: Initial particle position X_init
 1: L_cand ← M[τ]    ▹ Retrieve ranked candidate pool from Algorithm 1
 2: n_temporal ← K × H(τ)    ▹ Calculate slot allocation for temporal items
 3: n_random ← K − n_temporal    ▹ Remaining slots for exploration
 4: S_temp ← sample n_temporal distinct items from L_cand
 5: S_rand ← sample n_random distinct items from P \ S_temp
 6: X_init ← Permute(S_temp ∪ S_rand)    ▹ Combine and shuffle to form position vector
 7: return X_init
This precomputation strategy ensures that online recommendation requests can be served with minimal latency while maintaining the sophistication of our temporal modeling approach.
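Algorithm 2 can be sketched in a few lines. The truncation of K × H(τ) to an integer slot count is our assumption, since the paper leaves the rounding unspecified:

```python
import random

def hybrid_init(cache_tau, item_set, K, heat, rng=random):
    """Minimal sketch of Algorithm 2: split the K slots between
    SWWP candidates (exploitation) and random items (exploration)."""
    # Truncation to int is an assumption; the paper writes n_temporal = K x H(tau).
    n_temporal = min(int(K * heat), len(cache_tau))
    s_temp = rng.sample(cache_tau, n_temporal)       # distinct SWWP candidates
    pool = [p for p in item_set if p not in set(s_temp)]
    s_rand = rng.sample(pool, K - n_temporal)        # distinct exploratory items
    x_init = s_temp + s_rand
    rng.shuffle(x_init)                              # Permute: merge and shuffle
    return x_init
```

Because H(τ) is capped at H_max = 0.9, the temporal share never consumes all K slots, so some exploratory slots always remain.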

4.3. Deep Integration Mechanism

The deep integration between SWWP and EEDPSO represents a crucial innovation in our framework, where temporal popularity patterns guide both the initialization and evolution of the particle swarm. This integration operates at multiple levels to ensure that time-sensitive recommendations are effectively incorporated throughout the optimization process described in Section 3.
Remark on the necessity of metaheuristic optimization. A natural question is whether simpler approaches (e.g., greedy selection or re-ranking heuristics) could replace the evolutionary optimization. The multi-objective nature of our fitness function—which jointly optimizes popularity, tag heat, diversity, coverage, and temporal prediction—creates a non-decomposable optimization landscape where greedy approaches provide no mechanism to escape local optima when diversity and popularity conflict. As demonstrated in the ablation study (Section 5.4), the SWWP-guided evolutionary search achieves an 8× improvement in temporal prediction quality over unguided search, confirming that the search strategy itself—not merely the objective function—is critical for discovering temporally relevant solutions. Formal comparison with re-ranking baselines remains an important direction for future work.

4.3.1. Temporal-Aware Particle Initialization

Theoretical Foundation of Purchase Heat. The purchase heat indicator H ( τ ) is derived from queuing theory and user behavior modeling. We model user arrivals as a non-homogeneous Poisson process with intensity function λ ( τ ) that varies across temporal dimensions.
The initialization phase leverages the purchase heat indicator H ( τ ) established in Section 4.2.4 to adaptively balance the ratio between temporally informed items and random exploratory items. By incorporating this temporal context, the initial swarm is biased toward regions of the search space with higher current purchasing activity, which effectively accelerates the convergence of the optimization process, as described in Algorithm 2.
Let $S_{\mathrm{random}}$ denote a randomly initialized swarm and $S_{H(\tau)}$ denote a swarm initialized with purchase-heat guidance, with respective mean convergence iterations $\bar{T}_{\mathrm{conv}}^{\mathrm{random}}$ and $\bar{T}_{\mathrm{conv}}^{H(\tau)}$. In our experiments, we observe that these values satisfy:
$\bar{T}_{\mathrm{conv}}^{H(\tau)} \leq \bar{T}_{\mathrm{conv}}^{\mathrm{random}} \cdot \left(1 - H(\tau) \cdot \epsilon\right)$
where $\epsilon \in (0.1, 0.2)$ represents the empirically observed information gain from temporal patterns, and $\bar{T}$ denotes the sample mean over multiple runs.
Rationale. The purchase heat-guided initialization places particles closer to temporally relevant regions of the search space. While we cannot provide formal convergence guarantees without specific assumptions on the fitness landscape, our experimental results (Table 2) consistently show faster convergence when incorporating temporal guidance, with the acceleration factor correlating with the purchase heat value. Algorithm 2 details the initialization process using the precomputed cache M [ τ ] . The particle size K is dynamically partitioned based on purchase heat H ( τ ) : n temporal items are sampled from the SWWP candidates to ensure relevance, while n random items are drawn from the global set P to maintain diversity. The final position X init is formed by merging and shuffling these subsets.

4.3.2. Temporal-Guided Position Updates

During the iterative optimization process, the integration mechanism maintains temporal awareness in particle position updates. When EEDPSO determines that certain dimensions of a particle’s position should be modified (based on velocity calculations using Equation (1) in Section 3.1), the replacement strategy considers whether each position originally contained an SWWP-recommended item.
For positions that initially held SWWP items, replacements are preferentially selected from the current SWWP recommendation pool, ensuring that temporal popularity signals continue to influence the solution. This approach preserves the time-sensitive nature of these positions while allowing for exploration within the temporally relevant item space.
This differentiated update strategy creates a dual-track evolution process: one track maintains strong temporal relevance through SWWP guidance, while the other track explores the broader solution space for potentially overlooked high-quality items.
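The per-dimension replacement rule described above can be sketched as follows; the function and argument names are illustrative, not the paper's API:

```python
import random

def replace_dimension(current_item, was_swwp_slot, swwp_pool, global_pool, rng=random):
    """Sketch of the dual-track update: slots that originally held
    SWWP-recommended items are refilled from the current SWWP pool
    (preserving temporal relevance), while the remaining slots draw
    from the global item set (broader exploration)."""
    pool = swwp_pool if was_swwp_slot else global_pool
    candidates = [p for p in pool if p != current_item]
    # If the pool offers no alternative, keep the current item unchanged.
    return rng.choice(candidates) if candidates else current_item
```

The `was_swwp_slot` flag would be recorded at initialization time (Algorithm 2) and carried with the particle, so that temporal slots stay on the SWWP track throughout the evolution.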

4.3.3. Unified Fitness Function with Temporal Prediction

To evaluate temporal recommendation effectiveness under a fair comparison, we extend the base fitness function f total (Equation (2), detailed in [5]) with a temporal prediction component. The unified hybrid fitness function is:
$f_{\mathrm{hybrid}}(X_i, \tau) = \underbrace{f_{\mathrm{total}}(X_i)}_{\text{base objectives}} + \gamma \cdot \underbrace{f_{\mathrm{pred}}(X_i, \tau)}_{\text{temporal prediction (new)}}$
where γ is the temporal prediction weight. The base component f total captures static recommendation quality (popularity, tag heat, diversity, and strategic coverage), as established in Section 3.2. The novel temporal component f pred quantifies how well the recommendation list captures actual future user interactions within a temporal window [ τ , τ + Δ ] :
$f_{\mathrm{pred}}(X_i, \tau) = \dfrac{\sum_{j=1}^{K} c(x_{i,j}, \tau)}{\sum_{k=1}^{K} c(p_k^*, \tau)}$
where $c(x_{i,j}, \tau)$ denotes the actual interaction count of item $x_{i,j}$ within the future window $[\tau, \tau + \Delta]$, and $\{p_1^*, \ldots, p_K^*\}$ represents the ground-truth top-$K$ items ranked by their interaction counts. This formulation corresponds to the Mass@K metric, measuring the proportion of future interaction volume captured by the recommendation list.
Interpretation of $f_{\mathrm{pred}}$. The temporal prediction score $f_{\mathrm{pred}}$ measures the interaction mass capture ratio: the fraction of total future user activity (within window $[\tau, \tau + \Delta]$) that falls on the recommended items. The denominator $\sum_{k=1}^{K} c(p_k^*, \tau)$ represents the maximum achievable interaction volume for an oracle that knows the true top-$K$ items; thus, $f_{\mathrm{pred}} \in [0, 1]$. A value of 1.0 indicates that the recommendation list perfectly captures the most popular items in the immediate future. This metric directly evaluates whether the search strategy can identify items aligned with upcoming user behavior, which is the central hypothesis of our temporal integration approach.
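The Mass@K computation in Equation (19) can be sketched as follows, assuming a mapping from items to their interaction counts in the future window [τ, τ+Δ]:

```python
def f_pred(recommended, future_counts, K):
    """Mass@K sketch (Equation (19)): fraction of future interaction
    volume captured, relative to an oracle top-K list.

    future_counts maps item -> interaction count in [tau, tau + delta];
    items absent from the mapping contribute zero mass.
    """
    captured = sum(future_counts.get(p, 0) for p in recommended[:K])
    # Oracle denominator: the K largest future interaction counts.
    oracle = sum(sorted(future_counts.values(), reverse=True)[:K])
    return captured / oracle if oracle > 0 else 0.0
```

Recommending the true top-K items yields 1.0; a list that misses all future activity yields 0.0, so the score is directly comparable across time windows of differing total volume.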
Critically, all algorithms in our experiments share this identical unified fitness function f hybrid , ensuring that performance differences arise solely from each algorithm’s search strategy. This design validates our central hypothesis: effective temporal recommendation requires temporally aware search strategies that guide optimization toward items aligned with future user behavior, not merely appropriate evaluation metrics.

4.3.4. Complete Integration Workflow

The complete SWWP-EEDPSO integration workflow orchestrates these components into a cohesive optimization process, as shown in Algorithm 3.
Algorithm 3 SWWP-EEDPSO Integrated Framework
Require: Current time τ, swarm size N, maximum iterations T_max
Ensure: Optimized recommendation list L*
 1: (L_SWWP, H(τ)) ← generate SWWP recommendations using Equations (12) and (13)
 2: Initialize swarm S with N particles using Algorithm 2
 3: X_g ← best solution in S based on fitness evaluation
 4: for t = 1 to T_max do
 5:     for all particles i in S do
 6:         Update velocity v_i using Equation (1) (Section 3.2)
 7:         Generate neighbor solution respecting temporal structure
 8:         Evaluate fitness using Equation (18)
 9:         if new fitness improves personal best then
10:             Update personal best X_{p,i}
11:         if new fitness improves global best then
12:             Update global best X_g
13:     if convergence criteria met then
14:         break
15: return X_g as optimized recommendation list L*
This deep integration mechanism ensures that SWWP’s temporal insights are not merely used as a preprocessing step but are woven throughout the optimization process.

4.3.5. Approximation Quality and Complexity Analysis

We conclude the integration mechanism with analysis of approximation quality and computational complexity.
Remark on Approximation Quality. While the classic $(1 - 1/e)$ approximation bound applies to monotone submodular set-function maximization via greedy algorithms, our framework optimizes a weighted combination of set-based and position-dependent objectives using EEDPSO. The position-dependent components (e.g., rank-weighted popularity) generally break submodularity. Therefore, we do not claim theoretical approximation guarantees but rely on empirical validation to demonstrate the effectiveness of our metaheuristic approach.
Complexity Analysis. The time complexity of SWWP-EEDPSO is:
$O\!\left(T_{\max} \cdot N \cdot \left(K^2 + |P(\tau)|\right)\right)$
where T max is the maximum iterations, N is the swarm size, K is the recommendation list size, and | P ( τ ) | is the size of the temporally relevant item subset.
The space complexity is:
$O\!\left(N \cdot K + |M|\right)$
where | M | is the size of the SWWP cache structure.
These analyses establish SWWP-EEDPSO not as a simple hybrid of existing techniques, but as a principled optimization framework with well-characterized computational properties for non-personalized temporal recommendation under extreme sparsity.

5. Experiments and Analysis

This section presents experiments on the Amazon Reviews Data (2018) under extreme sparsity (density < 0.0005%). We first compare SWWP against nine temporal baselines (Section 5.2), then evaluate the full SWWP-EEDPSO framework against evolutionary baselines (Section 5.3), and finally conduct a systematic ablation study to isolate each component’s contribution (Section 5.4).

5.1. Experimental Setup

5.1.1. Dataset

Our framework targets top-K recommendation under sparse user–item interactions. We evaluate on several categories from real-world Amazon Reviews Data (2018): AMAZON FASHION, Appliances, Prime Pantry, Software, All Beauty, and Magazine Subscriptions. The subset contains | I | = 283,932 valid items, M = 2,879,497 valid interactions, and | U | = 2,093,706 valid users.
Amazon Reviews Data (2018) contains both interaction records and rating scores. Following standard notation in recommender systems, let $U = \{u_1, \ldots, u_{|U|}\}$ be the set of users and $P = \{p_1, \ldots, p_{|P|}\}$ be the set of items. Each interaction is represented as a triplet $(u, p, r_{up})$, where user $u \in U$ assigns a rating $r_{up} \in [1, 5]$ to item $p \in P$. For sparsity analysis, we construct a binary interaction matrix $B \in \{0, 1\}^{|U| \times |P|}$, where $B_{up} = 1$ if the triplet $(u, p, r_{up})$ exists. Consequently, the sparsity metrics are computed from $B$, while the rating values $r_{up}$ serve as input to the optimization objective; specifically, they correspond to the raw rating term used in the Bayesian-adjusted popularity component ($f_{\mathrm{pop}}$) of the fitness function in Equation (2) (detailed in [5]).
Let $\mathrm{nnz}$ denote the number of nonzero entries (i.e., the number of unique $(u, i)$ pairs). By construction, $\mathrm{nnz} \leq M$.
Density and sparsity. The matrix has | U | | I | = 594,470,131,992 total entries. An upper bound on the density is
$\rho = \dfrac{\mathrm{nnz}}{|U|\,|I|} \leq \dfrac{M}{|U|\,|I|} = 4.84 \times 10^{-6} \approx 0.000484\%,$
where | U | = 2,093,706 , | I | = 283,932 , M = 2,879,497 , and nnz is the number of unique ( u , i ) pairs. The corresponding sparsity is
$1 - \rho \geq 99.999516\%.$
Bipartite (degree) view. Viewing B as a bipartite graph with users and items as partitions and M edges, the average degrees are
$\bar{k}_I = \dfrac{M}{|I|} \approx 10.14,$
$\bar{k}_U = \dfrac{M}{|U|} \approx 1.38.$
Both are orders of magnitude smaller than the sizes of their respective partitions, reflecting long-tail usage and cold-start behavior.
Storage implications. If the interaction matrix were stored densely in float32, the memory requirement would be
$\text{dense (float32)}: \quad 4\,|U|\,|I| \approx 2.16\ \text{TiB}.$
Using a sparse CSR/COO-like format with 32-bit row_ptr, col_idx, and val, the memory is approximated by
$\text{sparse (32-bit)}: \quad 4(|U|+1) + 12\,\mathrm{nnz} \leq 4(|U|+1) + 12M \approx 41\ \text{MiB}.$
Even in the upper-bound case where no $(u, i)$ pair repeats (i.e., $\mathrm{nnz} = M$), the density remains far below common "sparse matrix" heuristics (e.g., $< 1\%$). The induced recommendation space is therefore highly sparse: this is the central challenge our method targets and the regime under which we evaluate.
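The density and storage arithmetic above can be reproduced directly (all figures are upper bounds, since repeated (u, i) pairs may make nnz < M):

```python
# Reproducing the density and storage estimates from the text.
U, I, M = 2_093_706, 283_932, 2_879_497      # users, items, interactions

density = M / (U * I)                        # upper bound on rho
dense_bytes = 4 * U * I                      # float32 dense matrix
sparse_bytes = 4 * (U + 1) + 12 * M          # 32-bit CSR estimate from the text:
                                             # row_ptr + 12 bytes per stored entry

print(f"density <= {density:.2e}")                   # -> 4.84e-06
print(f"dense   ~ {dense_bytes / 2**40:.2f} TiB")    # -> 2.16 TiB
print(f"sparse  ~ {sparse_bytes / 2**20:.1f} MiB")   # -> 40.9 MiB
```

The roughly five-orders-of-magnitude gap between the dense and sparse footprints is what makes the precomputation-and-cache strategy of Algorithm 1 feasible on a single machine.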

5.1.2. Experimental Protocol

Temporal evaluation strategy: We employ a sliding window evaluation with strict temporal separation to prevent data leakage:
  • Evaluation advancement: The evaluation point advances by 1 day at each step.
  • Short-term data (for W S ): The most recent Δ S = 14 days of interactions before the evaluation time τ , used for trend score computation (Equation (10)).
  • Long-term data (for W L ): All available historical interactions prior to W S ( τ ) , up to one year, used for periodic score computation (Equation (11)) with temporal feature matching.
  • Test data: Interactions on the evaluation day (next 24 h).
  • Cache update: Precomputed caches (Algorithm 1) are rebuilt before each evaluation using the dual-scale historical data described above.
  • Leakage prevention: The system timestamp is set to τ during prediction; only interactions with t < τ are accessible, ensuring no future information leakage.
Each algorithm generates predictions for the next 24 h based solely on historical data preceding the evaluation time τ . Specifically, the Short-Term Trend Window W S aggregates interactions from the most recent 14 days, while the Long-Term Periodic Window W L draws from all available history (up to one year) with temporal feature matching (Section 4.2.1). No future information is accessible during prediction, guaranteeing temporal integrity.
Addressing potential future information leakage. We emphasize that the temporal prediction component f pred (Equation (19)) is used exclusively for fitness evaluation during the optimization benchmark (Section 4.3), where the goal is to assess whether different search strategies can discover temporally relevant items under identical evaluation criteria. It is not used during the SWWP temporal modeling phase (Section 4.2), which relies solely on historical data within the training window. Specifically, (1) the SWWP candidate pool generation (Algorithm 1) uses only historical interactions: the short-term window W S covers [ τ Δ S , τ ] (14 days) and the long-term window W L covers all available history prior to W S with temporal feature matching; (2) the purchase heat indicator H ( τ ) is derived from historical transaction volume patterns; and (3) particle initialization and position updates reference only the precomputed SWWP cache. The future interaction data in f pred serves the same role as ground-truth labels in supervised learning evaluation; it measures predictive quality without influencing model parameters. All competing algorithms (EEDPSO, DE, GA) share this identical fitness function, ensuring that any performance differences arise from search strategy rather than information advantage.
Implementation details: All experiments were conducted on a single Intel Xeon Gold 6230 CPU core with 32 GB of RAM allocated. No GPU acceleration was used, ensuring a fair comparison across methods. The anomalous ExpSmoothing latency is discussed in the implementation note below.
Latency Measurement Protocol: All latency measurements follow a standardized protocol to ensure fairness and reproducibility:
  • Measurement scope: Time from query initiation to final ranked list output, including candidate generation and ranking but excluding data loading.
  • Warm-up period: 100 calls for JIT compilation and cache warming before measurement.
  • Sample size: 1000 recommendation calls with randomized query times.
  • Timer precision: Python 3.10’s time.perf_counter() with nanosecond resolution.
  • Statistical reporting: Mean latency with 95% confidence intervals (not shown in table for brevity, but all intervals were within ±5% of mean).
The sub-millisecond latencies for SWWP (0.52 ms) and popularity-based methods (0.08 ms) are achieved through aggressive caching of precomputed scores (Algorithm 1). These measurements represent query time performance after preprocessing, consistent with production deployment scenarios where offline computation is standard practice.
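The latency protocol above can be sketched as follows; `recommend` and `queries` stand in for the system under test and are assumptions of this illustration:

```python
import statistics
import time

def measure_latency(recommend, queries, warmup=100, samples=1000):
    """Sketch of the latency protocol: warm-up calls first, then timed
    calls using time.perf_counter(); returns mean latency in ms and a
    normal-approximation 95% confidence-interval half-width."""
    for q in queries[:warmup]:
        recommend(q)                                   # JIT / cache warming
    times_ms = []
    for q in queries[warmup:warmup + samples]:
        t0 = time.perf_counter()
        recommend(q)                                   # query -> ranked list
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    mean = statistics.fmean(times_ms)
    ci95 = 1.96 * statistics.stdev(times_ms) / len(times_ms) ** 0.5
    return mean, ci95
```

Randomizing the query times (as in the protocol) prevents the measurement from favoring any one temporal context, and reporting the confidence interval exposes warm-cache variance.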
ExpSmoothing Implementation Note: The anomalous latency for ExpSmoothing (144,365 ms) results from the statsmodels implementation fitting a separate exponential smoothing model for each of the 283,932 items without vectorization. This approach is fundamentally unsuitable for high-dimensional item spaces. We include it for completeness, but note that production-ready implementations would require algorithmic redesign (e.g., clustering items or sharing parameters across items) rather than per-item models.
Statistical significance: Results are averaged over 30 evaluation windows. Due to the deterministic nature of the temporal popularity methods and the fixed dataset, variance primarily stems from temporal distribution shifts rather than algorithmic randomness.

5.1.3. Setup of Sequential Model Comparison

This study benchmarks nine time-series recommendation algorithms to assess their performance under extreme data sparsity:
  • RW (Random-Weighted): A weighted random baseline where items are sampled proportionally to their historical popularity. This serves as a lower bound following the evaluation framework of Cremonesi et al. [4] for top-N recommendation tasks.
  • TAP (Time-Agnostic Popularity): Global popularity recommendation that ignores temporal dynamics. This implements the most basic collaborative filtering baseline as established in the top-N evaluation framework [4].
  • TDP (Temporal-Decay Popularity): An extension of time-aware collaborative filtering with exponential decay, inspired by Koren’s temporal dynamics framework [7]. Applies decay factor e λ t to down-weight older interactions.
  • CP–Hour (Conditional Popularity—Hourly): Extends the temporal binning approach from Koren [7] by partitioning time into segments defined by month × day-of-week × 4-h blocks, maintaining separate popularity distributions per segment.
  • CP–Week (Conditional Popularity—Weekly): A finer-grained variant modeling periodic patterns at 15 min resolution over the week cycle, extending context-aware splitting methods [23] to temporal dimensions.
  • FS (Fourier-Seasonal): Seasonal regression using Fourier basis functions with five harmonics for daily and weekly patterns, following the harmonic regression framework in Bayesian forecasting [24].
  • HW (Holt–Winters): The classical exponential smoothing method [25] that jointly models level, trend, and multiplicative seasonality components for time-series forecasting.
  • STL-AR: Combines STL (seasonal and trend decomposition using Loess) [26] with AR(5) autoregression on residuals, leveraging robust local regression for seasonal extraction.
  • STL-GBM: Enhances STL decomposition [26] by replacing linear AR with gradient boosting machines to capture nonlinear patterns in the residual component.
We evaluate all algorithms using a comprehensive set of metrics following established recommendation evaluation protocols [4,5]:
  • Ranking quality: NDCG@K, MRR@K, MAP@K—emphasizing top-position accuracy, which is critical for user-facing recommendation.
  • Set-based accuracy: Precision@K, Recall@K—measuring overlap between recommended and actually purchased items within each temporal window.
  • Rank correlation: Spearman’s ρ and Kendall’s τ —assessing agreement with ground-truth item ordering.
  • Pointwise error: MAE and RMSE—evaluating intensity prediction accuracy.
  • Beyond-accuracy: Coverage (catalog spread), Intra-List Diversity (ILD), and novelty (popularity-adjusted surprise).
  • Composite score: A weighted sum of per-metric normalized values for overall ranking.
Standard definitions of these metrics are provided in [5]; we adopt identical formulations to ensure cross-study comparability. Since SWWP generates global temporal popularity rankings rather than personalized lists, user-centric metrics (MRR, MAP) measure how well a single global ranking serves diverse user needs, following established protocol for non-personalized systems [4].
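As a concrete reference point, a minimal binary-relevance NDCG@K in the standard formulation is sketched below (the paper adopts the exact definitions of [5], which this sketch is not guaranteed to match in every detail):

```python
import math

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance NDCG@K sketch: discounted gain of hits in the
    top-k list, normalized by the ideal ordering's gain."""
    dcg = sum(1.0 / math.log2(i + 2)                    # rank i (0-based) -> 1/log2(i+2)
              for i, p in enumerate(recommended[:k]) if p in relevant)
    ideal = sum(1.0 / math.log2(i + 2)                  # all hits at the top
                for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0
```

Under this formulation, placing the single relevant item first yields 1.0, while demoting it to rank 2 yields 1/log2(3) ≈ 0.63, which is why NDCG emphasizes top-position accuracy.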
Algorithm Naming Consistency. For clarity, we use the following mapping between conceptual names and implementation labels: TAP → GlobalPop, TDP → GlobalPop–Decay, CP–Hour → POP–Segment, CP–Week → POP–minuteOfWeek, FS → Fourier, HW → ExpSmoothing.
Leveraging the above range of model families and multi-dimensional analyses, we conduct a comprehensive evaluation of SWWP’s performance and characteristics in time-series recommender systems.

5.1.4. Setup of Full-Framework Benchmark and Ablation Studies

Beyond the analysis of time-series recommenders, we evaluate the hybrid framework that integrates SWWP with EEDPSO and conduct ablation studies. Critically, all algorithms share the identical unified fitness function f hybrid = f total + γ · f pred (Equation (18)), where f pred is computed based on ground-truth future interactions within a 6 h temporal window ( Δ = 6 h). The prediction weight is set to γ = 10.0 , with f pred scaled by a factor of 100 to balance its contribution against f total . This design ensures that performance differences arise solely from each algorithm’s search strategy rather than from different optimization objectives. We benchmark against the strongest previously reported baselines—EEDPSO, DE, and GA—to evaluate whether SWWP-EEDPSO’s temporally informed search strategy provides meaningful improvements in discovering solutions with high temporal prediction accuracy.
The selection of comparison algorithms is motivated by their established positions in the metaheuristic optimization landscape. Differential Evolution (DE) and Genetic Algorithms (GAs) represent two fundamental paradigms in evolutionary computation that have demonstrated consistent performance across diverse optimization problems. DE, introduced by Storn and Price, excels at continuous optimization through its unique differential mutation operator, while GA, pioneered by Holland, provides robust exploration through crossover and mutation operations. In the original EEDPSO research, both DE and GA showed stable performance with competitive fitness values, making them ideal benchmarks for evaluating the impact of temporal integration. The inclusion of vanilla EEDPSO serves as an ablation baseline, enabling us to quantify the precise fitness trade-off introduced by SWWP integration.
Table 2 summarizes the configurations used across all algorithms. Our setup is anchored to the detailed hyperparameter analysis reported for EEDPSO [5]: the original study conducts Optuna-based optimization over 100 trials, using a Bayesian sampler with pruning; selects the best-performing combination; and applies the resulting settings according to dataset size. We adopt these Optuna-identified parameters for the PSO family and keep identical swarm coefficients for EEDPSO and SWWP-EEDPSO to isolate the contribution of SWWP (i.e., c 1 = 1.26 , c 2 = 0.74 , ω = 0.55 ). Table 2 reflects these unified coefficients for both PSO-based algorithms. For DE and GA, we follow standard discrete optimization practice and align population sizes and iteration budgets with the PSO configurations so that all methods operate under comparable compute budgets. This design ties our configurations directly to a published, search-based protocol and strengthens the credibility and reproducibility of our comparisons.
Beyond tables, we analyze using convergence plots and temporal performance charts to comprehensively assess the performance and characteristics of the SWWP-EEDPSO hybrid framework. In this module, our primary focus is the change in fitness and the stability of performance across different temporal contexts.

5.2. Result of Sequential Model Comparison

5.2.1. Recommendation Accuracy Analysis

Table 3 presents comprehensive performance metrics across ten temporal recommendation algorithms under extreme sparsity conditions. SWWP achieves the highest NDCG@20 score of 0.245, representing a 12.9% improvement over the second-best performer GlobalPop–Decay (0.217) and more than doubling the performance of traditional methods like GlobalPop (0.120). This superiority extends across multiple accuracy metrics, with SWWP achieving Precision@20 of 0.340 and Recall@20 of 0.155, the highest among all evaluated methods.
Figure 2 visualizes the NDCG@20 distribution through a horizontal bar chart, revealing a clear performance hierarchy. Three distinct tiers emerge: (1) a high-performance tier led by SWWP (0.245) and GlobalPop–Decay (0.217); (2) a middle tier including POP–Segment (0.191) and ExpSmoothing (0.194); and (3) a lower tier comprising complex forecasting methods like STL-AR (0.112) and STL-GBM (0.087). The random baseline achieves only 0.014, confirming that all temporal methods provide substantial value over chance performance.
The precision–recall scatter plot in Figure 3 further illustrates algorithm clustering in the accuracy space. SWWP occupies the optimal position in the upper-right quadrant, with the highest precision (0.340) and recall (0.155) values. A notable finding is the formation of three performance clusters: the high-performance cluster (SWWP, POP–Segment) with precision > 0.30; the moderate cluster (GlobalPop–Decay, ExpSmoothing) with precision ≈ 0.21; and the low-performance cluster (Random, POP–minuteOfWeek) with precision < 0.12. This clustering suggests fundamental differences in how algorithms handle temporal patterns under extreme sparsity.

5.2.2. Diversity and Coverage Trade-Offs

Figure 4 examines the relationship between coverage and diversity across algorithms. Despite the extreme sparsity (density < 0.0005%), meaningful differences emerge in how algorithms balance these objectives. SWWP achieves coverage of 0.0011 while maintaining moderate diversity (0.077), representing the best trade-off among temporal methods. In contrast, the Random baseline achieves the highest diversity (0.260) but with coverage of 0.0022, highlighting the exploration-exploitation dilemma.
The coverage analysis reveals an important pattern: popularity-based methods (GlobalPop, GlobalPop–Decay) achieve minimal coverage (0.0001) due to their focus on repeatedly recommending a small set of popular items. Time-segmented approaches (POP–Segment, POP–minuteOfWeek) improve coverage to 0.0006–0.0007 by varying recommendations across temporal contexts. SWWP’s sliding-window approach achieves the best coverage among non-random methods, suggesting that local temporal patterns provide better item discovery than global popularity metrics.

5.2.3. Computational Efficiency

Figure 5 presents the computational latency analysis, revealing a sharp bimodal distribution in algorithm efficiency. The fast group, including SWWP (0.52 ms), GlobalPop (0.08 ms), and GlobalPop–Decay (0.08 ms), maintains sub-millisecond response times suitable for real-time deployment. The slow group exhibits dramatically higher latencies—Fourier (105.39 ms), STL-AR (2196.83 ms), and notably ExpSmoothing (144,365 ms*)—making them impractical for production environments.
SWWP’s 0.52 ms latency represents an optimal balance between sophistication and efficiency. While slightly slower than simple popularity methods, it remains well within acceptable bounds for real-time systems while providing significantly better recommendation quality. The extreme latency of ExpSmoothing (marked with an asterisk in Table 3) suggests implementation or scalability issues under high-dimensional sparse data.

5.2.4. Prediction Error Analysis

Figure 6 displays the RMSE distribution across algorithms. GlobalPop–Decay achieves the lowest RMSE (0.922), followed by Fourier (0.982) and GlobalPop (1.066). SWWP shows moderate error (1.142), while STL-based methods exhibit the highest errors (STL-AR: 1.352, STL-GBM: 1.346). This pattern suggests that simpler time-decay models better capture temporal dynamics under extreme sparsity, where complex models may overfit to noise.
The MAE results in Table 3 corroborate this finding, with GlobalPop–Decay (0.802) and Fourier (0.867) achieving the lowest absolute errors. Interestingly, SWWP’s higher prediction error (MAE: 1.012) does not translate to poor recommendation quality, as evidenced by its superior ranking metrics. This discrepancy indicates that SWWP optimizes for ranking quality rather than pointwise prediction accuracy.
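A toy example (invented numbers, not the paper's data) makes this discrepancy concrete: a predictor can have small pointwise errors yet order items incorrectly, while a predictor with larger errors ranks them perfectly.

```python
import math

# Toy illustration of why low pointwise error need not imply good ranking:
# predictor B has higher RMSE/MAE than A but orders the items correctly.
truth = [3.0, 2.0, 1.0]
pred_a = [2.0, 2.1, 2.2]   # small errors, wrong order
pred_b = [9.0, 6.0, 3.0]   # large errors, correct order

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def same_ranking(y, yhat):
    rank = lambda v: sorted(range(len(v)), key=lambda i: -v[i])
    return rank(y) == rank(yhat)

print(rmse(truth, pred_a) < rmse(truth, pred_b))  # True: A wins on pointwise error
print(same_ranking(truth, pred_a), same_ranking(truth, pred_b))  # False True
```

This is precisely the pattern observed for SWWP: moderate RMSE/MAE alongside strong ranking metrics.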

5.2.5. Ranking Quality Assessment

The ranking correlation metrics in Table 3 reveal interesting patterns. GlobalPop–Decay achieves the highest Spearman correlation (0.408) and Kendall’s tau (0.362), indicating strong agreement with ground-truth rankings. POP–minuteOfWeek also shows reasonable correlation (Spearman: 0.314, Kendall: 0.298). Surprisingly, SWWP exhibits near-zero correlation (Spearman: 0.017, Kendall: 0.010), suggesting that its strength lies in identifying relevant items rather than predicting their exact ordering.
As shown in Figure 7, this apparent weakness in ranking correlation is compensated by SWWP’s exceptional MRR@20 (0.547) and MAP@20 (0.083) scores, both substantially higher than those of all competitors. The MRR result indicates that SWWP excels at placing at least one highly relevant item at the top of its recommendations, which is critical for user satisfaction in practical systems.
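For reference, MRR@K and MAP@K can be computed as below. The normalization of average precision by min(|relevant|, K) is a common convention assumed here, not taken from the paper.

```python
def mrr_at_k(recs, relevant, k=20):
    """Reciprocal rank of the first relevant item within the top-k."""
    for rank, item in enumerate(recs[:k], start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def map_at_k(recs, relevant, k=20):
    """Average precision over the top-k, normalized by min(|relevant|, k)."""
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(recs[:k], start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / min(len(relevant), k) if relevant else 0.0

recs = ["a", "x", "b", "y"]
relevant = {"a", "b"}
print(mrr_at_k(recs, relevant))  # 1.0: the first item is relevant
print(map_at_k(recs, relevant))  # (1/1 + 2/3) / 2 ~= 0.833
```

Because MRR rewards only the first hit, a method that reliably surfaces one strong item per list (as SWWP does) can score highly even when its full ordering correlates weakly with the ground truth.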

5.2.6. Error Distribution Characteristics

Figure 8 presents box plots analyzing error distribution characteristics. GlobalPop and GlobalPop–Decay exhibit the most compact distributions with IQR ≈ 0.5, indicating high prediction stability. SWWP shows moderate variability (IQR ≈ 1.0), balancing between consistency and adaptability to temporal changes. The random baseline displays the largest spread (IQR > 2.0) with numerous outliers, confirming its unsuitability for sparse recommendation scenarios.
The presence of outliers across most methods suggests that extreme sparsity creates challenging edge cases where temporal patterns break down. SWWP’s moderate outlier count indicates robustness to these edge cases while maintaining sensitivity to temporal variations.

5.2.7. Comprehensive Performance Assessment

Figure 9 synthesizes all metrics into composite scores for final ranking. SWWP achieves the highest composite score (0.861), followed by GlobalPop–Decay (0.762) and POP–Segment (0.754). This comprehensive assessment confirms SWWP’s superiority across multiple dimensions despite its weaker ranking correlation. These results address RQ1 by demonstrating that the dual-scale sliding-window model effectively captures both short-term trends and long-term periodicities, achieving the highest accuracy across all ranking metrics under extreme data sparsity.
The composite analysis reveals three key insights. First, temporal awareness is crucial under extreme sparsity—all time-aware methods significantly outperform the time-agnostic GlobalPop baseline. Second, complexity does not guarantee performance—sophisticated methods like STL-GBM (composite: 0.473) underperform simpler temporal approaches. Third, the optimal algorithm (SWWP) successfully balances multiple objectives: high accuracy, reasonable diversity, sub-millisecond latency, and robust performance across temporal contexts.
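The precise weighting behind the composite score is not given in this excerpt; the sketch below shows one standard construction under that caveat: min-max-normalize each metric across algorithms, invert cost-type metrics such as latency, and average with equal (assumed) weights.

```python
# Hedged sketch of a composite score. Equal weights and min-max scaling
# are illustrative assumptions, not the paper's exact aggregation.

def composite_scores(metrics, higher_is_better):
    names = list(next(iter(metrics.values())).keys())
    lo = {m: min(v[m] for v in metrics.values()) for m in names}
    hi = {m: max(v[m] for v in metrics.values()) for m in names}
    out = {}
    for algo, vals in metrics.items():
        parts = []
        for m in names:
            span = hi[m] - lo[m]
            norm = 0.5 if span == 0 else (vals[m] - lo[m]) / span
            parts.append(norm if higher_is_better[m] else 1.0 - norm)  # invert costs
        out[algo] = sum(parts) / len(parts)
    return out

scores = composite_scores(
    {"SWWP":   {"ndcg": 0.245, "latency_ms": 0.52},
     "STL-AR": {"ndcg": 0.112, "latency_ms": 2196.83}},
    higher_is_better={"ndcg": True, "latency_ms": False},
)
print(scores["SWWP"] > scores["STL-AR"])  # True
```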
These results validate our hypothesis that sliding-window temporal modeling with hierarchical fallback strategies effectively addresses the challenges of extreme sparsity while maintaining practical deployment feasibility. The consistent superiority of SWWP across diverse evaluation metrics, combined with its computational efficiency, establishes it as the preferred method for temporal recommendation in severely sparse environments.

5.3. Result of Full-Framework Benchmark and Ablation Studies

In Table 4 and Figure 10, we analyze the convergence behavior of the hybrid framework. The convergence curves reveal distinct optimization patterns across algorithms. SWWP-EEDPSO achieves the highest mean fitness of 3384.26, surpassing vanilla EEDPSO (3194.14) by 5.95%. This improvement demonstrates that temporally informed initialization and guided position updates steer the search toward higher-quality regions of the solution space, where temporal relevance and optimization quality reinforce each other.
The convergence analysis shows three distinct phases: EEDPSO exhibits rapid early convergence (iterations 1–100) with fitness jumping from 1600 to 3400, followed by gradual refinement; SWWP-EEDPSO demonstrates more moderate initial progress due to temporal constraints but maintains steady improvement throughout; and DE and GA show similar convergence patterns with slower initial progress and a plateau around iteration 300. Notably, EEDPSO converges fastest at iteration 380, while SWWP-EEDPSO requires 425 iterations, suggesting that temporal integration increases search complexity.
Figure 11 and Figure 12 illustrate performance across different temporal contexts. The temporal performance comparison reveals that SWWP-EEDPSO exhibits distinct temporal patterns with notable performance peaks. The framework demonstrates significant performance improvements during two critical periods: the evening hours (18:00–23:00) and the lunch period (12:00–14:00). These peaks align perfectly with typical e-commerce activity patterns, where user engagement and purchasing behaviors intensify. The hourly heatmap clearly shows that SWWP-EEDPSO successfully captures and leverages these temporal hotspots, achieving up to 8% higher fitness during peak hours compared to off-peak periods. This temporal sensitivity validates the effectiveness of the SWWP integration in adapting recommendations to match real-world user behavior patterns throughout the day.
The experimental results reveal several key findings: SWWP-EEDPSO achieves a 5.95% fitness improvement over vanilla EEDPSO, demonstrating that temporal guidance enhances rather than constrains the optimization process; the hybrid framework maintains excellent stability (0.984 stability score) despite the added complexity of temporal integration; convergence analysis indicates that temporal integration requires moderately more iterations (425 vs. 380), reflecting the additional search effort needed to balance temporal relevance with optimization quality; and temporal performance analysis confirms that SWWP-EEDPSO successfully identifies and exploits temporal patterns, particularly during peak shopping hours.
Most importantly, despite the added complexity of temporal integration, SWWP-EEDPSO consistently and significantly outperforms both GA and DE algorithms across all metrics. With a mean fitness of 3384.26 compared to DE’s 3020.40 and GA’s 3024.00, SWWP-EEDPSO demonstrates superior optimization capability while maintaining lower variance (std = 55.68) than EEDPSO (std = 85.83) and competitive variance relative to both DE (std = 27.77) and GA (std = 43.13). This superior performance confirms that the SWWP-EEDPSO hybrid framework successfully balances temporal awareness with optimization quality, making it the optimal choice for deployment in time-sensitive recommendation environments. These results address RQ2 by confirming that deep integration of temporal modeling with evolutionary optimization yields significantly better solutions than applying EEDPSO without temporal guidance.

5.4. Ablation Study

To isolate the contribution of each integration mechanism in SWWP-EEDPSO, we conduct a systematic ablation study by disabling one mechanism at a time while keeping all other settings identical. Table 5 reports the results averaged over 21 evaluation windows, with the fitness decomposed into f_total (base recommendation quality) and f_pred (temporal prediction contribution, computed as γ × Mass@K × 100).
Figure 13 visualizes the ablation results. Panel (a) compares Mass@K across variants, showing the temporal prediction quality achieved by each configuration. Panel (b) decomposes the total fitness into its base quality component (f_total) and temporal prediction component (f_pred), revealing the trade-off between static recommendation quality and temporal relevance.
The ablation reveals a clear hierarchy of component importance. SWWP-guided position updates are the most critical mechanism: removing them while retaining SWWP initialization causes Mass@K to drop by 70.3% (from 0.284 to 0.084), indicating that temporal guidance during the evolutionary search process—not merely at initialization—is essential for discovering temporally relevant solutions. The temporal fitness component f_pred is the second most important factor, with its removal leading to a 67.9% Mass@K decline (to 0.091). Without temporal signals in the fitness evaluation, the optimizer has no incentive to favor items that align with future user behavior, even when SWWP guides the search space. SWWP initialization shows a modest independent contribution (a 2.9% Mass@K decline when removed), as its effect is largely subsumed by the SWWP-guided updates over 500 iterations. Removing all SWWP guidance results in an 87.5% Mass@K decline (to 0.036), confirming that the integration mechanisms collectively drive temporal prediction quality.
An important insight emerges from the fitness decomposition. Variants without SWWP guidance achieve higher base fitness f_total (e.g., 3792.6 for “w/o All SWWP” vs. 3208.2 for the full model). This occurs because an unconstrained search over the full item catalog (|P| = 283,932) can more freely optimize diversity and coverage. However, the SWWP-constrained search achieves an 8× improvement in temporal prediction quality (Mass@K: 0.284 vs. 0.036). This trade-off is fundamental: by directing the evolutionary search toward temporally relevant regions of the solution space, SWWP-EEDPSO sacrifices some static recommendation quality in exchange for substantially better alignment with future user behavior—precisely the design goal of a temporal recommender system.
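Plugging the reported numbers into the stated decomposition makes the trade-off concrete. The sketch follows the in-text definition, in which γ is already absorbed into f_pred (f_pred = γ × Mass@K × 100, with the default γ = 10.0 from Table 1), so the hybrid fitness is f_total + f_pred.

```python
# Worked decomposition using the values reported for the full model and the
# "w/o All SWWP" ablation. Assumes f_pred = gamma * Mass@K * 100 with gamma
# already absorbed, so hybrid fitness = f_total + f_pred.
gamma = 10.0
mass_at_k = 0.284   # full SWWP-EEDPSO model
f_total = 3208.2    # base recommendation quality of the full model

f_pred = gamma * mass_at_k * 100
f_hybrid = f_total + f_pred
print(round(f_pred, 1), round(f_hybrid, 1))  # 284.0 3492.2

# Ablated "w/o All SWWP": higher base fitness, but far lower f_pred.
f_pred_ablated = gamma * 0.036 * 100
print(round(3792.6 + f_pred_ablated, 1))  # 3828.6
```

The unconstrained variant's large base-fitness advantage (3792.6 vs. 3208.2) is thus bought with an eight-fold loss of temporal prediction contribution.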
These results address RQ3 by demonstrating that (1) SWWP-guided updates and temporal fitness are jointly critical for temporal prediction, (2) the purchase heat initialization provides a complementary but secondary benefit, and (3) the deep integration achieves a meaningful trade-off between base recommendation quality and temporal relevance that would be impossible with either component alone.

6. Conclusions

This paper presented a hybrid non-personalized temporal recommendation framework that successfully addresses the dual challenges of extreme data sparsity and temporal dynamics in popularity-based recommendation scenarios. Through the deep integration of Sliding-Window Weighted Popularity (SWWP) with Elite Evolutionary Discrete Particle Swarm Optimization (EEDPSO), we demonstrated that temporal awareness and optimization quality need not be mutually exclusive goals. From a theoretical perspective, we established that the temporal-constrained recommendation problem is NP-hard through reduction from the Maximum Coverage Problem. This complexity result not only justifies the use of metaheuristic approaches but also highlights the fundamental computational challenges in balancing temporal relevance with optimization quality. The proof shows that, unless P = NP, no polynomial-time algorithm can guarantee optimal temporal recommendations even under simplified objectives, motivating the metaheuristic approach adopted in this work.
Our SWWP model introduces several key innovations for temporal recommendation. By incorporating multi-dimensional temporal features—time segments, weekdays, and months—alongside exponential decay mechanisms, SWWP captures complex temporal patterns that traditional popularity-based methods overlook. The hierarchical fallback strategy ensures robustness even when specific temporal combinations lack sufficient data, progressively relaxing constraints from full feature matching to global popularity. Most notably, the purchase heat indicator H(τ) quantifies temporal activity levels, providing a principled mechanism for balancing temporal and exploratory elements in the recommendation process.
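The hierarchical fallback can be pictured as an ordered sequence of table lookups. Only the general strategy (a full temporal match degrading step by step to global popularity) comes from the text; the specific relaxation order, the minimum-support threshold, and all names below are illustrative assumptions.

```python
# Hypothetical sketch of the hierarchical fallback: try the full
# (segment, weekday, month) context, then progressively relax features,
# ending at global popularity. Relaxation order and min_support are assumed.
def swwp_lookup(tables, segment, weekday, month, min_support=5):
    for key in [(segment, weekday, month),   # full temporal match
                (segment, weekday),          # drop month
                (segment,),                  # drop weekday
                ()]:                         # global popularity fallback
        scores = tables.get(key, {})
        if len(scores) >= min_support:       # enough data at this level?
            return sorted(scores, key=scores.get, reverse=True)
    return []

tables = {
    (): {f"g{i}": float(i) for i in range(10)},  # global popularity, always present
    ("evening",): {"a": 2.0, "b": 7.0, "c": 1.0, "d": 4.0, "e": 3.0},
}
# No data for ("evening", "fri", "dec") or ("evening", "fri"):
# the lookup falls back to the ("evening",) table.
print(swwp_lookup(tables, "evening", "fri", "dec")[:3])  # ['b', 'd', 'e']
```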
The deep integration between SWWP and EEDPSO represents a significant advancement in hybrid recommendation architectures. Rather than treating temporal modeling and optimization as separate stages, our framework weaves temporal insights throughout the optimization process. The purchase heat indicator guides particle initialization, determining the proportion of temporally popular items (up to 80% during peak periods). Differentiated position updates maintain temporal relevance for SWWP-originated positions while allowing exploration elsewhere. This multi-level integration achieves what neither component could accomplish alone: temporally aware recommendations with strong optimization quality.
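A minimal sketch of heat-guided initialization follows, assuming a simple capped mapping from H(τ) to the temporally popular fraction. Only the 80% ceiling during peak periods is stated in the text; the mapping itself and all function and parameter names are hypothetical.

```python
import random

# Hedged sketch: the purchase heat H(tau) sets the fraction of each particle
# drawn from SWWP's temporally popular items (capped at 80%, per the text),
# with the remainder sampled at random for exploration. The linear mapping
# from heat to fraction is an illustrative assumption.
def init_particle(swwp_top, catalog, k, heat, rng):
    frac = min(0.8, heat)                 # up to 80% during peak periods
    n_temporal = int(round(frac * k))
    temporal = swwp_top[:n_temporal]      # top temporally popular items
    chosen = set(temporal)
    remainder = [p for p in catalog if p not in chosen]
    return temporal + rng.sample(remainder, k - n_temporal)  # exploration part

rng = random.Random(42)
particle = init_particle(list(range(100)), list(range(1000)),
                         k=20, heat=0.95, rng=rng)
print(len(particle), particle[:16] == list(range(16)))  # 20 True
```

During off-peak windows a lower H(τ) would shrink the SWWP-seeded portion, shifting the swarm toward exploration.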
Experimental results on Amazon Reviews Data (2018) validate our approach under extreme sparsity conditions (density < 0.0005%). SWWP achieved NDCG@20 = 0.245, outperforming nine temporal baselines, including sophisticated methods like STL-AR (0.112) and Global Popularity–Decay (0.217). The 13% improvement over the strongest baseline demonstrates that our sliding-window approach with conditional filtering effectively captures temporal dynamics. Equally important, SWWP maintains sub-millisecond query latency (0.52 ms) after offline precomputation, making it viable for deployment in production systems that employ periodic cache refresh—a standard architectural pattern in industrial recommender systems.
The SWWP-EEDPSO hybrid framework reveals important insights about the role of search strategy in temporal recommendation optimization. Under a unified fitness formulation that incorporates temporal prediction accuracy, all algorithms optimize identical objectives, yet SWWP-EEDPSO achieves 5.95% higher mean fitness (3384.26 vs. 3194.14) compared to vanilla EEDPSO. More importantly, SWWP-EEDPSO demonstrates substantially superior temporal prediction performance, successfully identifying items that align with actual future user interactions. In contrast, baseline algorithms (EEDPSO, DE, GA) achieve near-random temporal prediction accuracy, confirming that without temporal guidance in the search process, optimization algorithms cannot discover temporally relevant recommendations even when explicitly evaluated on temporal metrics. SWWP-EEDPSO also exhibits lower variance (std = 55.68 vs. 85.83 for EEDPSO), indicating more stable optimization behavior. The temporal performance analysis revealed pronounced peaks during lunch hours (12:00–14:00) and evening periods (18:00–23:00), achieving up to 8% higher fitness during these high-activity windows.
Our findings have several practical implications for deploying recommendation systems in resource-constrained environments. First, the success of SWWP demonstrates that lightweight temporal methods can outperform complex forecasting models under extreme sparsity, where sophisticated approaches like exponential smoothing and STL decomposition struggle due to insufficient training data. Second, the purchase heat indicator provides an interpretable mechanism for system operators to understand and control the balance between temporal relevance and exploration. Third, the hierarchical caching strategy enables millisecond-level response times while maintaining temporal sophistication, which is crucial for user experience in production systems.
This work also contributes to the broader understanding of hybrid recommendation architectures. The deep integration mechanism we developed—spanning initialization, evolution, and evaluation—provides a template for combining different optimization paradigms. The key insight is that effective hybridization requires more than sequential combination or weighted averaging; it demands algorithm-level integration where each component’s strengths guide the other’s operation. The purchase heat indicator exemplifies this principle, serving as a bridge that allows temporal patterns to influence swarm dynamics without overwhelming the optimization process.
Our framework has several limitations that should be acknowledged. First, the non-personalized design inherently cannot capture individual user preferences; it is most effective as a trending/candidate generation component rather than a standalone personalized recommender. Extending the purchase heat indicator to incorporate user-specific temporal patterns is a natural but non-trivial extension. Second, the discrete heat factors (Equations (14)–(16)) were derived from the Amazon Reviews dataset and may require recalibration for domains with different temporal activity profiles (e.g., news, entertainment). Third, while our experiments demonstrate effectiveness under extreme sparsity (ρ < 0.0005%), the relative advantage of SWWP over personalized methods may diminish in denser datasets where user-level models have sufficient training signal. Fourth, the current framework processes temporal features at fixed granularities; adaptive window sizing that responds to local data density could further improve robustness. Fifth, scalability to datasets with significantly more items (>10 M) has not been evaluated, and the precomputation cost would scale linearly with catalog size. These limitations define clear boundaries for the applicability of our approach and motivate the future research directions discussed below.
Looking forward, several avenues warrant further investigation. First, extending the temporal modeling to capture user-specific temporal patterns could improve personalization while maintaining computational efficiency. Second, investigating adaptive window sizes that respond to data density could enhance performance across different sparsity levels. Third, incorporating real-time feedback to dynamically adjust the purchase heat calculation could improve responsiveness to sudden shifts in user behavior. Finally, exploring the framework’s performance on other domains with strong temporal characteristics, such as news recommendation or seasonal product promotion, would validate its generalizability.
In conclusion, the SWWP-EEDPSO framework demonstrates that careful integration of temporal modeling with evolutionary optimization can yield superior recommendation quality even under extreme sparsity. By introducing the purchase heat indicator and implementing deep integration mechanisms, we achieved a system that balances temporal relevance, optimization quality, and computational efficiency. As recommender systems continue to face challenges from growing catalogs and sparse interactions, approaches that elegantly combine multiple optimization paradigms will become increasingly valuable for delivering timely, relevant recommendations at scale.

Author Contributions

Conceptualization, S.L. and H.Y.; methodology, S.L.; software, S.L.; validation, S.L., Y.N. and H.Y.; formal analysis, S.L.; investigation, S.L.; resources, Y.N.; data curation, S.L.; writing—original draft preparation, S.L.; writing—review and editing, Y.N. and H.Y.; visualization, S.L.; supervision, Y.N. and H.Y.; project administration, Y.N.; funding acquisition, Y.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant 25K00139, and the Tokushima University Tenure-Track Faculty Development Support System, Tokushima University, Japan.

Data Availability Statement

The source code, dataset preprocessing scripts, and experimental configuration files used to support the findings of this study are available at https://github.com/UtiGoose/SWWP-EEDPSO (accessed on 1 July 2025). The repository includes instructions for reproducing all experiments, including the temporal train–test split protocol and hyperparameter settings described in Section 5.1.

Acknowledgments

The authors would also like to acknowledge the collaborative research support provided by Wesoft Company Ltd., Japan.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. SWWP-EEDPSO overall framework. Solid arrows indicate the data flow direction between components; dashed arrows represent feedback connections. Blue modules correspond to temporal modeling, green modules to evolutionary optimization, and orange modules to the hybrid integration mechanism.
Figure 2. NDCG@20 performance distribution across temporal recommendation algorithms. Each bar color corresponds to a distinct algorithm.
Figure 3. Precision–recall trade-off analysis revealing algorithm clustering in accuracy space. Each algorithm is represented by a unique color and marker shape for clear identification.
Figure 4. Coverage–diversity trade-off analysis under extreme sparsity conditions. Each algorithm is represented by a unique color and marker shape for clear identification.
Figure 5. Computational latency distribution showing bimodal algorithm efficiency. The horizontal axis uses a logarithmic scale; negative exponents use the Unicode minus sign (−).
Figure 6. Root mean squared error analysis for temporal prediction accuracy.
Figure 7. Mean reciprocal rank and mean average precision comparison for early-hit quality.
Figure 8. Error distribution characteristics revealing prediction stability patterns. Each box represents the interquartile range (IQR) of errors; whiskers extend to 1.5 × IQR. Dots beyond the whiskers represent outliers. Box colors correspond to the respective algorithm colors used throughout this paper.
Figure 9. Comprehensive performance assessment via composite score analysis.
Figure 10. Convergence behavior comparison across optimization algorithms.
Figure 11. Temporal performance patterns during peak activity periods. The orange dashed line represents the overall mean fitness; the blue dashed line indicates the SWWP-EEDPSO mean; the red dashed line marks the EEDPSO baseline mean.
Figure 12. Hourly performance heatmap revealing temporal hotspots. Cell values indicate average NDCG@20 scores; text annotations use black font for readability across all color intensities.
Figure 13. Ablation study results: (a) Mass@K comparison across variants showing temporal prediction quality; (b) fitness decomposition revealing the trade-off between base quality ( f total ) and temporal prediction ( f pred ). In panel (a), the orange dashed line indicates the full model performance, the blue dashed line marks the median across variants, and the red dashed line shows the baseline without any SWWP guidance.
Table 1. Summary of principal notation.

| Symbol | Description |
|---|---|
| U, P, T | Sets of users, items, and timestamps |
| K | Recommendation list size |
| N_cand | Candidate pool size (default 2000) |
| τ | Evaluation (query) time |
| W_S(τ), W_L(τ) | Short-term and long-term sliding windows |
| Δ_S, Δ_L | Window durations (14 days; up to 1 year) |
| ψ_s(t), ψ_d(t), ψ_m(t) | Time-segment, weekday, and month features |
| S_trend, S_period | Trend and periodic popularity scores |
| λ_fast, λ_slow | Decay factors for short-/long-term windows |
| α | Trend–periodicity fusion weight (default 0.7) |
| Pop_SWWP(p, τ) | Integrated SWWP popularity score |
| H(τ) | Purchase heat indicator |
| B_s, F_d, F_m | Base heat, weekday factor, and monthly factor |
| X_i(t) | Position (recommendation list) of particle i |
| v_i(t) | Velocity of particle i |
| ω, c_1, c_2 | Inertia weight, cognitive/social coefficients |
| f_total | Base EEDPSO fitness function |
| f_pred | Temporal prediction component |
| f_hybrid | Unified hybrid fitness (f_total + γ·f_pred) |
| γ | Temporal prediction weight (default 10.0) |
| ρ | Interaction matrix density |
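As a concrete reading of this notation, the SWWP score can be sketched as an exponentially decayed interaction count over each window, fused by the weight α. This is a hypothetical minimal sketch: the function names, argument names, and λ values are illustrative only, and the paper's full definition additionally involves the ψ time features and the purchase-heat factors.

```python
import math

def decayed_count(event_ages_days, lam):
    """Exponentially decayed interaction count: sum of exp(-lam * age)
    over the interactions falling inside a window."""
    return sum(math.exp(-lam * age) for age in event_ages_days)

def swwp_score(short_ages, long_ages, lam_fast, lam_slow, alpha=0.7):
    """Sketch of the fusion implied by Table 1:
    Pop_SWWP = alpha * S_trend + (1 - alpha) * S_period,
    with S_trend from the short window W_S (decay lam_fast) and
    S_period from the long window W_L (decay lam_slow)."""
    s_trend = decayed_count(short_ages, lam_fast)
    s_period = decayed_count(long_ages, lam_slow)
    return alpha * s_trend + (1.0 - alpha) * s_period
```

With α = 0.7 (the default in Table 1), recent interactions in the short window dominate the score, while the long-window term preserves periodic popularity.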
Table 2. Algorithm parameter configurations.

| Algorithm | Parameter | Value |
|---|---|---|
| EEDPSO | Number of particles | 30 |
| | Maximum iterations | 500 |
| | Cognitive coefficient (c_1) | 1.26 |
| | Social coefficient (c_2) | 0.74 |
| | Inertia weight (ω) | 0.55 |
| SWWP-EEDPSO | Number of particles | 30 |
| | Maximum iterations | 500 |
| | Cognitive coefficient (c_1) | 1.26 |
| | Social coefficient (c_2) | 0.74 |
| | Inertia weight (ω) | 0.55 |
| DE | Population size | 50 |
| | Number of generations | 500 |
| | Differential weight (F) | 0.37 |
| | Crossover rate (CR) | 0.71 |
| GA | Population size | 50 |
| | Number of generations | 500 |
| | Crossover probability | 0.94 |
| | Mutation probability | 0.2 |
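For reproduction scripts, the Table 2 settings can be collected into a plain dictionary. The key names (`particles`, `omega`, and so on) are our own shorthand for this sketch, not identifiers from the authors' code.

```python
# Parameter configurations transcribed from Table 2.
PARAMS = {
    "EEDPSO":      {"particles": 30, "iterations": 500,
                    "c1": 1.26, "c2": 0.74, "omega": 0.55},
    "SWWP-EEDPSO": {"particles": 30, "iterations": 500,
                    "c1": 1.26, "c2": 0.74, "omega": 0.55},
    "DE":          {"population": 50, "generations": 500,
                    "F": 0.37, "CR": 0.71},
    "GA":          {"population": 50, "generations": 500,
                    "crossover_p": 0.94, "mutation_p": 0.2},
}
```

Note that SWWP-EEDPSO deliberately reuses the vanilla EEDPSO swarm settings, so any performance difference in Table 4 is attributable to the SWWP integration rather than to retuned hyperparameters.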
Table 3. Comprehensive performance metrics of time-series recommendation algorithms in a highly sparse environment. Bold values indicate the best result in each column. The asterisk (*) for ExpSmoothing latency denotes an anomalous value caused by per-item model fitting without vectorization (see Section 5.1 for details).

| Algorithm | NDCG@20 | Prec@20 | Rec@20 | MRR@20 | MAP@20 | Cov | Div | Nov | Spear | Kend | MAE | RMSE | Time (ms) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | 0.014 | 0.023 | 0.010 | 0.057 | 0.001 | **0.0022** | **0.260** | **14.98** | −0.033 | −0.033 | 2.031 | 2.437 | 2.91 |
| GlobalPop | 0.120 | 0.155 | 0.088 | 0.270 | 0.021 | 0.0001 | 0.250 | 8.95 | −0.128 | −0.127 | 0.927 | 1.066 | **0.08** |
| GlobalPop–Decay | 0.217 | 0.233 | 0.142 | 0.360 | 0.054 | 0.0001 | 0.250 | 10.05 | **0.408** | **0.362** | **0.802** | **0.922** | **0.08** |
| POP–Segment | 0.191 | 0.300 | 0.138 | 0.438 | 0.048 | 0.0007 | 0.113 | 10.65 | 0.091 | 0.082 | 0.892 | 1.061 | 0.31 |
| POP–minuteOfWeek | 0.064 | 0.118 | 0.030 | 0.283 | 0.006 | 0.0006 | 0.077 | 11.00 | 0.314 | 0.298 | 1.136 | 1.257 | 0.36 |
| SWWP | **0.245** | **0.340** | **0.155** | **0.547** | **0.083** | 0.0011 | 0.077 | 12.29 | 0.017 | 0.010 | 1.012 | 1.142 | 0.52 |
| Fourier | 0.020 | 0.037 | 0.010 | 0.164 | 0.003 | 0.0003 | 0.050 | 13.59 | −0.050 | −0.056 | 0.867 | 0.982 | 105.39 |
| ExpSmoothing | 0.194 | 0.213 | 0.122 | 0.334 | 0.048 | 0.0002 | 0.250 | 10.43 | 0.048 | 0.006 | 1.162 | 1.290 | 144,365 * |
| STL-AR | 0.112 | 0.153 | 0.080 | 0.208 | 0.025 | 0.0002 | 0.210 | 10.62 | 0.153 | 0.133 | 1.251 | 1.352 | 2196.83 |
| STL-GBM | 0.087 | 0.135 | 0.067 | 0.212 | 0.011 | 0.0002 | 0.223 | 10.78 | 0.021 | 0.024 | 1.251 | 1.346 | 1925.39 |
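The ranking metrics in Table 3 follow standard definitions. As a reference point, a minimal binary-relevance NDCG@K (the first column) can be computed as below; this is a generic sketch of the standard metric, not the paper's evaluation code.

```python
import math

def ndcg_at_k(recommended, relevant, k=20):
    """Binary-relevance NDCG@k: DCG of the top-k recommended list,
    normalized by the ideal DCG (all relevant items ranked first)."""
    dcg = sum(1.0 / math.log2(rank + 2)               # rank 0 -> log2(2)
              for rank, item in enumerate(recommended[:k])
              if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```

Because the discount is logarithmic in rank, placing a relevant item first instead of twentieth changes the per-item gain from 1.0 to about 0.23, which is why the position-aware SWWP list scores well on NDCG@20 and MRR@20 simultaneously.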
Table 4. Performance comparison of the SWWP-EEDPSO hybrid framework with baseline algorithms. The "Improvement" column shows the relative change compared to vanilla EEDPSO; "–" indicates the baseline itself.

| Algorithm | Mean Fitness | Std Fitness | Max Fitness | Min Fitness | Convergence Iteration | Stability Score | Improvement (%) |
|---|---|---|---|---|---|---|---|
| EEDPSO | 3194.14 | 85.83 | 3322.89 | 3065.39 | 380 | 0.973 | – |
| SWWP-EEDPSO | 3384.26 | 55.68 | 3467.78 | 3300.74 | 425 | 0.984 | +5.95 |
| DE | 3020.40 | 27.77 | 3062.06 | 2978.74 | 450 | 0.991 | −5.44 |
| GA | 3024.00 | 43.13 | 3088.70 | 2959.30 | 480 | 0.986 | −5.33 |
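The "Improvement" column is a plain relative change against the EEDPSO mean fitness. The helper below (our own naming) reproduces the reported percentages directly from the table's mean-fitness values.

```python
EEDPSO_BASELINE = 3194.14  # vanilla EEDPSO mean fitness from Table 4

def relative_improvement(mean_fitness, baseline=EEDPSO_BASELINE):
    """Relative change (%) of an algorithm's mean fitness vs. the baseline."""
    return (mean_fitness - baseline) / baseline * 100.0
```

Applied to the table: SWWP-EEDPSO gives +5.95%, while DE and GA fall 5.44% and 5.33% below the EEDPSO baseline.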
Table 5. Ablation study: impact of each integration mechanism on temporal prediction quality. f_total denotes base recommendation quality; f_pred denotes the temporal prediction contribution. Bold values indicate the best result in each column. ✓ denotes that the component is enabled; × denotes that it is disabled. "—" in the ΔMass column indicates the reference configuration.

| Variant | SWWP Init | SWWP Update | Temp. Fitness | Mass@K | Prec@K | ΔMass (%) | f_total | f_pred |
|---|---|---|---|---|---|---|---|---|
| Full SWWP-EEDPSO | ✓ | ✓ | ✓ | **0.284** | **0.175** | — | 3208.2 | **283.9** |
| w/o SWWP Init | × | ✓ | ✓ | 0.276 | 0.168 | −2.9 | 3236.0 | 275.7 |
| w/o SWWP Updates | ✓ | × | ✓ | 0.084 | 0.049 | −70.3 | 3775.9 | 84.3 |
| w/o Temporal Fitness | ✓ | ✓ | × | 0.091 | 0.067 | −67.9 | 3357.5 | 91.1 |
| w/o All SWWP | × | × | × | 0.036 | 0.025 | −87.5 | **3792.6** | 35.6 |
Share and Cite

Lin, S.; Nagata, Y.; Yang, H. A Hybrid Temporal Recommender System Based on Sliding-Window Weighted Popularity and Elite Evolutionary Discrete Particle Swarm Optimization. Electronics 2026, 15, 1544. https://doi.org/10.3390/electronics15081544