Article

MPIF in E-Commerce Recommendation: Application of Multi-Pairwise Ranking with Heterogeneous Implicit Feedback

School of New Media, Beijing Institute of Graphic Communication, Beijing 102600, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(5), 985; https://doi.org/10.3390/electronics15050985
Submission received: 20 January 2026 / Revised: 23 February 2026 / Accepted: 25 February 2026 / Published: 27 February 2026
(This article belongs to the Section Artificial Intelligence)

Abstract

To address the one-class collaborative filtering (OCCF) issue in e-commerce recommendation with only positive implicit feedback, mainstream methods adopt pairwise preference learning represented by Bayesian Personalized Ranking (BPR). However, BPR relies on an invalid assumption and suffers from severe data sparsity. This paper proposes Multi-pairwise Ranking with Heterogeneous Implicit Feedback (MPIF), which exploits heterogeneous implicit and auxiliary information to mine deep user preferences, constructs six pairwise preferences for classified items, and optimizes the model via stochastic gradient descent (SGD). Experiments on three real-world datasets verify that MPIF+ outperforms all state-of-the-art baselines on Normalized Discounted Cumulative Gain at rank 5 (NDCG@5), Precision at rank 5 (Pre@5), Recall at rank 5 (Rec@5), and Area Under Curve (AUC). It yields maximum improvements of 34.2%, 5.5%, and 32.9% on NDCG@5 for the Sobazaar, Retailrocket, and REES46 datasets, respectively, achieving significant and stable recommendation gains.

1. Introduction

The rapid advancement of the Internet has led to an explosive growth of online information, making it increasingly challenging for users to locate content that is truly valuable to them—a phenomenon widely known as information overload. Recommendation systems [1] serve as an effective solution to this issue [2], enabling the provision of high-quality personalized services for users.
As the most widely adopted recommendation paradigm, collaborative filtering (CF) leverages user feedback to recommend top-N items that best match individual user interests [3]. User feedback plays a critical role in capturing and understanding user preferences [4] and can be broadly categorized into two types: explicit feedback (corresponding to the multi-class setting) [5] and implicit feedback (corresponding to the one-class setting) [6].
Explicit feedback (e.g., ratings) directly reflects user preferences, while implicit feedback (e.g., purchases, clicks) does not. However, explicit feedback is hard to acquire, whereas implicit feedback is readily available [7]. Thus, implicit-feedback-based recommendation is mainstream. Yet implicit feedback only reflects one-class preferences and lacks negative signals. To address this, one-class collaborative filtering (OCCF) was proposed [8,9,10].
OCCF methods are categorized into point-wise and pairwise approaches. According to [11], the pairwise approach performs significantly better, so this paper focuses on it. The pioneering pairwise method is Bayesian Personalized Ranking (BPR) [12]. Real-world implicit feedback is heterogeneous and divided into target and auxiliary actions [13]. Target actions are the primary focus in recommendation systems, while auxiliary actions serve as side information to model user preferences. For example, in e-commerce, purchase is the target action, and view/cart are auxiliary actions (Figure 1).
However, existing OCCF methods still have gaps. First, feedback is underutilized, which aggravates data sparsity. Second, the preference assumption is overly rigid: it fails to account for non-interacted items that may simply be undiscovered or that reflect niche preferences. Third, negative samples are modeled too coarsely, with no distinction between "disliked" and "unknown" items, which misleads the model. Meanwhile, methods such as multi-behavior learning (e.g., MB-GCN [14]), listwise ranking frameworks [15], and group-aware noise reduction (e.g., G-UBS [16]) focus on behavior fusion or noise handling but do not systematically model the preference relationships between behaviors.
To address the above issues, this research proposes Multi-pairwise Ranking with Heterogeneous Implicit Feedback (MPIF). By introducing heterogeneous implicit feedback, this research divides unobserved items into different categories, fully utilizes feedback to extract auxiliary information, and proposes multiple pairwise assumptions. This research optimizes MPIF using stochastic gradient descent and theoretically analyzes its computational complexity. Experiments on three real-world datasets demonstrate that MPIF outperforms existing recommendation algorithms, filling the gap in existing methods in terms of the accuracy of multi-behavior integration and the differentiation of negative samples. It provides a method to address data sparsity and unreasonable preference assumptions. The main contributions of this paper are as follows:
  • Addressing the issue of sparse purchase data and the underutilization of auxiliary behaviors such as browsing and adding-to-cart in e-commerce scenarios, this research proposes series-pairwise assumptions to introduce heterogeneous implicit feedback, which effectively alleviates the data sparsity problem and enhances the performance of recommendation systems.
  • This research proposes multi-pairwise assumptions to address the deficiencies of series-pairwise assumptions, which effectively enhances the performance of recommendations. This research also introduces and improves upon the PopRank method to acquire auxiliary information, which fully leverages heterogeneous implicit feedback. This further enhances the performance of recommendation systems.
  • Experiments were conducted on three real-world e-commerce datasets, Sobazaar, Retailrocket, and REES46, to validate the effectiveness of MPIF across e-commerce scenarios of different scales and types, and the method achieved superior performance compared with the state-of-the-art recommendation algorithms, providing a more reliable and effective personalized recommendation solution for practical e-commerce platforms.
The organization of this paper is outlined below. Section 2 summarizes the related work, Section 3 elaborates on our method in detail, and Section 4 shows the experimental results and analysis. Section 5 concludes this paper and points out potential directions for future work.

2. Related Work

2.1. BPR-Based Algorithms

BPR (Bayesian Personalized Ranking) [12] is a classic ranking algorithm for implicit feedback in recommendation systems; its core idea is to model a user's relative preference between items. Based on the pairwise preference assumption, it constructs (user, interacted item, non-interacted item) triples, derives the BPR-Opt optimization criterion by maximizing the posterior probability, computes user–item preference scores through matrix factorization, and solves the parameters with stochastic gradient descent.
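To make the BPR-Opt criterion concrete, the following is a minimal sketch (illustrative code, not the original authors' implementation) of the objective over sampled triples; the factor matrices U and V are assumed names:

```python
import numpy as np

def bpr_loss(U, V, triples, lam=0.01):
    """BPR-Opt over (user, interacted item, non-interacted item) triples.

    Preference scores come from matrix factorization, r_ui = U[u] @ V[i];
    the criterion maximizes sum ln sigma(r_ui - r_uj) minus L2 regularization.
    """
    loss = 0.0
    for u, i, j in triples:
        x = U[u] @ (V[i] - V[j])              # pairwise score difference
        loss += np.log(1.0 / (1.0 + np.exp(-x)))
    return loss - lam * (np.sum(U ** 2) + np.sum(V ** 2))
```

In practice, triples are sampled stochastically and the parameters are updated by SGD on this objective.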
With further research on BPR-style algorithms, Ding et al. proposed leveraging view data within Bayesian Personalized Ranking [7]. Liu et al. introduced a new penalty factor into the objective function that emphasizes the similarity between two positive items from the user's perspective, yielding the SPR algorithm [17]. Feng et al. proposed RBPR [18], a hybrid feedback method that combines explicit ratings and implicit feedback in one model, unifying PMF and BPR to explore the latent features of users and items. Our preliminary work fuses similarity to find potential feedback and introduces the concept of the item set [19].
These methods do not fully utilize heterogeneous implicit feedback in e-commerce scenarios: they either integrate only a single auxiliary action or rely on explicit ratings that are difficult to obtain. Moreover, their preference assumptions are crude, ignoring the complex associations between target and auxiliary actions while still adhering to the unreasonable setting of "no interaction means negative sample". They also fail to mine a "disliked item set" from item popularity or to distinguish between types of non-interacted items, and thus struggle to alleviate data sparsity, yielding limited improvements in user preference characterization and recommendation accuracy.

2.2. Other Recommendation Algorithms

In recent years, recommendation research has continuously explored deep recommendation, multi-behavior learning, and disentangling methods. However, most models still neither explicitly model the preference order between behaviors nor systematically mine "disliked" items. In 2020, LightGCN [20] simplified graph convolutional networks to optimize recommendation, while MB-GCN [14] fused auxiliary behaviors such as click and add-to-cart through graph convolution. In 2021, KHGT [21] combined a user–item interaction graph with an item relationship graph to capture multi-behavior associations, while DICE [22] decoupled user interest from conformity through causal embeddings to alleviate selection bias. In 2023, Transformer-based multi-behavior models [23] were introduced that capture sequence dependencies through attention but do not consider pairwise preference ranking. In 2025, G-UBS [16] was introduced to reduce implicit feedback noise through group awareness, without utilizing multi-source behavioral information.
Different from the methods mentioned above, our MPIF method has several essential advantages: (1) it introduces multiple types of implicit feedback and effectively integrates them into the model; (2) it improves the PopRank method to generate auxiliary information; and (3) it proposes multi-pairwise assumptions to capture deeper levels of user preference and achieve more accurate preference modeling.

3. Method

In this section, this research first elaborates on the specific integration mechanism of heterogeneous feedback. Next, the proposed MPIF recommendation method is presented, along with a detailed description of its learning process. To conclude this section, this research derives the theoretical time complexity of the designed algorithm.

3.1. Pairwise Assumption

3.1.1. Series Pairwise Assumption

To address the drawbacks of the BPR method, this research introduces multiple types of implicit feedback (heterogeneous implicit feedback). In e-commerce systems, the target action is purchase, but this data is very sparse in the real world [24]. Additionally, the assumptions of the BPR method do not always hold. Thus, this research proposes series-pairwise assumptions to integrate multiple types of implicit feedback, which effectively alleviates the data sparsity of the target action and reduces the uncertainty of assumptions in the BPR method.
As shown in Figure 1, there are multiple types of implicit feedback in e-commerce systems. When introducing them into the model, this research considers the preference relationships between them: purchased items form the target set $I_u^t$, viewed and carted items form the auxiliary set $I_u^a$, and the remaining items form the negative set $I_u^-$. Therefore, this research proposes the following series-pairwise assumption:
$\mathrm{Pref}(u,i) \succ \mathrm{Pref}(u,j) \succ \mathrm{Pref}(u,k), \quad i \in I_u^t,\; j \in I_u^a,\; k \in I_u^-$ (1)
This indicates that users prefer items with the target action to items with only auxiliary actions, and prefer items with auxiliary actions to the remaining items with no action.

3.1.2. Multi-Pairwise Assumption

The multi-pairwise assumption is based on the behavioral logic of e-commerce users: the target behavior (purchase) requires higher willingness than auxiliary behaviors (view/cart), which in turn require higher willingness than non-interaction. Table 1 shows that 10–29% of view actions and 4.9–88% of cart actions lead to purchases, verifying that the target behavior is a stronger preference signal than auxiliary behaviors.
Compared with BPR's binary preference assumption (interacted > non-interacted), multi-pairwise modeling improves learning in three respects. First, it reduces assumption bias: BPR incorrectly marks all non-interacted items as "disliked", whereas multi-pairwise modeling subdivides them into "disliked" and "unknown", which conforms to the causal logic of user behavior. Second, it models preferences at a finer granularity: BPR ignores the preference differences between auxiliary behaviors (view/add-to-cart) and the target behavior (purchase), whereas multi-pairwise modeling captures hierarchical preferences through six pairwise preferences and can thus learn fine-grained user interests. Third, it enhances data utilization: BPR uses only target behavior data, which leads to signal sparsity, whereas multi-pairwise modeling integrates three types of implicit feedback, increasing the number of effective preference pairs 2–3 times and reducing the variance of the matrix factorization parameter estimates.
This research uses Equation (2) to compute the proportion of auxiliary-action items that also received the target action.
$C = \dfrac{|I^a \cap I^t|}{|I^a|}$ (2)
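A minimal sketch of this overlap computation (function name and toy data are illustrative):

```python
def overlap_ratio(aux_items, target_items):
    """C = |I^a ∩ I^t| / |I^a|: share of auxiliary-action items
    that also received the target action (Equation (2))."""
    aux, tgt = set(aux_items), set(target_items)
    if not aux:
        return 0.0
    return len(aux & tgt) / len(aux)

# Hypothetical toy data: 3 of 10 viewed items were later purchased.
views = range(10)
purchases = [0, 1, 2, 42]
print(overlap_ratio(views, purchases))  # → 0.3
```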
As shown in Table 1, there is overlap between the auxiliary set and the target set. This indicates that the series-pairwise assumption may not always hold, and users may have an equal preference for auxiliary and target actions. Therefore, this research proposes multi-pairwise assumptions, as follows:
$\mathrm{Pref}(u,i) \succ \mathrm{Pref}(u,k), \quad \mathrm{Pref}(u,j) \succ \mathrm{Pref}(u,k), \quad i \in I_u^t,\; j \in I_u^a,\; k \in I_u^-$ (3)
In the multi-pairwise assumption, the preference relationship between target and auxiliary actions is deliberately left unconstrained to remedy the deficiency of the series-pairwise assumption. This research proposes two pairwise assumptions: users prefer target items to negative items, and users prefer auxiliary items to negative items.
The above preference assumptions consider only a single auxiliary action. To address this limitation, two auxiliary actions (namely, view and cart) are considered, and new pairwise assumptions for multiple auxiliary actions are proposed. The preference relationship between the auxiliary actions themselves is not discussed in our experimental analysis. The new multi-pairwise assumption is as follows:
$\mathrm{Pref}(u,i) \succ \mathrm{Pref}(u,j), \quad i \in I_u^t,\; j \in I_u^-$
$\mathrm{Pref}(u,v) \succ \mathrm{Pref}(u,j), \quad v \in I_u^v,\; j \in I_u^-$
$\mathrm{Pref}(u,c) \succ \mathrm{Pref}(u,j), \quad c \in I_u^c,\; j \in I_u^-$ (4)
$I_u^v$ denotes the set of viewed items, while $I_u^c$ denotes the set of carted items.

3.1.3. Generation of Auxiliary Information

To further optimize our method, this research introduces item popularity to discover items that users may dislike. The popularity of an item is defined as the number of users who interact with it: the more users interact with an item, the more popular it is. Utilizing multiple types of implicit feedback, this research counts popularity over all actions rather than the target action alone. Low-popularity items are interacted with by few users, and this lack of interaction is more likely to reflect active dislike rather than items going unnoticed. This design suits e-commerce platforms with an uneven distribution of user behavior (a high proportion of interactions concentrated on top products), such as REES46, where the top 10% of products account for 60% of the interaction volume. This research sets a popularity threshold $\epsilon_p$ and derives a low-popularity item set $I^{lp}$, as shown in Equation (5).
$I^{lp} = \{\, i \in I \mid N_i < \epsilon_p \,\}$ (5)
$N_i$ denotes the number of users who interacted with item i (via purchasing, viewing, or carting). For instance, when $\epsilon_p = 1$, items with fewer than one interacting user fall into $I^{lp}$. The auxiliary information is then generated as the dislike-item set $I_u^d$ for each user, as shown in Equation (6).
$I_u^d = I^{lp} \setminus (I_u^t \cup I_u^a)$ (6)
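Equations (5) and (6) can be sketched as follows (function names and toy IDs are illustrative assumptions):

```python
from collections import Counter

def build_dislike_sets(interactions, eps_p):
    """Derive the low-popularity set I^lp and per-user dislike sets I_u^d.

    `interactions` maps each user to the set of items that user interacted
    with (purchase, view, or cart). Item popularity N_i counts interacting
    users; items with N_i < eps_p form I^lp (Equation (5)), and each user's
    dislike set removes that user's own positives (Equation (6)).
    """
    popularity = Counter(i for items in interactions.values() for i in items)
    low_pop = {i for i, n in popularity.items() if n < eps_p}
    return {u: low_pop - items for u, items in interactions.items()}

# Toy example (hypothetical IDs): item 9 is seen by only one user.
inter = {"u1": {1, 2, 9}, "u2": {1, 2}, "u3": {2}}
print(build_dislike_sets(inter, eps_p=2))
```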
Therefore, this research further subdivides the items into the target set ($I_u^t$), cart set ($I_u^c$), view set ($I_u^v$), disliked set ($I_u^d$), and remaining unknown set ($I_u^{uc}$), as shown in Figure 2.
According to the item classification, this research obtains a new multiple pairwise assumption, as shown in Equation (7).
$\mathrm{Pref}(u,i) \succ \mathrm{Pref}(u,j), \quad i \in I_u^t,\; j \in I_u^{uc}$
$\mathrm{Pref}(u,v) \succ \mathrm{Pref}(u,j), \quad v \in I_u^v,\; j \in I_u^{uc}$
$\mathrm{Pref}(u,c) \succ \mathrm{Pref}(u,j), \quad c \in I_u^c,\; j \in I_u^{uc}$
$\mathrm{Pref}(u,i) \succ \mathrm{Pref}(u,k), \quad i \in I_u^t,\; k \in I_u^d$
$\mathrm{Pref}(u,v) \succ \mathrm{Pref}(u,k), \quad v \in I_u^v,\; k \in I_u^d$
$\mathrm{Pref}(u,c) \succ \mathrm{Pref}(u,k), \quad c \in I_u^c,\; k \in I_u^d$ (7)
In summary, MPIF denotes the method that introduces only multiple types of implicit feedback, as shown in Equation (4), while MPIF+ denotes the method that introduces both multiple implicit feedback and auxiliary information, as shown in Equation (7). In MPIF+, $I_u^d$ (low-popularity items) provides reliable negative signals, reducing the noise ratio in negative samples by 30%. At the same time, the added pairwise preferences (e.g., purchase > disliked, view > disliked) let the model learn clearer decision boundaries between "preferred" ($I_u^t$/$I_u^v$/$I_u^c$) and "non-preferred" ($I_u^d$) items, which improves the ranking accuracy of top-N recommendations.

3.2. Model Learning

To learn the model parameters, this research adopts the stochastic gradient descent (SGD) algorithm. This research first presents the update rule for each variable, with η representing the learning rate.
$\theta \leftarrow \theta + \eta \cdot \dfrac{\partial f(\theta)}{\partial \theta}$ (8)
Then we derive the corresponding gradient for each parameter as follows:
$\nabla U_u = \dfrac{\partial f(\theta)}{\partial U_u} - \lambda U_u, \quad \nabla V_i = \dfrac{\partial f(\theta)}{\partial V_i} - \lambda V_i, \quad \nabla V_v = \dfrac{\partial f(\theta)}{\partial V_v} - \lambda V_v,$
$\nabla V_c = \dfrac{\partial f(\theta)}{\partial V_c} - \lambda V_c, \quad \nabla V_j = \dfrac{\partial f(\theta)}{\partial V_j} - \lambda V_j, \quad \nabla V_k = \dfrac{\partial f(\theta)}{\partial V_k} - \lambda V_k$ (9)
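Under these update rules, one SGD step for a single pairwise preference can be sketched as follows (a simplified illustration with assumed names; the full model applies such updates across all six pairwise preferences):

```python
import numpy as np

def sgd_pair_update(U, V, u, pos, neg, lr=0.01, lam=0.01, w=1.0):
    """One weighted pairwise SGD step that raises r_u,pos above r_u,neg.

    Sketch of the rule theta <- theta + eta * grad, with the gradient of
    w * ln sigma(x) for x = U[u] @ (V[pos] - V[neg]) plus L2 shrinkage.
    """
    x = U[u] @ (V[pos] - V[neg])               # pairwise score difference
    g = w * (1.0 - 1.0 / (1.0 + np.exp(-x)))   # w * sigma(-x)
    du = g * (V[pos] - V[neg]) - lam * U[u]
    dp = g * U[u] - lam * V[pos]
    dn = -g * U[u] - lam * V[neg]
    U[u] += lr * du
    V[pos] += lr * dp
    V[neg] += lr * dn
```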
As elaborated earlier, MPIF formulates a multi-pairwise preference assumption by incorporating diverse implicit feedback and mining auxiliary information and optimizes its parameters through SGD. Such a framework effectively mitigates the data sparsity problem and delivers improved personalized recommendation performance for users. The detailed procedure of MPIF is formally described in Algorithm 1.
Algorithm 1 MPIF
  1: Initialization:
  2: Initialize the model parameters U and V;
  3: for user u∈U do
  4:    Derive I u t according to the target action;
  5:    Derive I u v according to the view action;
  6:    Derive I u c according to the cart action;
  7:    Derive I u d according to the auxiliary information;
  8:    Derive I u u c ;
  9: end for;
  10: Optimization:
  11: for t 1 = 1, …, T do
  12:   for t 2 = 1, …, n do
  13:       Sample items i, v, c, j, and k from $I_u^t$, $I_u^v$, $I_u^c$, $I_u^{uc}$, and $I_u^d$, respectively;
  14:       Calculate the gradients;
  15:       Update the model parameters;
  16:    end for
  17: end for
The objective function of MPIF is as follows:
$f(\theta) = \sum_{k=1}^{6} w_k \cdot \ln \sigma\big(\hat{r}_{uij}^{(k)}\big) - R(\theta)$ (10)
The regularization term $R(\theta)$ ensures Lipschitz continuity of the objective (with Lipschitz constant $L = \max_k w_k$). Here, $w_1$–$w_6$ are the weights of the six pairwise preferences. Cross-validation yielded $w_{\text{target-unknown}} = 0.3$, $w_{\text{view-unknown}} = 0.25$, $w_{\text{cart-unknown}} = 0.25$, $w_{\text{target-dislike}} = 0.1$, $w_{\text{view-dislike}} = 0.05$, and $w_{\text{cart-dislike}} = 0.05$. When an item belongs both to an auxiliary behavior and to the disliked set (such as an item with auxiliary interactions but low popularity), a priority strategy is adopted in which the disliked-item constraint takes precedence, as this constraint is verified by popularity statistics and is therefore more reliable.
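With the weights above, the per-sample objective value can be sketched as follows (the weight ordering is an assumption based on the listed names):

```python
import math

# Pair weights found via cross-validation (assumed order: target/view/cart
# vs. unknown, then target/view/cart vs. disliked).
WEIGHTS = [0.3, 0.25, 0.25, 0.1, 0.05, 0.05]

def mpif_objective(score_diffs, reg=0.0):
    """Weighted objective f(theta) = sum_k w_k * ln sigma(x_k) - R(theta)
    for one sampled six-tuple of pairwise score differences x_k."""
    total = 0.0
    for w, x in zip(WEIGHTS, score_diffs):
        total += w * math.log(1.0 / (1.0 + math.exp(-x)))
    return total - reg
```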
Finally, this research analyzes the time complexity of MPIF. Computing the item popularities $N_i$ requires one pass over all user–item interaction pairs, giving an auxiliary-information mining cost of O(nm). Each training iteration updates the embeddings of n users (dimension d) and the corresponding item embeddings, giving a training cost of O(Tnd). The overall time complexity is therefore O(nm + Tnd), adding only a one-off counting cost on top of BPR's O(Tnd), which demonstrates high efficiency. Here, n is the number of users, m is the number of items, T is the number of iterations, and d is the latent vector dimension.

3.3. Sampling Strategy

To address class imbalance and ensure effective preference learning, this research designs a stratified sampling strategy as follows:
  • Negative feedback handling: For each user u, negative samples are drawn from two subsets, disliked items ($I_u^d$) and unknown items ($I_u^{uc}$), with a ratio of 1:3 ($I_u^d$ : $I_u^{uc}$); this reduces noise from mislabeled negative samples.
  • Class imbalance mitigation: This research adopts oversampling for rare auxiliary–target pairs (e.g., cart–buy pairs in Sobazaar) and undersampling for frequent non-preference pairs, maintaining a total sample pair size of 500 per user per iteration.
  • Pair ratio setting: The sampling ratios of the six pairwise preferences (as shown in Equation (7)) are set equally (1/6 each) to ensure balanced learning of all preference relationships, verified via cross-validation to avoid bias toward dominant pairs.
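The 1:3 negative-sampling rule in the first bullet can be sketched as follows (function name and fallback behavior are illustrative assumptions):

```python
import random

def sample_negatives(disliked, unknown, n, ratio=(1, 3), rng=random):
    """Draw n negative items with a 1:3 disliked:unknown split (Section 3.3).

    Falls back to the other pool when one set is too small; the item IDs
    used below are purely illustrative.
    """
    k_d = min(len(disliked), n * ratio[0] // sum(ratio))
    k_u = min(len(unknown), n - k_d)
    return rng.sample(list(disliked), k_d) + rng.sample(list(unknown), k_u)

negs = sample_negatives({1, 2}, set(range(100, 120)), n=8)
print(len(negs))  # → 8 (up to 2 disliked, the rest unknown)
```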

4. Experiments

To verify the performance of the proposed method, this research carried out extensive experiments on three real-world datasets, including Sobazaar, Retailrocket, and REES46. This research then compared our MPIF with several representative baseline methods to sufficiently verify the effectiveness of our method.

4.1. Datasets and Evaluation Metrics

To verify the performance of our model, this research uses three real-world datasets, including Sobazaar, Retailrocket, and REES46. The statistics of all datasets are summarized in Table 2.
Sobazaar is an online fashion shopping platform [25]. As on most e-commerce platforms, users can view items, check out, and make purchases. In addition, Sobazaar supports social relationships, allowing users to become friends with each other. Retailrocket is collected from a real-world e-commerce website and includes purchase, click, and add-to-cart actions. REES46 comes from a large overseas e-commerce platform [26] and likewise includes purchase, click, and add-to-cart actions.
In this article, this research uses target action as training and testing data and the auxiliary action as auxiliary data. We delete records outside the behavior time window (≤30 days) and remove users/items without any interaction. This paper adopts temporal splitting to divide the datasets: all user–item interaction data are sorted in ascending order of timestamps; the first 80% of interaction records serve as the training set and the last 20% as the test set, effectively avoiding future information leakage. This research takes the average of five experimental results as the model’s performance indicator to reduce evaluation errors.
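The temporal 80/20 split described above can be sketched as follows (the record format is an assumption):

```python
def temporal_split(interactions, train_frac=0.8):
    """Sort (user, item, timestamp) records by time and split 80/20,
    so no future interactions leak into the training set."""
    records = sorted(interactions, key=lambda r: r[2])
    cut = int(len(records) * train_frac)
    return records[:cut], records[cut:]

# Hypothetical interaction log.
logs = [("u1", "i1", 3), ("u2", "i2", 1), ("u1", "i3", 2),
        ("u2", "i4", 4), ("u1", "i5", 5)]
train, test = temporal_split(logs)
print(len(train), len(test))  # → 4 1
```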
Accordingly, this research employs four prevalent ranking-based evaluation metrics [8]: NDCG@5, Prec@5, Rec@5, and AUC, i.e., normalized discounted cumulative gain at rank 5, precision at rank 5, recall at rank 5, and the area under the ROC curve, respectively.
$\mathrm{Prec@5} = \dfrac{1}{|U^{te}|} \sum_{u \in U^{te}} \dfrac{1}{5} \sum_{p=1}^{5} \delta\big(L_u(p) \in I_u^{te}\big)$ (11)
where $U^{te}$ denotes the set of test users, $L_u(p)$ represents the item recommended to user u at position p of the top-five list, $I_u^{te}$ is the set of items user u interacted with in the test set, and $\delta(\cdot)$ is an indicator function that returns 1 if the condition is true and 0 otherwise.
$\mathrm{NDCG@5} = \dfrac{1}{|U^{te}|} \sum_{u \in U^{te}} \left[ \dfrac{1}{\sum_{p=1}^{\min(5, |I_u^{te}|)} \frac{1}{\log_2(p+1)}} \sum_{p=1}^{5} \dfrac{2^{\delta(L_u(p) \in I_u^{te})} - 1}{\log_2(p+1)} \right]$ (12)
where the denominator is the Ideal Discounted Cumulative Gain (IDCG@5), which normalizes the score. All other symbols share the same meaning as in Equation (11).
$\mathrm{Rec@5} = \dfrac{1}{|U^{te}|} \sum_{u \in U^{te}} \dfrac{1}{|I_u^{te}|} \sum_{p=1}^{5} \delta\big(L_u(p) \in I_u^{te}\big)$ (13)
where $|I_u^{te}|$ is the number of relevant (test) items for user u.
$\mathrm{AUC} = \dfrac{1}{|U^{te}|} \sum_{u \in U^{te}} \dfrac{1}{|R_u^{te}|} \sum_{(i,j) \in R_u^{te}} \delta(\hat{r}_{ui} > \hat{r}_{uj})$ (14)
where $R_u^{te} = \{\, (i,j) \mid i \in I_u^{te},\; j \notin I_u^{te} \cup I_u^{tr} \,\}$ is the set of pairs consisting of a test item i and a non-interacted item j for user u, $I_u^{tr}$ is the set of items user u interacted with in the training set, and $\hat{r}_{ui}$ is the predicted preference score of user u for item i.
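For concreteness, Prec@5 and NDCG@5 for a single test user can be computed as follows (a sketch consistent with Equations (11) and (12); names are illustrative):

```python
import math

def prec_ndcg_at5(ranked, relevant):
    """Prec@5 and NDCG@5 for one test user (cf. Equations (11)-(12)).

    `ranked` is the top-5 recommendation list L_u; `relevant` is I_u^te.
    With binary relevance, 2^delta - 1 reduces to the hit indicator.
    """
    hits = [1 if item in relevant else 0 for item in ranked[:5]]
    prec = sum(hits) / 5
    dcg = sum(h / math.log2(p + 2) for p, h in enumerate(hits))
    idcg = sum(1 / math.log2(p + 2) for p in range(min(5, len(relevant))))
    return prec, (dcg / idcg if idcg > 0 else 0.0)
```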

4.2. Baselines and Parameter Settings

To rigorously evaluate the superiority of our proposed MPIF+ method, we compare it against a diverse set of state-of-the-art baselines. These baselines are carefully selected to represent different paradigms in implicit feedback recommendation, allowing for a comprehensive analysis of MPIF+’s contributions. The comparative methods are grouped and described as follows:
Point-wise methods: These methods transform the implicit feedback problem into a regression or classification task by assigning weights to missing data:
  • WRMF [8]: A point-wise recommendation model optimized via the weighted alternating least squares (wALS) optimization strategy. We include it as a representative of early, highly influential point-wise approaches.
  • eALS [27]: An efficient point-wise method that leverages element alternating least squares for fast optimization. It is chosen to compare MPIF+ against a computationally efficient point-wise baseline.
Pairwise BPR-based methods: These methods are the most relevant to our work, as they are founded on the same pairwise ranking principle as MPIF. They assume that a user prefers an interacted item over a non-interacted one:
  • BPR [12]: A seminal pairwise ranking algorithm that is widely used and effective for implicit feedback-based personalized recommendation. It serves as the primary baseline to demonstrate the necessity of moving beyond its single-feedback, binary-preference assumption.
  • SDBPR [7]: A Bayesian recommendation approach that introduces view data to improve the ranking performance. This baseline is crucial for validating our core idea of incorporating heterogeneous feedback, albeit in a less structured way than MPIF.
  • MSBPR [10]: A multi-pairwise preference and similarity-based BPR method, which considers item similarity in pairwise ranking. We compare against it to show the benefits of our multi-pairwise preference structure over similarity-based enhancements.
Noise-robust method: This category represents a different research direction focused on handling the inherent noise in implicit feedback:
  • G-UBS [16]: A group-aware robust implicit feedback interpretation method, which reduces noise in implicit feedback via group analysis. It is included to demonstrate that explicitly modeling the hierarchical structure of feedback, as in MPIF+, is more effective than general noise-reduction techniques for multi-behavior data.
Graph neural network-based method: This category represents a research direction that leverages graph structures to model user–item interactions and collaborative relationships, enabling effective representation learning for recommendation tasks:
  • LightGCN [20]: A state-of-the-art GCN-based recommender that simplifies graph convolution by removing feature transformation and nonlinear activation, focusing solely on neighborhood aggregation. It serves as a strong representative of modern graph-based collaborative filtering methods.
In addition, several variants of our proposed framework are compared, shown as follows:
  • MPIF_view: considers only the view action.
  • MPIF_cart: considers only the cart action.
  • MPIF: considers both the view and cart actions.
  • MPIF+: considers the view and cart actions plus the auxiliary information.
The performance of recommendation approaches is highly sensitive to hyperparameter settings. To ensure fair comparison, this research therefore employs cross-validation to tune the optimal parameters for each competing method in advance.
The learning rate is fixed at 0.01 for all datasets. As all methods rely on matrix factorization, this research sets the latent vector dimension to 50 and the maximum iteration count to 2000 across all models and datasets. The optimal regularization coefficient is searched within {0.1, 0.01}, using NDCG@5 as the evaluation metric.
For BPR, the regularization term is set to 0.1 on Sobazaar and REES46 and 0.01 on Retailrocket. For WRMF, the weight is set to 1 on Sobazaar and Retailrocket and 4 on REES46, with the regularization term set to 0.01 on all datasets. For eALS, the weight is set to 5 on Retailrocket and REES46 and 2 on Sobazaar, with the regularization term set to 0.01 on all datasets. For SDBPR, the optimal parameter ω is set to 0.1 and the regularization term to 0.01 on all datasets. For MSBPR, the inter-item weight is set to 0.8 and the regularization term to 0.01 on all datasets. For G-UBS, the group size is set to 4 on Sobazaar and 3 on Retailrocket and REES46 (optimized via cross-validation), the noise reduction coefficient is 0.15, and the regularization term is 0.01 on all datasets. For LightGCN, the latent dimension is uniformly set to 50, consistent with the other methods; the number of GCN layers is searched in [1, 2, 3] and selected by NDCG@5 on the validation set; the regularization coefficient and learning rate are tuned within the same ranges as the other baselines. For MPIF+, $\epsilon_p$ is set to 25 on Sobazaar, 3 on Retailrocket, and 25 on REES46.

4.3. Performance Comparison

To fully validate the effectiveness of our method, this research compares it with seven competing state-of-the-art approaches; detailed results are summarized in Table 3. The improvement of the proposed method is clear across the Sobazaar, Retailrocket, and REES46 datasets. Taking NDCG@5 as an example, MPIF+ achieves 0.2730, 0.8606, and 0.5605 on the three datasets, exceeding the best baselines by 34.2%, 5.5%, and 32.9%, respectively. MPIF+ also obtains the best performance on all other evaluation metrics, including Prec@5, Rec@5, and AUC, surpassing all state-of-the-art baselines with notable gains on each real-world dataset, which fully validates the effectiveness and superiority of our method.

4.4. Discussion of Experimental Results

4.4.1. Results of Series Pairwise Assumption

In this section, this research conducts a comparison between the sequential pairwise approach (which integrates different auxiliary interactions: view or cart) and the BPR method. Specifically, view represents the sequential pairwise model using view-side auxiliary information, and cart stands for the counterpart with cart auxiliary data. The experimental findings are presented in Figure 3.
This research shows that the series pairwise method performs significantly better than the BPR method on all datasets. This is because the series pairwise assumption integrates purchase (the target behavior) with browsing/cart (auxiliary behaviors). This directly alleviates the sparsity problem inherent in BPR, which relies solely on a single type of positive feedback—especially given that auxiliary behaviors are far more numerous than purchases, as shown in Table 2. Moreover, it overcomes the unreasonable “non-interaction equals dislike” assumption, leading to its significantly superior performance over BPR across all datasets.

4.4.2. Results of Multi-Pairwise Assumption

In the previous section, this research proposed two preference assumptions that integrate multiple implicit feedback. Therefore, this research uses Pre@5 and AUC to evaluate the series pairwise method and the multi-pairwise method. The experimental results are shown in Figure 4.
It can be seen that the multi-pairwise method performs significantly better than the series pairwise method in most cases. The multi-pairwise assumption abandons the strong constraint "target behavior > auxiliary behavior", which better matches real user behavior and avoids the preference misjudgments of the series assumption; as a result, Pre@5 and AUC are further improved.
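The difference between the two assumptions can be made concrete by the training pairs they generate. This is a simplified sketch over three coarse item classes; the actual MPIF construction uses six pairwise preferences over finer-grained item classes:

```python
from itertools import product

def series_pairs(bought, aux, unobserved):
    """Series assumption: the strict chain buy > aux > unobserved."""
    return list(product(bought, aux)) + list(product(aux, unobserved))

def multi_pairs(bought, aux, unobserved):
    """Multi-pairwise reading: drop the buy > aux constraint; both observed
    behavior types only need to rank above unobserved items."""
    return list(product(bought, unobserved)) + list(product(aux, unobserved))
```

Each returned pair (a, b) asks the ranker to score item a above item b for the user; only the series variant forces purchases above auxiliary interactions.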

4.4.3. Results of MPIF Variants

The results corresponding to the variants of MPIF are summarized in Table 4, based on which this research outlines a series of important findings below:
  • MPIF performs better than MPIF_view and MPIF_cart on all datasets, and the results demonstrate that a reasonable combination of multiple implicit feedback types can help improve recommendation performance.
  • MPIF+ beats all other MPIF variants. Its gain over MPIF comes mainly from two aspects: adding I_u^d raises the true-negative proportion among sampled negatives from 40% to 70%, enabling more accurate latent representation learning, and the three extra pairwise preferences enrich the training signal and strengthen the model's ability to discriminate items. Further analysis shows that MPIF+ outperforms MPIF more on the sparser dataset (Sobazaar, +16.2% NDCG@5) than on the denser one (Retailrocket, +2.0%), confirming that I_u^d has greater impact in sparse scenarios with severe negative-sample noise. This demonstrates the necessity of mining disliked items from the unobserved items.
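The construction of I_u^d can be sketched as a popularity filter over each user's unobserved items: unobserved items whose global popularity falls below the threshold ϵ_p are treated as disliked, and the rest remain uncertain. This is a simplified reading of the method; function and variable names are illustrative:

```python
from collections import Counter

def mine_disliked(interactions, user_items, epsilon_p):
    """Split each user's unobserved items into a disliked set (I_u^d: global
    popularity below epsilon_p) and an uncertain set (the remainder)."""
    popularity = Counter(item for _, item in interactions)  # interaction counts
    all_items = set(popularity)
    disliked, uncertain = {}, {}
    for u, seen in user_items.items():
        unobserved = all_items - seen
        disliked[u] = {i for i in unobserved if popularity[i] < epsilon_p}
        uncertain[u] = unobserved - disliked[u]
    return disliked, uncertain
```

As the limitations in Section 5 note, this frequency-only criterion can misclassify niche items a user might actually prefer.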

4.4.4. Parameter Sensitivity Analysis

Parameter ϵ_p: This research first tested the parameter ϵ_p of MPIF+ on all datasets. The experimental results are shown in Figure 5. This research arrives at the following conclusions:
  • The method performs better when ϵ_p is smaller; as ϵ_p grows larger, the overall performance of the model converges and stabilizes.
  • Moreover, ϵ p exhibits different sensitivities with respect to different metrics. From Figure 5, it can be observed that the NDCG@5 curve drops most sharply in Retailrocket, whereas the remaining metrics show more stable trends.
Parameter d: Next, this research tested the influence of parameter d on the MPIF method and MPIF+ method in the Sobazaar, Retailrocket and REES46 datasets, respectively. The experimental results are shown in Figure 6. This research arrives at the following conclusions:
  • There exists a positive correlation between the latent dimension d and the overall performance of our method. Specifically, a larger d leads to better performance, yet the performance gradually plateaus once d reaches a sufficiently large value. Meanwhile, choosing an overly large d will introduce heavy computational costs and reduce efficiency.
  • Different datasets exhibit distinct levels of sensitivity to the choice of d. As illustrated in Figure 6, the overall trend on Retailrocket fluctuates much more obviously than that on the remaining datasets, demonstrating that the influence of d varies in magnitude across different data scenarios.
Parameter N: This research tested the influence of parameter N on the SDBPR, MPIF, and MPIF+ methods. Here, only experimental results in terms of NDCG@N are shown, in Figure 7:
  • A positive correlation exists between N and the recommendation performance: larger N leads to better performance, which plateaus and stabilizes when N is large enough.
  • Moreover, the sensitivity to N differs across datasets. From Figure 7, it can be observed that the overall change in Retailrocket is far more significant than that on other datasets, reflecting the varying degrees of influence exerted by parameter N.
  • MPIF+ remains optimal: its performance stays the best across all values of N.
Learning Rate η : Finally, this research discusses the model performance with different learning rates and numbers of iterations, using the core metric NDCG@5. These analyses cover three datasets and visually reflect the differences in convergence speed, as shown in Figure 8.
As shown in Figure 8, the learning rate η significantly affects the model's convergence efficiency and stability. With the optimal learning rate of 0.01, the model reaches stable optimal performance (e.g., NDCG@5 = 0.2730 on Sobazaar and 0.8606 on Retailrocket) at approximately 1500 iterations, with no significant fluctuation in subsequent iterations, balancing convergence speed and stability.
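The convergence behavior described above can be monitored with a simple plateau check on validation NDCG@5. This helper is illustrative and not part of the original training loop:

```python
def has_converged(history, patience=3, tol=1e-3):
    """Return True once the best validation score among the last `patience`
    checks improves on the earlier best by less than `tol`."""
    if len(history) <= patience:
        return False
    return max(history[-patience:]) - max(history[:-patience]) < tol
```

Calling this every few hundred iterations would stop training near the ~1500-iteration plateau observed in Figure 8.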

4.4.5. Ablation Study

To verify the contribution of each core component, this research developed three ablation variants of MPIF+:
  • MPIF+-S uses only the series pairwise assumption (Equation (1)) and removes the multi-pairwise assumption.
  • MPIF+-M uses only the multi-pairwise assumption (Equation (4)) and removes the series pairwise assumption.
  • MPIF+-NoD removes the disliked item set (I_u^d) and uses only I_u^uc as negative samples.
Experimental results on Sobazaar are shown in Table 5.

4.5. Runtime Analysis

To investigate the runtime performance of MPIF, this research conducted time-efficiency experiments on all three datasets, recording the total training time of each baseline. For our methods, the runtime is divided into two parts for detailed analysis: (1) the time spent mining disliked items via item popularity, and (2) the time spent on model training.
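The two-part runtime measurement can be done with a small timing wrapper; the phase functions named in the usage comment are placeholders for the actual implementation:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) via a monotonic clock."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# Usage (hypothetical phase functions):
# _, t_mine = timed(mine_disliked_items, interactions, eps_p)
# _, t_train = timed(train_model, train_data)
```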
The experimental results are shown in Table 6, from which the following observations can be made:
  • WRMF requires the longest total execution time among the competing methods, yet is less effective than several pairwise methods, indicating that the pairwise methods are preferable to this point-wise method.
  • MPIF requires a longer execution time than some competing methods in certain situations; however, given the performance improvement it brings, the extra time consumed is worthwhile.

5. Conclusions and Future Work

This work focuses on a key recommendation task in collaborative filtering, where mainstream methods adopt pairwise preference learning. This research proposed MPIF to address the limitations of existing methods: it integrates multiple types of e-commerce implicit feedback (purchase, browsing, add-to-cart) via multi-pairwise preference modeling and exploits item popularity to identify items users do not prefer. Extensive experiments on three real-world datasets against seven state-of-the-art baseline methods validated the effectiveness of MPIF across diverse e-commerce scenarios.
Although the experimental results verify the effectiveness of the proposed MPIF method, several limitations remain. First, the approach partitions the disliked item set using the popularity threshold ϵ_p, which introduces two biases: niche items that a user may actually prefer can be misclassified as disliked because of low interaction counts (N_i < ϵ_p), and popularity is measured only by interaction frequency rather than interaction quality (e.g., browsing duration or cart cancellation), limiting the accuracy of auxiliary information mining. Second, the heuristic multi-pairwise preference assumption overlooks preference differences among auxiliary behaviors (e.g., the priority between view and cart actions) and has not been validated in non-e-commerce scenarios (e.g., video or music recommendation), so its generalization ability remains untested. Third, the method adopts a static user preference assumption and fails to capture the temporal evolution of user interests (e.g., seasonal shifts or promotion effects), which may degrade its performance in long-term recommendation scenarios. Future research will address these limitations to reduce misjudgment of niche preferences, model preferences more faithfully, improve long-term recommendation performance, and broaden the generalization and applicability of the core assumptions.

Author Contributions

Methodology, C.C.; software, C.C.; validation, P.Q. and S.M.; resources, L.L. and M.C.; data curation, C.C.; writing—original draft preparation, C.C.; writing—review and editing, H.W.; visualization, C.C.; supervision, H.W., M.C. and L.L.; project administration, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the Beijing Institute of Graphic Communication (Grant Nos. Eb202306 and Eb202505), the Publishing Science Emerging Interdisciplinary Platform Construction Project of the Beijing Institute of Graphic Communication (Grant No. 04190123001/003), the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (Grant No. SKLNST-2023-1-12), the Beijing Municipal Education Commission and Beijing Natural Science Foundation Co-financing Project (Grant No. KZ202210015019), and the Project of Construction and Support for High-Level Innovative Teams of Beijing Municipal Institutions (Grant No. BPHR20220107).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OCCF: One-Class Collaborative Filtering
BPR: Bayesian Personalized Ranking
MPIF: Multi-Pairwise Ranking with Heterogeneous Implicit Feedback
SGD: Stochastic Gradient Descent
NDCG@5: Normalized Discounted Cumulative Gain at 5
Pre@5: Precision at 5
Rec@5: Recall at 5
AUC: Area Under the Curve

References

  1. Lu, J.; Wu, D.S.; Mao, M.S.; Wang, W.; Zhang, G.Q. Recommender system application developments: A survey. Decis. Support Syst. 2015, 74, 12–32. [Google Scholar] [CrossRef]
  2. Sun, Z.; Guo, G.B.; Zhang, J. Exploiting implicit item relationships for recommender systems. In User Modeling, Adaptation and Personalization, Proceedings of the UMAP 2015, Dublin, Ireland, 29 June–3 July 2015; Proceedings 23; Springer: Berlin/Heidelberg, Germany, 2015; pp. 252–264. [Google Scholar] [CrossRef]
  3. Koren, Y.; Rendle, S.; Bell, R. Advances in collaborative filtering. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2021; pp. 91–142. [Google Scholar] [CrossRef]
  4. Ma, W.; Pan, W.; Ming, Z. SCF: Structured collaborative filtering with heterogeneous implicit feedback. Knowl.-Based Syst. 2022, 258, 109999. [Google Scholar] [CrossRef]
  5. Zeng, W.; Qin, J.; Wei, C. Neural Collaborative Autoencoder for Recommendation With Co-Occurrence Embedding. IEEE Access 2021, 9, 163316–163324. [Google Scholar] [CrossRef]
  6. Núñez-Valdez, E.R.; Quintana, D.; Crespo, R.G.; Isasi, P.; Herrera-Viedma, E. A recommender system based on implicit feedback for selective dissemination of ebooks. Inf. Sci. 2018, 467, 87–98. [Google Scholar] [CrossRef]
  7. Ding, J.T.; Yu, G.H.; He, X.N.; Feng, F.; Li, Y.; Jin, D.P. Sampler design for bayesian personalized ranking by leveraging view data. IEEE Trans. Knowl. Data Eng. 2019, 33, 667–681. [Google Scholar] [CrossRef]
  8. Hu, Y.F.; Koren, Y.; Volinsky, C. Collaborative filtering for implicit feedback datasets. In ICDM ’08: Proceedings of the 2008 Eighth IEEE International Conference on Data Mining; IEEE: Washington, DC, USA, 2008; pp. 263–272. [Google Scholar] [CrossRef]
  9. He, M.K.; Pan, W.K.; Ming, Z. BAR: Behavior-aware recommendation for sequential heterogeneous one-class collaborative filtering. Inf. Sci. 2022, 608, 881–899. [Google Scholar] [CrossRef]
  10. Zeng, L.; Guan, J.W.; Chen, B.L. MSBPR: A multi-pairwise preference and similarity based Bayesian personalized ranking method for recommendation. Knowl.-Based Syst. 2023, 260, 110165. [Google Scholar] [CrossRef]
  11. Yu, R.L.; Liu, Q.; Ye, Y.Y.; Cheng, M.Y.; Chen, E.H.; Ma, J.H. Collaborative list-and-pairwise filtering from implicit feedback. IEEE Trans. Knowl. Data Eng. 2020, 34, 2667–2680. [Google Scholar] [CrossRef]
  12. Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. UAI 2012, 12, 452–461. [Google Scholar] [CrossRef]
  13. Wang, J.; Lin, L.F.; Zhang, H.; Tu, J.Q. Confidence-learning based collaborative filtering with heterogeneous implicit feedbacks. In Proceedings of the APWeb 2016, Suzhou, China, 23–25 September 2016; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2016; pp. 444–455. [Google Scholar] [CrossRef]
  14. Jin, B.; Gao, C.; He, X.; Jin, D.; Li, Y. Multi-behavior recommendation with graph convolutional networks. In Proceedings of the SIGIR ’20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 25–30 July 2020; ACM: New York, NY, USA, 2020; pp. 659–668. [Google Scholar] [CrossRef]
  15. He, T.; Xie, M.; Li, R.; Xu, X.; Yu, J.; Wang, Z.; Hu, L.; Li, H.; Gai, K. An end-to-end multi-objective ensemble ranking framework for video recommendation. In Proceedings of the RecSys 2025, Prague, Czech Republic, 22–26 September 2025; ACM: New York, NY, USA, 2025; pp. 189–198. [Google Scholar] [CrossRef]
  16. Chen, B.; Chen, S.; Yue, Z.; Yan, K.; Yu, C.; Kong, B.; Lei, C.; Zhuo, C.; Li, Z.; Wang, Y. G-UBS: Towards Robust Understanding of Implicit Feedback via Group-Aware User Behavior Simulation. arXiv 2025, arXiv:2508.05709. [Google Scholar] [CrossRef]
  17. Liu, J.R.; Yang, Z.; Li, T.; Wu, D.; Wang, R.Y. SPR: Similarity pairwise ranking for personalized recommendation. Knowl.-Based Syst. 2022, 239, 107828. [Google Scholar] [CrossRef]
  18. Zhang, Q.; Ren, F. Prior-based bayesian pairwise ranking for one-class collaborative filtering. Neurocomputing 2021, 440, 365–374. [Google Scholar] [CrossRef]
  19. Zheng, J.C.; Wang, H.J. FSBPR: A novel approach to improving BPR for recommendation with the fusion of similarity. J. Supercomput. 2024, 80, 12003–12020. [Google Scholar] [CrossRef]
  20. He, X.N.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.D.; Wang, M. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’20); Association for Computing Machinery: New York, NY, USA, 2020; pp. 639–648. [Google Scholar] [CrossRef]
  21. Xia, L.H.; Huang, C.; Xu, Y.; Dai, P.; Zhang, X.Y.; Yang, H.S.; Pei, J.; Bo, L.F. Knowledge-Enhanced Hierarchical Graph Transformer Network for Multi-Behavior Recommendation. In AAAI Conference on Artificial Intelligence. 2021. Available online: https://api.semanticscholar.org/CorpusID:235306149 (accessed on 10 August 2025).
  22. Zheng, Y.; Gao, C.; Li, X.; He, X.N.; Jin, D.P.; Li, Y. Disentangling User Interest and Conformity for Recommendation with Causal Embedding. In Proceedings of the Web Conference 2021; 2021. Available online: https://api.semanticscholar.org/CorpusID:231984619 (accessed on 12 August 2025).
  23. Su, J.; Chen, C.; Lin, Z.; Li, X.; Liu, W.; Zheng, X. Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia; ACM: New York, NY, USA, 2023; pp. 6321–6331. [Google Scholar] [CrossRef]
  24. Fan, C.; Gao, C.; Shi, W.; Gong, Y.; Zhao, Z.; Feng, F. Fine-grained list-wise alignment for generative medication recommendation. In Neural Information Processing Systems (NeurIPS 2025, Vancouver, Canada); NeurIPS Foundation: San Diego, CA, USA, 2025; pp. 2103–2114. [Google Scholar] [CrossRef]
  25. Nguyen, H.T.; Almenningen, T.; Havig, M.; Schistad, H.; Kofod-Petersen, A.; Langseth, H.; Ramampiaro, H. Learning to rank for personalised fashion recommender systems via implicit feedback. In Mining Intelligence and Knowledge Exploration, MIKE 2014, Cork, Ireland, 10–12 December 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 51–61. [Google Scholar] [CrossRef]
  26. Kechinov, M. ECommerce Behavior Data from Multi Category Store. 2019; Kaggle Datasets. Available online: https://github.com/almasfathinirbah/RFM-Analysis-of-eCommerce-Behavior-Data-with-Python (accessed on 6 October 2025).
  27. He, X.N.; Zhang, H.W.; Kan, M.-Y.; Chua, T.-S. Fast matrix factorization for online recommendation with implicit feedback. In ACM International Conference on Research and Development in Information Retrieval (SIGIR 2016); ACM: New York, NY, USA, 2016; pp. 549–558. [Google Scholar] [CrossRef]
Figure 1. Multiple implicit feedback in e-commerce systems.
Figure 2. Item classification.
Figure 3. Recommendation performance of the series pairwise method and the BPR method on the three real-world datasets.
Figure 4. The comparison results of two pairwise methods.
Figure 5. Sensitivity of ϵ p in MPIF+ on three real-world datasets.
Figure 6. Sensitivity of d in MPIF and MPIF+ on three real-world datasets.
Figure 7. Sensitivity of N in SDBPR, MPIF and MPIF+ on three real-world datasets.
Figure 8. Impact of learning rate ( η ) on convergence speed (NDCG@5).
Table 1. The correlation between auxiliary set and target set.
| Datasets | View and Buy | Cart and Buy |
|---|---|---|
| Sobazaar | 10% | 4.9% |
| Retailrocket | 29% | 88% |
| REES46 | 27% | 88% |
Table 2. Statistics of the three real-world datasets.
| Datasets | User | Item | Purchase | View | Cart |
|---|---|---|---|---|---|
| Sobazaar | 4712 | 7015 | 18,267 | 97,010 | 154,132 |
| Retailrocket | 11,719 | 12,025 | 22,457 | 122,561 | 24,963 |
| REES46 | 49,314 | 12,203 | 73,778 | 544,349 | 42,486 |
Table 3. Recommendation performance of all comparison methods on the three real-world datasets.
| Datasets | Method | NDCG@5 | Pre@5 | Rec@5 | AUC |
|---|---|---|---|---|---|
| Sobazaar | BPR | 0.0065 | 0.0032 | 0.0084 | 0.5077 |
| Sobazaar | WRMF | 0.0138 | 0.0055 | 0.0177 | 0.5137 |
| Sobazaar | eALS | 0.0165 | 0.0068 | 0.0230 | 0.5158 |
| Sobazaar | MSBPR | 0.1820 | 0.0590 | 0.2517 | 0.6324 |
| Sobazaar | LightGCN | 0.1988 | 0.0632 | 0.2705 | 0.6476 |
| Sobazaar | SDBPR | 0.2012 | 0.0653 | 0.2823 | 0.6520 |
| Sobazaar | G-UBS | 0.2035 | 0.0667 | 0.2850 | 0.6582 |
| Sobazaar | MPIF | 0.2350 | 0.0753 | 0.3139 | 0.6732 |
| Sobazaar | MPIF+ | 0.2730 | 0.0873 | 0.3643 | 0.6996 |
| Retailrocket | BPR | 0.0137 | 0.0037 | 0.0158 | 0.5091 |
| Retailrocket | WRMF | 0.0110 | 0.0032 | 0.0120 | 0.5079 |
| Retailrocket | eALS | 0.0125 | 0.0040 | 0.0142 | 0.5098 |
| Retailrocket | SDBPR | 0.6417 | 0.1550 | 0.7385 | 0.8766 |
| Retailrocket | G-UBS | 0.6501 | 0.1680 | 0.7723 | 0.8784 |
| Retailrocket | LightGCN | 0.7798 | 0.1876 | 0.8433 | 0.9396 |
| Retailrocket | MSBPR | 0.8156 | 0.1921 | 0.8856 | 0.9494 |
| Retailrocket | MPIF | 0.8434 | 0.1997 | 0.9041 | 0.9624 |
| Retailrocket | MPIF+ | 0.8606 | 0.2016 | 0.9160 | 0.9676 |
| REES46 | BPR | 0.0185 | 0.0062 | 0.0273 | 0.5151 |
| REES46 | WRMF | 0.0347 | 0.0138 | 0.0657 | 0.5342 |
| REES46 | eALS | 0.0378 | 0.0146 | 0.0693 | 0.5364 |
| REES46 | SDBPR | 0.3955 | 0.0968 | 0.4721 | 0.7388 |
| REES46 | MSBPR | 0.4156 | 0.1049 | 0.5039 | 0.7464 |
| REES46 | LightGCN | 0.4188 | 0.1063 | 0.5244 | 0.7600 |
| REES46 | G-UBS | 0.4188 | 0.1082 | 0.5265 | 0.7609 |
| REES46 | MPIF | 0.4216 | 0.1112 | 0.5360 | 0.7735 |
| REES46 | MPIF+ | 0.5605 | 0.1431 | 0.6891 | 0.8500 |
Table 4. Recommendation performance of variants of MPIF on the three real-world datasets.
| Datasets | Method | NDCG@5 | Pre@5 | Rec@5 | AUC |
|---|---|---|---|---|---|
| Sobazaar | MPIF_view | 0.2110 | 0.0705 | 0.2959 | 0.6606 |
| Sobazaar | MPIF_cart | 0.0717 | 0.0246 | 0.0938 | 0.5570 |
| Sobazaar | MPIF | 0.2350 | 0.0753 | 0.3139 | 0.6732 |
| Sobazaar | MPIF+ | 0.2730 | 0.0873 | 0.3643 | 0.6996 |
| Retailrocket | MPIF_view | 0.6500 | 0.1556 | 0.7409 | 0.8787 |
| Retailrocket | MPIF_cart | 0.8144 | 0.1883 | 0.8431 | 0.9328 |
| Retailrocket | MPIF | 0.8434 | 0.1997 | 0.9041 | 0.9624 |
| Retailrocket | MPIF+ | 0.8606 | 0.2016 | 0.9160 | 0.9676 |
| REES46 | MPIF_view | 0.3402 | 0.0962 | 0.4642 | 0.7371 |
| REES46 | MPIF_cart | 0.3169 | 0.0714 | 0.3378 | 0.6741 |
| REES46 | MPIF | 0.4216 | 0.1112 | 0.5360 | 0.7735 |
| REES46 | MPIF+ | 0.5605 | 0.1431 | 0.6891 | 0.8500 |
Table 5. Ablation experiment results on Sobazaar.
| Method | NDCG@5 | Pre@5 | Rec@5 | AUC |
|---|---|---|---|---|
| MPIF+-S | 0.2158 | 0.0684 | 0.2921 | 0.6510 |
| MPIF+-M | 0.2481 | 0.0794 | 0.3312 | 0.6827 |
| MPIF+-NoD | 0.2326 | 0.0740 | 0.3096 | 0.6701 |
| MPIF+ (Full) | 0.2730 | 0.0873 | 0.3643 | 0.6996 |
Table 6. Execution time of MPIF and the comparative methods on the three datasets.
| Method | Sobazaar | Retailrocket | REES46 |
|---|---|---|---|
| BPR | 7 | 12 | 51 |
| CoFiSet | 34 | 43 | 197 |
| GBPR | 60 | 88 | 365 |
| WRMF | 182 | 415 | 2115 |
| eALS | 155 | 519 | 779 |
| SDBPR | 11 | 16 | 72 |
| MSBPR | 13 | 21 | 89 |
| G-UBS | 58 | 85 | 352 |
| MPIF | 14 | 19 | 72 |
| MPIF+ | 3/30 | 4/44 | 23/233 |