Article

DPBD: Disentangling Preferences via Borrowing Duration for Book Recommendation

1 School of Computer Science and Engineering, Central South University, Changsha 410083, China
2 Communist Youth League Committee, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Big Data Cogn. Comput. 2025, 9(9), 222; https://doi.org/10.3390/bdcc9090222
Submission received: 20 June 2025 / Revised: 12 August 2025 / Accepted: 26 August 2025 / Published: 28 August 2025

Abstract

Traditional book recommendation methods predominantly rely on collaborative filtering and context-based approaches. However, existing methods fail to account for the order of users’ book borrowings and the duration for which they hold them, both of which are crucial indicators reflecting users’ book preferences. To address this challenge, we propose a book recommendation framework called DPBD, which disentangles preferences based on borrowing duration, thereby explicitly modeling temporal patterns in library borrowing behaviors. The DPBD model adopts a dual-path neural architecture comprising the following: (1) The item-level path utilizes self-attention networks to encode historical borrowing sequences while incorporating borrowing duration as an adaptive weighting mechanism for attention score refinement. (2) The feature-level path employs gated fusion modules to effectively aggregate multi-source item attributes (e.g., category and title), followed by self-attention networks to model feature transition patterns. The framework subsequently combines both path representations through fully connected layers to generate user preference embeddings for next-book recommendation. Extensive experiments conducted on two real-world university library datasets demonstrate the superior performance of the proposed DPBD model compared with baseline methods. Specifically, DPBD improves HR@1 by 13.67% and 15.75% and NDCG@1 by 14.18% and 12.90% over the strongest baseline on the two datasets, respectively.

1. Introduction

In recent years, as online reading and digital library services have become widespread, recommendation systems have played a critical role in helping users efficiently discover books of interest. Traditional collaborative filtering [1] and content-based [2,3] approaches model long-term preferences statically. They fail to account for the temporal ordering of interactions and the dynamic evolution of user interests. As a result, they struggle to meet readers’ evolving needs at various stages of their reading journey.
In contrast, sequential recommendation (SR) treats user borrowing behaviors as time-ordered event sequences, explicitly capturing temporal patterns in user actions. For example, in a library setting, we might see a user borrow books in this order: “Python Programming Primer” → “Statistical Learning” → “Computer Vision”. The sequential model recognizes this progressive learning pattern and can then predict what specialized books the user might need next. This enables truly personalized recommendations that adapt to changing user interests.
The evolution of sequential recommendation systems has progressed significantly from early Markov chain-based approaches to modern deep learning architectures. Initial research primarily relied on Markov processes to model user behavior sequences, but these methods faced limitations in capturing complex temporal patterns. With the rise in deep neural networks, CNN-based and RNN-based models have emerged as the two dominant approaches for extracting preference information in SR. More recently, attention-based models have gained mainstream popularity due to their exceptional ability to capture long-range dependencies. Representative works include SASRec [4], TiSASRec [5], etc. SASRec [4] leverages self-attention to weigh each item in the user behavior sequence. TiSASRec [5] extends SASRec by incorporating both absolute positional and relative time-interval encodings into the attention mechanism. BERT4Rec [6] adapts the bidirectional Transformer architecture to model user behavior with the full sequence context.
Despite these advances, two key challenges remain. Existing self-attention models compute attention weights solely from query–key similarity, overlooking the time a user spends with each book. In library recommendation scenarios, borrowing duration is regarded as a potent implicit signal of reading engagement intensity [7,8]. Generally, if a user keeps Book A longer than Book B, this tendency may correlate with a greater preference for Book A.
Additionally, existing methods only model item-level transitions, ignoring finer-grained feature-level patterns. For example, a reader might borrow Liu Cixin’s The Three-Body Problem and subsequently borrow The Wandering Earth, demonstrating an author-based transition. Similarly, in the university textbook domain, a student might borrow Advanced Calculus and subsequently borrow Linear Algebra, illustrating a category-based shift within mathematics. Modeling these feature-level transitions better aligns with the evolution of user interests, leading to improved recommendation performance.
To address these issues, we propose DPBD, a book recommendation framework that disentangles user preferences through borrowing duration. Our approach employs dual-path modeling to mine user preferences. In the item-level path, we denoise and normalize borrowing durations using library lending rules and user statistics, and then integrate them as dynamic biases into a self-attention network to learn item transition preferences. In the feature-level path, we first fuse multi-source book attributes (e.g., category and abstract) via a gated fusion mechanism, then apply a self-attention network to the resulting attribute sequence to extract feature evolution patterns. Finally, the outputs of the two paths are concatenated and passed through a fully connected layer to produce a user preference embedding. This embedding is matched against candidate book embeddings via dot-product scoring to generate the next-book recommendation.
In summary, our main contributions include the following:
  • We incorporate library lending policies and personalized user-level normalization to filter out anomalous borrowing duration and dynamically adjust attention weights, thereby enhancing the precision of user interest modeling.
  • We design parallel item-level and feature-level self-attention networks. One is used to capture item transitions, while the other is used to uncover feature evolution, thereby jointly leveraging both item and attribute signals.
  • We evaluate DPBD through comprehensive experiments on two real-world university library datasets, showing outperformance against state-of-the-art baselines across multiple metrics.

2. Related Works

2.1. Book Recommendation

Book recommendation is a specialized subfield of recommender systems that aims to deliver personalized book suggestions to readers. Early book recommendation systems primarily relied on simple association rules and content-based methods that analyzed book attributes and textual content to suggest related titles. For example, Jomsri et al. [9] proposed a library recommendation system based on user borrowing profiles, employing association rule mining to build the model.
With the advent of collaborative filtering [1,10], recommender systems began to incorporate user behavior data [11], improving recommendation accuracy by computing item–item or user–user similarities. Tewari et al. [12] proposed a hybrid framework that merges content filtering, collaborative filtering, and association rule mining for efficient and accurate book recommendations. Ramakrishnan et al. [1] infer explicit ratings from the implicit feedback users give to specific books using various algorithms, thereby increasing the number of entries available in the rating matrix.
Recent advances in deep learning have significantly enhanced book recommendation systems, providing greater intelligence and personalization through more precise modeling of user interests. Building on this, Ding et al. [13] proposed an innovative approach that clusters readers based on their academic profiles using a self-organizing map, effectively integrating diverse feature types to boost recommendation accuracy. Liu [14] extracted semantic features of books using BERT and modeled borrowing behavior with long short-term memory networks. Huang et al. [15] further enhanced matrix factorization by embedding additional feature layers within a deep learning framework to improve rating prediction.
However, the above studies largely overlook the impact of temporal dynamics on user preferences. To bridge this gap, some researchers have incorporated temporal attributes into book recommendations. Zhang et al. [16] proposed a personalized time-series collaborative filtering algorithm that accounts for inter-borrow intervals and book circulation frequencies to compute the temporal distance between books.
Yet, applied research in book recommendation remains relatively scarce. Current book recommendation systems remain constrained by their overreliance on explicit rating matrices, frequent neglect of sequential relationships in reading histories, and persistent inability to model the dynamic evolution of readers’ interests.

2.2. Sequential Recommendation

Sequential recommendation (SR) has emerged as a prominent branch in the field of recommendation systems, focusing on predicting the next item a user might be interested in based on their historical interaction sequences (e.g., clicks and purchases). Unlike traditional recommendation models, which process interactions independently, sequence models explicitly capture temporal dependencies among past events, enabling more fine-grained preference modeling. As a result, sequence-aware approaches typically achieve superior accuracy and have been the focus of extensive research in academia and industry [17,18].
Early sequence recommendation approaches primarily used Markov chains to model dynamic interest transitions. FPMC [19] combines matrix factorization with first-order Markov chains to simultaneously model both persistent user preferences and immediate item-to-item transitions. Fossil [20] extended this framework by combining similarity-based techniques with higher-order Markov chains, thereby modeling more complex sequential evolutions. However, the Markov assumption that the next state depends only on one or a few recent interactions limits the ability to capture transition patterns.
With the recent advancements in deep learning, many SR models are gradually being combined with neural networks. The main models employed include recurrent neural networks (RNNs) [21,22], convolutional neural networks (CNNs) [23,24], and graph neural networks (GNNs) [25,26]. Hidasi et al. [22] applied RNNs to sequential recommendation for the first time, breaking the traditional limitation of sequential recommendation methods that could only handle short sequences. Tang et al. [23] proposed the Caser model, which treats embedding representations as images and uses CNNs to extract sequence patterns. Wu et al. [25] framed session data as a graph and employed GNNs to capture intricate item transitions. Wang et al. [27] further enhanced sequence models by integrating transition signals and temporal encodings, leveraging external knowledge graphs and time kernels.
Attention-based methods have recently emerged as the dominant approach for sequential recommendation, demonstrating superior capability in modeling long-range item dependencies. SASRec [4] uses self-attention to assign adaptive weights to each historical item at every time step, achieving significant results. Building on this, BERT4Rec [6] adapts the bidirectional Transformer from NLP to user behavior sequences, enabling the model to fuse context from both past and future interactions for improved recommendation.
With the rapid development of generative systems and large language models (LLMs) [28], researchers have increasingly employed LLMs in combination with knowledge graphs [29] to enhance recommendation system performance. Liu et al. [30] propose a hybrid recommendation framework that leverages LLM semantic embeddings for long-tail items while retaining traditional collaborative filtering for popular items, effectively combining their complementary strengths. To address the semantic gap between structured knowledge and sequential text, Zhai et al. [31] construct a set of relation templates to transform structured knowledge graphs into knowledge prompts; their method further mitigates knowledge noise by building knowledge trees and applying a knowledge-tree masking mechanism.

3. Proposed Method

Figure 1 illustrates our proposed book recommendation framework DPBD, which jointly captures item-level and feature-level sequential patterns via a dual self-attention layer. First, each book and its associated features are fed into an embedding layer to obtain dense vector representations. Text-based features, such as book titles, are processed with a pre-trained BERT model to obtain context-aware token embeddings. In the item-level path, we introduce a time-aware self-attention mechanism. It incorporates normalized and denoised lending times as dynamic biases, learning fine-grained book transition patterns. In the feature-level path, we first fuse multiple attribute embeddings, such as category and abstract, through a gated fusion module to adaptively select salient features. Subsequently, we employ a dedicated feature-based self-attention layer to extract feature evolution patterns. Finally, the outputs of both paths are concatenated and passed through a fully connected layer to produce the user’s preference vector, which drives the next-book recommendation.

3.1. Problem Definition

The book recommendation system predicts users’ next potential borrowings based on their historical borrowing sequences. Let $U = \{u_1, u_2, \ldots, u_N\}$ denote the set of users and $I = \{i_1, i_2, \ldots, i_M\}$ denote the set of books, where $N$ and $M$ represent the total numbers of users and books. For each user, we define a historical interaction sequence $S = (s_1, s_2, \ldots, s_n)$, where each element $s_t \in I$ corresponds to the book borrowed at time step $t$, ordered chronologically. Each book $i$ is associated with metadata attributes (e.g., category and title). Specifically, we represent the category of book $i$ as $f_i \in F$, where $F$ denotes the set of all possible categories. Given a user borrowing sequence $S$, the goal of book recommendation is to predict the next book $s_{n+1}$ that the user $u$ is most likely to borrow from the item set $I$. Formally, the next-book prediction task is formulated as follows:
$$s_{n+1} = \arg\max_{i \in I} \; p(i \mid S).$$

3.2. Embedding Layer

To accommodate variable-length input sequences, we first normalize the training sequence $S$ into a fixed-length sequence $S' = (s_1, s_2, \ldots, s_L)$, where $L$ denotes the predefined maximum length. Specifically, if a sequence exceeds length $L$, we keep only the most recent $L$ interactions; if it is shorter, we left-pad it with <pad> tokens until its length equals $L$, thus maintaining consistent input dimensions.
The same truncation and padding strategy is applied uniformly to all associated feature sequences. For instance, given an item sequence $S'$, its corresponding feature sequence is similarly normalized to $F' = (f_1, f_2, \ldots, f_L)$, ensuring each $f_i$ aligns with $s_i$.
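To make this preprocessing step concrete, the following minimal Python sketch illustrates the truncation and left-padding described above; the function name and the padding id of 0 are illustrative choices, not taken from the paper.

```python
def pad_or_truncate(seq, max_len=10, pad_id=0):
    """Normalize a borrowing sequence to a fixed length.

    Keeps the most recent `max_len` interactions and left-pads shorter
    sequences with `pad_id` so every input has the same length.
    """
    if len(seq) >= max_len:
        return seq[-max_len:]                          # keep only the most recent interactions
    return [pad_id] * (max_len - len(seq)) + seq       # left-pad to max_len

# Example: item ids of one user's borrowing history
print(pad_or_truncate([12, 7, 33], max_len=5))         # [0, 0, 12, 7, 33]
print(pad_or_truncate(list(range(1, 13)), max_len=5))  # [8, 9, 10, 11, 12]
```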
For discrete categorical features, such as book categories and authors, each feature value is encoded as a one-hot vector $e_i^{(c)}$, which is subsequently projected into a dense embedding space through an embedding layer. The formula for calculating $e_i^{(c)}$ is as follows:
$$e_i^{(c)} = \mathrm{EmbeddingLookup}(f_i) \in \mathbb{R}^{d}.$$
For textual attributes like book titles and abstracts, we first tokenize the text and process it through a pre-trained BERT model to generate context-sensitive token embeddings. These are then mean-pooled into fixed-size vectors and linearly projected to the target dimension. The formula for calculating $e_i^{(t)}$ is as follows:
$$\bar{h}_i = \frac{1}{L}\sum_{j=1}^{L} \mathrm{BERT}(x_1, \ldots, x_L)_j,$$
$$e_i^{(t)} = W \bar{h}_i + b \in \mathbb{R}^{d},$$
where $x_j$ represents the token at position $j$ and $L$ represents the maximum token sequence length.
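As an illustration of this embedding layer, the sketch below uses PyTorch and the Hugging Face transformers library. The checkpoint name `bert-base-chinese`, the assumed number of categories, and the masked mean pooling are our own choices for illustration; the paper only states that a pre-trained BERT is used and that d = 50.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

d = 50  # embedding size (Section 4.4)

# Discrete categorical features: one-hot index -> dense embedding lookup.
category_emb = nn.Embedding(num_embeddings=1000, embedding_dim=d)  # 1000 = assumed #categories
e_c = category_emb(torch.tensor([42]))                             # embedding of category id 42, shape (1, d)

# Textual features: BERT token embeddings -> mean pooling -> linear projection to d.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")     # assumed checkpoint
bert = AutoModel.from_pretrained("bert-base-chinese")
proj = nn.Linear(bert.config.hidden_size, d)

tokens = tokenizer("三体", return_tensors="pt")                     # example book title
with torch.no_grad():
    hidden = bert(**tokens).last_hidden_state                      # (1, num_tokens, 768)
mask = tokens["attention_mask"].unsqueeze(-1)                       # ignore padding positions
h_bar = (hidden * mask).sum(1) / mask.sum(1)                        # mean-pooled title vector
e_t = proj(h_bar)                                                   # (1, d) textual embedding
```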

3.3. Item-Level Sequence Modeling

3.3.1. Borrowing-Duration Modeling

Based on an empirical analysis of user borrowing behavior in an academic library, we found that borrowing duration effectively reflects user interest levels. This study employed a within-user analytical approach, which calculates the proportion of subsequent borrowings for each user in the same subject category following both long (≥14 days) and short (<14 days) borrowing durations. The statistical results demonstrate that the subject continuity rate after long borrowing reaches 70%, which is significantly higher than that following short borrowing periods. This finding provides important insights for optimizing library personalized recommendation systems, as analyzing user borrowing duration patterns enables more accurate identification of their in-depth interest areas.
To model borrowing duration, we classify behaviors into three types, namely blind borrowing, normal borrowing, and overdue borrowing, and make the following assumption:
  • A reader’s interest in an unfamiliar book can be inferred from other readers’ interactions with the same book.
  • Within the normal borrowing window, borrowing duration is positively correlated with interest intensity.
  • Any borrowing duration exceeding the maximum allowable period is considered maliciously overdue (e.g., forgotten returns or system update delays).
Based on library lending rules and historical data, we define two key parameters: the minimum effective borrowing duration $\mu_a$ and the maximum valid borrowing duration $\mu_b$. They are defined as follows:
$$\mu_a = \frac{1}{N_a}\sum_{u \in U_a} t_{u,a},$$
$$\mu_b = \mu_{b0}\,(1 + R),$$
where $U_a$ represents the set of users who borrowed book $a$, $N_a$ represents the total number of users who borrowed book $a$, $t_{u,a}$ represents user $u$'s borrowing duration for book $a$, $\mu_{b0}$ represents the maximum single borrowing duration, and $R$ represents the number of allowed renewals.
Based on the above assumptions, we compute each user’s relative borrowing duration from their actual loan time. If this duration falls below the average borrowing duration $\mu_a$ of all users who have borrowed the same book, it is classified as blind borrowing; if it exceeds the maximum allowable time $\mu_b$, it is treated as noise. The adjusted borrowing duration $t'_i$ for the $i$-th book of each user is calculated as follows:
$$t'_i = \begin{cases} 0, & t_i \le \mu_a, \\ t_i, & \mu_a < t_i < \mu_b, \\ 0, & t_i \ge \mu_b. \end{cases}$$
Reading speeds vary greatly among users; some academic readers may spend several weeks on a single monograph, while others might breeze through multiple novels in just a week. Directly comparing absolute borrowing durations thus overlooks individual variability. To account for personal reading habits, we normalize each user’s borrowing duration by their own average loan time. Specifically, we take the ratio of a user’s adjusted borrowing duration to their mean borrowing duration, yielding the personalized borrowing duration $t''_i$. The average borrowing duration $\bar{t}_u$ and the personalized borrowing duration $t''_i$ of each user are calculated as follows:
$$\bar{t}_u = \frac{1}{N_u}\sum_{i=1}^{N_u} t'_i,$$
$$t''_i = \frac{t'_i}{\bar{t}_u},$$
where $t'_i$ represents user $u$'s adjusted borrowing duration for the $i$-th book, $N_u$ represents the user's total number of borrowings, and $t''_i$ represents user $u$'s personalized borrowing duration for the $i$-th book.
To prevent extreme values from skewing the results, we apply min–max normalization to transform all borrowing durations into the unit interval $[0, 1]$. The formula for normalization is as follows:
$$\mathit{TimeScore}_i = \frac{t''_i - \min(t'')}{\max(t'') - \min(t'')},$$
where $\min(t'')$ and $\max(t'')$ represent the minimum and maximum values in the user's personalized borrowing duration records.
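The whole borrowing-duration pipeline (thresholding with μ_a and μ_b, per-user normalization, then min–max scaling) can be sketched as follows. Function and variable names are ours, and μ_b0 = 30 days with R = 2 renewals are assumed example policy values, not settings from the paper.

```python
import numpy as np

def time_scores(durations, mu_a_per_book, mu_b0=30, R=2):
    """Compute normalized TimeScores for one user's borrowing durations.

    durations     : loan times (days) of the user's borrowed books
    mu_a_per_book : per-book average duration over all users who borrowed it
    mu_b0, R      : max single loan period and allowed renewals (assumed policy)
    """
    durations = np.asarray(durations, dtype=float)
    mu_a = np.asarray(mu_a_per_book, dtype=float)
    mu_b = mu_b0 * (1 + R)

    # Thresholding: blind borrowing (<= mu_a) and overdue noise (>= mu_b) are zeroed.
    t_adj = np.where((durations > mu_a) & (durations < mu_b), durations, 0.0)

    # Personalized normalization by the user's own mean adjusted duration.
    t_pers = t_adj / t_adj.mean() if t_adj.mean() > 0 else t_adj

    # Min-max scaling into [0, 1].
    rng = t_pers.max() - t_pers.min()
    return (t_pers - t_pers.min()) / rng if rng > 0 else np.zeros_like(t_pers)

# Example: four loans with per-book averages of 10 days each.
print(time_scores([5, 20, 40, 95], mu_a_per_book=[10, 10, 10, 10]))  # [0.  0.5 1.  0. ]
```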
Through the above steps, we simultaneously satisfy the library’s lending policy constraints and accommodate individual variations in reading pace, ultimately producing a unified, high-confidence time score. By using this score as a dynamic bias in the self-attention mechanism, we inject borrowing-duration semantics into sequence modeling, thereby more comprehensively capturing users’ borrowing preferences.

3.3.2. Item-Level Self-Attention Layer

The item-level self-attention layer takes borrowing duration into account and is composed of stacked self-attention blocks. Given a user, we can obtain an item sequence S and its embedding representations E s via the embedding layer.
It is important to note that a self-attention model does not inherently encode the ordering of elements in the input sequence [32]. Therefore, to enable the model to capture relative or absolute positional information, we incorporate position embeddings.
In conventional approaches, position embeddings are incorporated directly into item embeddings, enabling each representation to capture both semantic information and the positional context. However, prior work has shown that this early fusion can lead to information intrusion [33], where positional and semantic signals interfere. To mitigate this, we employ a non-intrusive fusion strategy. Concretely, we feed pure item embeddings as the values in the self-attention, while the query and key are constructed by fusing item and position embeddings. It ensures that attention scores account for both semantic and positional information, while the final aggregated representation depends exclusively on the pure item embeddings, preventing any contamination from position embeddings. The system of equations is formally defined as follows:
$$Q = W^{Q} E_s + P,$$
$$K = W^{K} E_s + P,$$
$$V = W^{V} E_s,$$
where $W^{Q}$, $W^{K}$, and $W^{V}$ are learnable projection matrices; $E_s$ denotes the book (item) embedding matrix; and $P$ denotes the learnable position embedding matrix.
While the traditional self-attention mechanism can capture inter-item relationships, it fails to explicitly utilize the crucial temporal information contained in borrowing durations. To address this limitation, we introduce a normalized borrowing duration score (TimeScore) as a dynamic bias term into the attention computation. The calculation formula for item-based preferences is as follows:
$$\mathit{Score} = \mathit{AttentionScore} + \mathit{TimeScore},$$
$$\mathit{AttentionScore} = \frac{Q K^{T}}{\sqrt{d}},$$
$$H = \mathrm{softmax}(\mathit{Score})\, V.$$
The self-attention layer remains fundamentally a linear model, limiting its capacity to represent complex interactions. To enhance the model’s ability to capture non-linear relationships and account for interactions across latent dimensions, we apply a two-layer feed-forward network (FFN) to H. The formula is given as follows:
$$O_b = \mathrm{FFN}(H) = \mathrm{ReLU}(H W_1 + b_1) W_2 + b_2,$$
where $W_*$ and $b_*$ are model parameters.
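A condensed, single-head PyTorch sketch of the block described so far (position-fused queries and keys, the TimeScore bias, and the feed-forward network) might look like the following. Causal masking, multi-head attention, stacking, residual connections, and dropout are omitted for brevity, and all class and argument names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareSelfAttention(nn.Module):
    """Single-head self-attention with a borrowing-duration bias (sketch)."""

    def __init__(self, d, max_len):
        super().__init__()
        self.W_q = nn.Linear(d, d, bias=False)
        self.W_k = nn.Linear(d, d, bias=False)
        self.W_v = nn.Linear(d, d, bias=False)
        self.pos = nn.Embedding(max_len, d)          # learnable position embeddings
        self.ffn = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.d = d

    def forward(self, E_s, time_score):
        # E_s: (B, L, d) item embeddings; time_score: (B, L) normalized durations
        B, L, _ = E_s.shape
        P = self.pos(torch.arange(L, device=E_s.device))   # (L, d)
        Q = self.W_q(E_s) + P                               # positions fused into Q and K only
        K = self.W_k(E_s) + P
        V = self.W_v(E_s)                                   # values stay "pure" item embeddings
        attn = Q @ K.transpose(1, 2) / self.d ** 0.5        # (B, L, L) similarity scores
        attn = attn + time_score.unsqueeze(1)               # TimeScore as a bias on key positions
        H = F.softmax(attn, dim=-1) @ V                     # (B, L, d)
        return self.ffn(H)                                  # O_b

# Usage: o_b = TimeAwareSelfAttention(d=50, max_len=10)(E_s, time_score)
```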
After the first self-attention block (i.e., a self-attention layer and a feed-forward network), each $H_i$ has effectively aggregated information from all prior item embeddings. To capture richer transition patterns, we employ stacked self-attention blocks. Formally, for block index $p$ ($p > 1$), we define the model as follows:
$$O_b^{(p)} = \mathrm{SA}(O_b^{(p-1)}),$$
$$O_b^{(1)} = H,$$
where $\mathrm{SA}(\cdot)$ denotes the self-attention block.
Layer normalization stabilizes the input distribution and speeds up convergence [34], while residual connections combined with dropout mitigate overfitting and vanishing gradients [35]. For each layer $g$ (i.e., the self-attention layer or the feed-forward network) in the block, we apply layer normalization to the input $x$ before feeding it into $g$, apply dropout to $g$'s output, and add the input $x$ to the final output. The formula is defined as follows:
$$g(x) = x + \mathrm{Dropout}(g(\mathrm{LayerNorm}(x))),$$
$$\mathrm{LayerNorm}(x) = \alpha \odot \frac{x - \mu}{\sqrt{\sigma^{2} + \varepsilon}} + \beta,$$
where $\mu$ and $\sigma$ are the mean and standard deviation of $x$ over its feature dimensions, $\alpha$ and $\beta$ are learnable scaling and shifting parameters, and $\odot$ denotes element-wise multiplication.
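A minimal PyTorch sketch of this sublayer wrapper is shown below; the class name and the dropout rate of 0.2 are our own illustrative choices.

```python
import torch.nn as nn

class SublayerConnection(nn.Module):
    """Apply pre-LayerNorm, the wrapped layer, dropout, then a residual add."""

    def __init__(self, d, layer, p_drop=0.2):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.layer = layer
        self.drop = nn.Dropout(p_drop)

    def forward(self, x):
        # g(x) = x + Dropout(g(LayerNorm(x)))
        return x + self.drop(self.layer(self.norm(x)))
```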

3.4. Feature-Level Sequence Modeling

3.4.1. Gated Fusion Layer

In book recommendation scenarios, the heterogeneous nature of book features (e.g., title, author, and abstract) makes it challenging to determine which characteristics influence user preferences. To effectively combine different attribute features, we adopt a gated fusion mechanism that enables dynamic integration. For a given book $i$, its attributes are represented as $A_i = \{e_i^{(c)}, e_i^{(t)}\}$, where $e_i^{(c)}$ denotes the embedding vector of the category and $e_i^{(t)}$ represents the textual feature embedding of book $i$. The fused features are computed as follows:
$$z_i = \sigma(W_z A_i + b_z),$$
$$\tilde{A}_i = \tanh(W_a A_i + b_a),$$
$$f_i = z_i \odot \tilde{A}_i + (1 - z_i) \odot A_i,$$
where $W_*$ and $b_*$ are model parameters and $\sigma$ is the sigmoid function.
In cases where a book has only a single attribute (e.g., category), its fused representation simply reduces to $e_i^{(c)}$. The gated fusion layer dynamically adjusts the weights between transformed and original features, enabling the adaptive highlighting of salient information. It effectively models intricate dependencies among features by employing nonlinear transformations.
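A possible PyTorch realization of this gated fusion layer is sketched below. It assumes the attribute embeddings are concatenated into a single vector before gating, which the paper does not spell out, and the class name is ours.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Gated fusion of heterogeneous attribute embeddings (sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)       # produces the gate z_i
        self.transform = nn.Linear(dim, dim)  # produces the transformed features A~_i

    def forward(self, A):
        z = torch.sigmoid(self.gate(A))            # how much transformed signal to keep
        A_tilde = torch.tanh(self.transform(A))    # non-linear transform of the raw features
        return z * A_tilde + (1 - z) * A           # f_i: adaptive mix of transformed / original

# Usage with two 50-d attribute embeddings e_c and e_t:
# f_i = GatedFusion(dim=100)(torch.cat([e_c, e_t], dim=-1))
```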

3.4.2. Feature-Level Self-Attention Layer

The objective of the feature-based self-attention layer is to capture informative transformation patterns at the feature level. This design adopts a purely feature-driven interaction approach, deliberately discarding temporal and positional information commonly used in traditional sequence modeling, to focus exclusively on uncovering intrinsic relationships between features.
Given a specific user, we can obtain the feature sequence F . After gating fusion, we obtain the vector representation E f of the initial feature sequence. Similar to the item-level processing that produces O b , the feature-level path generates its output O f using a self-attention architecture to capture abstract feature transitions. This pathway focuses on modeling high-level feature evolution, such as categorical shifts from “science fiction” to “AI popular science” or changes in author preferences, rather than temporal patterns. By deliberately excluding temporal signals, the architecture maintains a pure focus on learning intrinsic feature relationships. The computational process for O f proceeds as follows:
$$Q = W^{Q} E_f, \quad K = W^{K} E_f, \quad V = W^{V} E_f,$$
$$\hat{F} = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d}}\right) V,$$
$$O_f = \mathrm{FFN}(\hat{F}) = \mathrm{ReLU}(\hat{F} W_1 + b_1) W_2 + b_2,$$
where $W^{Q}$, $W^{K}$, and $W^{V}$ are learnable projection matrices; $E_f$ denotes the feature embedding; $\hat{F}$ denotes the attention output; and $W_*$ and $b_*$ are model parameters.
To capture more complex transition patterns, we stack the self-attention blocks. The output of the feature-based self-attention block at block index q ( q > 1 ) is defined as follows:
$$O_f^{(q)} = \mathrm{SA}(O_f^{(q-1)}),$$
$$O_f^{(1)} = \hat{F},$$
where $\mathrm{SA}(\cdot)$ denotes the self-attention block.

3.5. Preference Fusion and Prediction

This section presents the fusion process of item-level and feature-level preferences, followed by the prediction of next-book selection based on the integrated preferences.
For joint modeling of item and category transition patterns, we first concatenate the respective outputs of the item-level O b and feature-level O f self-attention layers, then project the combined representation through a fully connected layer. The formula is as follows:
$$O_{bf} = [O_b; O_f]\, W_{bf} + b_{bf},$$
where $[\cdot\,;\cdot]$ denotes concatenation (horizontal stacking), $W_{bf} \in \mathbb{R}^{2d \times d}$, and $b_{bf} \in \mathbb{R}^{d}$.
In sequential recommendation, predicting the next book can be formulated as a classification task over the entire item set. Given a user preference representation $O_{bf} \in \mathbb{R}^{d}$, we compute the probability of each candidate book by taking its dot product with the item embedding matrix $M \in \mathbb{R}^{|I| \times d}$. The formula is as follows:
$$\hat{p} = \mathrm{softmax}(O_{bf} M^{T}),$$
where $\hat{p}_i$ is the predicted probability that the user will borrow book $i$ next.
To optimize the model, we use the cross-entropy loss function to update the parameters. The loss function L for the recommendation task is defined as follows:
$$\mathcal{L} = -\sum_{i=1}^{M} \left[\, p_i \log(\hat{p}_i) + (1 - p_i)\log(1 - \hat{p}_i) \,\right],$$
where $p_i$ represents the ground truth and $\hat{p}_i$ represents the predicted probability from the model.
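Putting the fusion and prediction step together, a minimal sketch is given below. It scores every candidate book by a dot product with the item embedding matrix and, for simplicity, computes the loss with standard softmax cross-entropy over the item set rather than the per-item binary form above; all names and the item count are our own illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, num_items = 50, 37384                 # embedding size and |I| (Univ. A as an example)
fuse = nn.Linear(2 * d, d)               # W_bf, b_bf
item_emb = nn.Embedding(num_items, d)    # item embedding matrix M

def predict_and_loss(o_b, o_f, target):
    """o_b, o_f: (B, d) item- and feature-level user representations;
    target: (B,) index of the next borrowed book."""
    o_bf = fuse(torch.cat([o_b, o_f], dim=-1))    # (B, d) fused preference vector
    logits = o_bf @ item_emb.weight.T             # dot product with every candidate book
    loss = F.cross_entropy(logits, target)        # softmax cross-entropy over the item set
    return F.softmax(logits, dim=-1), loss
```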

4. Experiment

To evaluate the effectiveness and efficiency of DPBD, we conducted experiments on two real-world datasets collected from a university library system, addressing the following research questions:
  • RQ1: Can DPBD outperform the current mainstream recommendation algorithms on baseline tasks?
  • RQ2: Do the innovative components of DPBD contribute to its performance?
  • RQ3: How do hyperparameters affect DPBD’s performance?

4.1. Datasets

This study constructs experimental datasets based on real circulation records from two leading university libraries (University A and University B) during the 2021–2022 academic year, following rigorous data screening and processing protocols. The detailed specifications of the two datasets are summarized in Table 1. University A (Univ. A) comprises 81,171 circulation records from 4343 users covering 37,384 distinct books, while University B (Univ. B) includes 124,674 transactions from 6042 users involving 46,374 unique titles.
To mitigate the data sparsity issue, we employ a sliding window strategy to extract subsequences from each user’s borrowing history. Formally, given K users with individual behavior sequences, the total number of generated subsequences T can be expressed as
$$T = \sum_{i=1}^{K} \max(n_i - L + 1,\; 1),$$
where $n_i$ denotes the length of the $i$-th user's sequence and $L$ is the sliding window length.
For each user sequence with $n_i \ge L$, the number of subsequences that can be generated is $n_i - L + 1$. For shorter sequences ($n_i < L$), we generate a single subsequence by zero-padding the original sequence to length $L$.
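The sliding-window subsequence extraction can be sketched as follows; the function name and the pad id of 0 are ours.

```python
def sliding_subsequences(seq, L=10, pad_id=0):
    """Generate max(n - L + 1, 1) training subsequences of length L from one user."""
    n = len(seq)
    if n < L:
        return [[pad_id] * (L - n) + seq]                 # single zero-padded subsequence
    return [seq[i:i + L] for i in range(n - L + 1)]       # all length-L windows

# A user with 12 borrowings and L = 10 yields 12 - 10 + 1 = 3 subsequences.
print(len(sliding_subsequences(list(range(1, 13)), L=10)))  # 3
```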
Following the methodology of prior work [4], we partition the dataset by using the last item in each sequence as the test sample, the penultimate item as the validation sample, and all preceding items as the training set.

4.2. Baseline Methods

To demonstrate the effectiveness of our proposed model, we compare DPBD with three types of baseline models:
  • Content-Based Recommendation Model (CBR) [36]: It recommends items similar to a user’s historical preferences by analyzing item features (e.g., book authors, categories, and abstracts) to calculate similarity.
  • Item-Based Collaborative Filtering (ItemCF) [37]: It computes co-occurrence similarity between items under the assumption that users who liked item A will also like item B, making it adept at uncovering item–item relationships.
  • User-Based Collaborative Filtering (UserCF) [38]: It identifies users with similar preferences and recommends items liked by those peers, based on the hypothesis that similar users enjoy similar items.
  • GRU4Rec [22]: It pioneered the application of gated recurrent units (GRUs) in sequential recommendation tasks. It models temporal dependencies in user behavior sequences through GRU networks, effectively capturing short-term interest dynamics.
  • Caser [23]: It introduces convolutional neural networks for sequence modeling, using horizontal convolutions to extract item-specific features and vertical convolutions to capture cross-item transition patterns.
  • SR-GNN [25]: It represents user histories as directed item graphs and employs graph neural networks to learn complex transition relationships, followed by an attention mechanism to generate recommendations.
  • SASRec [4]: It employs a unidirectional Transformer architecture to model user behavior sequences, capturing long-term dependencies through self-attention mechanisms while preserving sequential order via positional encoding.
  • BERT4Rec [6]: It incorporates a bidirectional Transformer architecture, adopting BERT’s masked language modeling approach. It also learns richer sequence representations by randomly masking items in sequences and predicting the masked items.
  • TiSASRec [5]: It builds on SASRec by embedding both absolute positional cues and relative time-interval information directly into its attention mechanism.

4.3. Metrics for Evaluation

When evaluating book recommendation systems, employing a methodology that ensures both validity and fairness is critical. To this end, we adopt a full-ranking evaluation protocol, which ranks every title in the library for each test case. This avoids the biases inherent in partial evaluations, which consider only a sampled subset of items, and provides a more reliable measure of next-book prediction performance.
Book recommendation systems commonly use the hit rate (HR) and normalized discounted cumulative gain (NDCG) to measure recommendation performance. The HR provides a straightforward and intuitive measure of how well the recommended book lists align with users’ actual interests. The formula is expressed as follows:
$$\mathrm{HR} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{hits}(i),$$
where $N$ represents the number of test instances and $\mathrm{hits}(i)$ indicates whether the recommendation list for the $i$-th user includes the book they actually borrowed.
The NDCG metric quantifies the ranking quality of book recommendation lists, assigning higher scores to relevant books placed at higher positions. In practical evaluation, NDCG accounts for hit positions across all recommendation scenarios and user sessions. A higher NDCG value indicates that the books preferred by users are ranked closer to the top. The formula is expressed as
$$\mathrm{NDCG} = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{\log_{2}(r_i + 1)},$$
where $r_i$ denotes the rank position at which the book actually borrowed by the $i$-th user appears in the recommendation list.
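For reference, HR@k and NDCG@k under the full-ranking protocol can be computed as in the sketch below; the function name is ours and a base-2 logarithm is assumed for NDCG.

```python
import numpy as np

def hr_ndcg_at_k(scores, targets, k=10):
    """scores: (N, |I|) predicted scores; targets: (N,) index of the true next book."""
    # 1-based rank of the true book: number of items scoring at least as high as it.
    ranks = (scores >= scores[np.arange(len(targets)), targets][:, None]).sum(1)
    hits = ranks <= k
    hr = hits.mean()                                            # fraction of users with a top-k hit
    ndcg = np.where(hits, 1.0 / np.log2(ranks + 1), 0.0).mean()  # position-discounted gain
    return hr, ndcg

# Toy check: 3 users, 5 books each; true books ranked 1st, 3rd and 5th.
scores = np.array([[5, 1, 1, 1, 1], [4, 5, 3, 2, 1], [5, 4, 3, 2, 1]], dtype=float)
print(hr_ndcg_at_k(scores, np.array([0, 2, 4]), k=3))           # HR@3 ≈ 0.667, NDCG@3 = 0.5
```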

4.4. Experimental Setup

The baseline models were implemented using either the authors’ original code or established public implementations. Combining reported configurations with each model’s performance on validation data, we identified the optimal hyperparameters. This study employs two widely adopted evaluation metrics: the HR and NDCG. For our experiments, we selected k = 1, 5, and 10 to demonstrate the results of HR@k and NDCG@k under different ranking cutoffs. The embedding size was fixed at 50 for all models, with a consistent batch size of 64. The maximum sequence length L was fixed at 10 across both datasets to ensure consistent comparisons. Our model implementation is publicly available at the following link: https://github.com/CSULP/DPBD, accessed on 23 July 2025.

5. Results and Discussion

This section discusses the proposed model’s performance using multiple metrics while examining the relative contributions of its architectural components and hyperparameter choices.

5.1. Overall Performance (RQ1)

We compare the performance of all baselines with DPBD. Table 2 presents the experimental results on the two real-world datasets, University A and University B, from which several key observations can be made.
Traditional methods, such as CBR, ItemCF, and UserCF, exhibit the weakest performance across all metrics. This outcome highlights their fundamental limitation: these models are built on static feature analysis or simple co-occurrence statistics, which fail to capture the evolving nature of user preferences over time. Such static modeling is inherently insufficient for dynamic recommendation scenarios, where users’ interests shift based on recent behaviors and temporal contexts.
While sequential models such as GRU4Rec and Caser show clear improvements over traditional baselines by considering sequential patterns, they still lag behind Transformer-based methods. The superior performance of SASRec and BERT4Rec thus validates the advantage of attention-based architectures in sequential recommendation tasks. However, it is worth noting that although BERT4Rec employs a bidirectional Transformer architecture, its performance on both datasets is slightly inferior to that of unidirectional SASRec. We attribute this to the relatively short borrowing sequences in the datasets, which limit the benefits of bidirectional context modeling. For shorter sequences, the benefit of the bidirectional context in prediction may remain underutilized, whereas the unidirectional architecture of SASRec more naturally maintains temporal ordering.
TiSASRec builds upon SASRec by embedding absolute positional and relative time-interval information into the attention mechanism. On the University A dataset, it improves HR@1 by approximately 5% and NDCG@1 by about 6%, further validating the importance of temporal signals in capturing reading depth and user preferences.
By combining borrowing-duration modeling and feature-sequence modeling, DPBD achieves the best performance in our experiments. Specifically, DPBD outperforms all baseline models on all evaluation metrics across both datasets. Compared to the strongest baseline TiSASRec, DPBD attains average improvements of 14.6%, 12.5%, 7.82%, 13.5%, 10%, and 7.3% across the six evaluation metrics on two datasets.

5.2. Ablation Studies (RQ2)

We compare DPBD against four simplified variants to assess the contribution of each key component:
  • w/o Time (WT): It removes borrowing-time modeling and uses only the original attention scores.
  • w/o Fusion (WGF): It removes the gated fusion layer in feature-level sequence modeling, replacing it with a simple average over all features.
  • w/o Feature (WF): It removes feature-level sequence modeling and its associated fully connected layer, retaining only item-level sequence modeling.
  • w/o Both (WB): It removes both borrowing-time modeling and feature-level sequence modeling, reducing the model to SASRec.
Table 3 reports our model’s performance on two datasets, revealing that each key module makes a substantial contribution.
When we remove borrowing duration (WT variant), performance declines, providing compelling evidence for the critical role of borrowing duration in modeling reading engagement. This performance decline stems from two essential functions of temporal signals in our framework. First, we establish a borrowing duration model by integrating library lending policies with personalized user-level normalization to filter out anomalous borrowing records. Second, we integrate borrowing duration as dynamic biases within attention weights, allowing the model to better distinguish between shallow browsing and in-depth reading.
Omitting adaptive feature fusion (WGF variant) incurs a slight drop, indicating that dynamically weighting book attributes refines user preference modeling. When average weighting is applied, the model loses the ability to differentiate between category-level and author-specific user preferences. Simple feature averaging obscures importance distinctions among metadata elements, while the proposed gating mechanism dynamically adjusts feature weights to enhance the most discriminative combinations.
Excluding feature-level temporal modeling (WF variant) produces a larger decrease, underscoring the importance of capturing fine-grained shifts in user interests, such as category transitions. Item-level modeling can only learn explicit borrowing correlations (e.g., if a user borrowed book A, they often borrow book B next) but fails to capture potential associations within the same category. Feature-level modeling uncovers the rationale behind borrowing behavior by tracking attribute evolution (categories/authors/keywords), revealing the implicit logic of user preferences.
Finally, removing borrowing-time modeling and feature-level sequence modeling (WB variant) lowers HR@10 and NDCG@10 by 12% and 11% on the University A dataset. These results confirm that each module independently boosts performance and that their combination yields the best overall results.

5.3. Hyperparameter Study (RQ3)

Impact of embedding size d: Figure 2 shows our model’s performance on both datasets under different embedding dimensions d. When d is small, the limited capacity of each embedding vector restricts the model’s ability to capture fine-grained item information, leading to suboptimal performance. As d increases, performance improves, indicating that a higher-dimensional embedding can encode richer semantic information. However, when d > 60 , the model’s performance degrades, suggesting that overly high dimensionality introduces parameter redundancy that promotes noise fitting and potential overfitting.
Impact of sequence length L: Figure 3 illustrates how different sequence lengths affect our model on both datasets. When the sequence length is very short, the model can only utilize a limited history of interactions, making it difficult to capture the long-term evolution of user preferences and resulting in suboptimal recommendations. As the sequence length grows, performance steadily improves and peaks at 10. However, increasing the sequence length significantly reduces the number of available training samples, as fewer users have extensive interaction histories. The resulting reduction in training data weakens the model’s generalization ability, ultimately leading to a decline in performance.
Impact of the number of attention blocks (p and q): Table 4 reports model performance on both datasets under various settings of item-level self-attention blocks p and feature-level self-attention blocks q. On the University A dataset, the best results are obtained when p = 1 and q = 1 , while on the University B dataset, performance peaks when p = 1 and q = 2 . The discrepancy likely arises from differences in feature complexity between the datasets. The University B dataset contains more diverse and descriptive item features (e.g., abstracts), requiring deeper feature-level modeling to capture complex inter-feature relationships. By contrast, the feature space in the University A dataset exhibits simplicity and homogeneity, where a lightweight feature-attention structure provides adequate modeling capacity while maintaining parameter efficiency. Notably, performance does not consistently improve when p > 1 , indicating that single-layer item-level attention suffices for modeling sequential transitions in both datasets.

6. Conclusions

To overcome the shortcomings of traditional book recommenders in modeling dynamic preference evolution, we propose DPBD, a book recommendation framework that disentangles preferences via borrowing duration. By injecting normalized borrowing durations as dynamic biases into item-level self-attention, the model distinguishes between shallow browsing and deep reading, thereby enhancing the accuracy of preference modeling. In the feature-level path, a gated fusion layer adaptively selects among attributes such as category, author, and title, followed by self-attention to capture their temporal evolution patterns. Finally, the item- and feature-level representations are concatenated and passed through a fully connected layer to form a unified user preference vector. Experiments on two real university library datasets show that DPBD outperforms state-of-the-art baseline models.
In future work, on the one hand, we will integrate additional rich reading behavior signals (such as likes and comments) into the preference modeling framework. On the other hand, we will explore multi-view contrastive learning to enhance the model’s generalization ability in scenarios characterized by extreme sparsity.

Author Contributions

Conceptualization, Z.L. and L.C.; methodology, Z.L.; software, L.C.; validation, L.C.; formal analysis, L.C. and Y.Q.; investigation, Z.L.; resources, Z.L.; data curation, Y.Q.; writing—original draft preparation, L.C.; writing—review and editing, Z.L., Y.Q. and F.L.; visualization, L.C.; supervision, F.L.; project administration, Z.L. and F.L.; funding acquisition, Z.L. and F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by NSSF (22BTQ033).

Informed Consent Statement

This study does not involve humans or animals.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ramakrishnan, G.; Saicharan, V.; Chandrasekaran, K.; Rathnamma, M.; Ramana, V.V. Collaborative filtering for book recommendation system. In Soft Computing for Problem Solving: SocProS 2018; Springer: Berlin/Heidelberg, Germany, 2020; Volume 2, pp. 325–338. [Google Scholar]
  2. Mathew, P.; Kuriakose, B.; Hegde, V. Book Recommendation System through content based and collaborative filtering method. In Proceedings of the 2016 International Conference on Data Mining and Advanced Computing (SAPIENCE), Ernakulam, India, 16–18 March 2016; IEEE: New York, NY, USA, 2016; pp. 47–52. [Google Scholar]
  3. Mooney, R.J.; Roy, L. Content-based book recommending using learning for text categorization. In Proceedings of the Fifth ACM Conference on Digital Libraries, San Antonio, TX, USA, 2–7 June 2000; pp. 195–204. [Google Scholar]
  4. Kang, W.C.; McAuley, J. Self-attentive sequential recommendation. In Proceedings of the 2018 IEEE International Conference on Data Mining, Singapore, 17–20 November 2018; pp. 197–206. [Google Scholar]
  5. Li, J.; Wang, Y.; McAuley, J. Time interval aware self-attention for sequential recommendation. In Proceedings of the 13th International Conference on Web Search and Data Mining, Houston, TX, USA, 3–7 February 2020; pp. 322–330. [Google Scholar]
  6. Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; Jiang, P. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 1441–1450. [Google Scholar]
  7. Vaz, P.C.; Ribeiro, R.; de Matos, D.M. Understanding temporal dynamics of ratings in the book recommendation scenario. In Proceedings of the 2013 International Conference on Information Systems and Design of Communication, Lisboa, Portugal, 11–12 July 2013; pp. 11–15. [Google Scholar]
  8. Jing, M.; Yu, Y. CF Recommending Model Based on Borrowing-time Scores and Its Application. Libr. Inf. Serv. 2012, 56, 117–120. [Google Scholar]
  9. Jomsri, P. Book recommendation system for digital library based on user profiles by using association rule. In Proceedings of the Fourth edition of the International Conference on the Innovative Computing Technology (INTECH 2014), Luton, UK, 13–15 August 2014; IEEE: New York, NY, USA, 2014; pp. 130–134. [Google Scholar]
  10. Goldberg, D.; Nichols, D.; Oki, B.M.; Terry, D. Using collaborative filtering to weave an information tapestry. Commun. ACM 1992, 35, 61–70. [Google Scholar] [CrossRef]
  11. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, Hong Kong, China, 1–5 May 2001; pp. 285–295. [Google Scholar]
  12. Tewari, A.S.; Kumar, A.; Barman, A.G. Book recommendation system based on combine features of content based filtering, collaborative filtering and association rule mining. In Proceedings of the 2014 IEEE International Advance Computing Conference (IACC), Gurgaon, India, 21–22 February 2014; IEEE: New York, NY, USA, 2014; pp. 500–503. [Google Scholar]
  13. Ding, Y.; Zhang, Y.; Fu, Q.; Zhou, J.; Huang, Z. Precise Book Recommendation Based on SOM Neural Network and Ranking Factorization Machine. Inf. Stud. Theory Appl. 2019, 42, 133–138+170. [Google Scholar]
  14. Liu, Y. Deep Learning Recommendation Algorithm Based on Reader Preference Analysis. J. Southwest Univ. Nat. Sci. Ed. 2023, 45, 201–209. [Google Scholar]
  15. Huang, Y.; Zhang, W.; Zhang, S. Personalized Recommendation of Online Book Resources Based on Deep Distance Decomposition. Inf. Sci. 2021, 39, 76–81. [Google Scholar]
  16. Zhang, F. A personalized time-sequence-based book recommendation algorithm for digital libraries. IEEE Access 2016, 4, 2714–2720. [Google Scholar] [CrossRef]
  17. Fang, H.; Zhang, D.; Shu, Y.; Guo, G. Deep learning for sequential recommendation: Algorithms, influential factors, and evaluations. ACM Trans. Inf. Syst. (TOIS) 2020, 39, 1–42. [Google Scholar] [CrossRef]
  18. Wang, S.; Hu, L.; Wang, Y.; Cao, L.; Sheng, Q.Z.; Orgun, M. Sequential recommender systems: Challenges, progress and prospects. arXiv 2019, arXiv:2001.04830. [Google Scholar] [CrossRef]
  19. Rendle, S.; Freudenthaler, C.; Schmidt-Thieme, L. Factorizing personalized markov chains for next-basket recommendation. In Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; pp. 811–820. [Google Scholar]
  20. He, R.; McAuley, J. Fusing similarity models with markov chains for sparse sequential recommendation. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining (ICDM), Barcelona, Spain, 12–15 December 2016; IEEE: New York, NY, USA, 2016; pp. 191–200. [Google Scholar]
  21. Wu, C.Y.; Ahmed, A.; Beutel, A.; Smola, A.J.; Jing, H. Recurrent recommender networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, Cambridge, UK, 6–10 February 2017; pp. 495–503. [Google Scholar]
  22. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; Tikk, D. Session-based recommendations with recurrent neural networks. arXiv 2015, arXiv:1511.06939. [Google Scholar]
  23. Tang, J.; Wang, K. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, Los Angeles, CA, USA, 5–9 February 2018; pp. 565–573. [Google Scholar]
  24. Yuan, F.; Karatzoglou, A.; Arapakis, I.; Jose, J.M.; He, X. A simple convolutional generative network for next item recommendation. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 582–590. [Google Scholar]
  25. Wu, S.; Tang, Y.; Zhu, Y.; Wang, L.; Xie, X.; Tan, T. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 346–353. [Google Scholar]
  26. Zhang, M.; Wu, S.; Yu, X.; Liu, Q.; Wang, L. Dynamic graph neural networks for sequential recommendation. IEEE Trans. Knowl. Data Eng. 2022, 35, 4741–4753. [Google Scholar] [CrossRef]
  27. Wang, C.; Zhang, M.; Ma, W.; Liu, Y.; Ma, S. Make it a chorus: Knowledge-and time-aware item modeling for sequential recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, China, 25–30 July 2020; pp. 109–118. [Google Scholar]
  28. Ma, Z.; Liu, J.; Luo, X.; Huang, Z.; Zhu, Q.; Che, W. Advancing Tool-Augmented Large Language Models via Meta-Verification and Reflection Learning. arXiv 2025, arXiv:2506.04625. [Google Scholar]
  29. Zhang, J.C.; Zain, A.M.; Zhou, K.Q.; Chen, X.; Zhang, R.M. A review of recommender systems based on knowledge graph embedding. Expert Syst. Appl. 2024, 250, 123876. [Google Scholar] [CrossRef]
  30. Liu, Q.; Wu, X.; Wang, Y.; Zhang, Z.; Tian, F.; Zheng, Y.; Zhao, X. Llm-esr: Large language models enhancement for long-tailed sequential recommendation. Adv. Neural Inf. Process. Syst. 2024, 37, 26701–26727. [Google Scholar]
  31. Zhai, J.; Zheng, X.; Wang, C.D.; Li, H.; Tian, Y. Knowledge prompt-tuning for sequential recommendation. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 6451–6461. [Google Scholar]
  32. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; NIPS Foundation: La Jolla, CA, USA, 2017; Volume 30. [Google Scholar]
  33. Liu, C.; Li, X.; Cai, G.; Dong, Z.; Zhu, H.; Shang, L. Noninvasive self-attention for side information fusion in sequential recommendation. In Proceedings of the the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 4249–4256. [Google Scholar]
  34. Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer normalization. arXiv 2016, arXiv:1607.06450. [Google Scholar] [CrossRef]
  35. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  36. Pazzani, M.J.; Billsus, D. Content-based recommendation systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 325–341. [Google Scholar]
  37. Linden, G.; Smith, B.; York, J. Amazon. com recommendations: Item-to-item collaborative filtering. IEEE Internet Comput. 2003, 7, 76–80. [Google Scholar] [CrossRef]
  38. Wang, J.; De Vries, A.P.; Reinders, M.J. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, USA, 6–11 August 2006; pp. 501–508. [Google Scholar]
Figure 1. Overview of DPBD.
Figure 2. Impact of embedding size.
Figure 3. Impact of sequence length.
Table 1. Statistics of the two datasets.

Dataset | University A | University B
# Users | 4343 | 6042
# Books | 37,384 | 46,374
# Interactions | 81,171 | 124,674
Avg Actions/User | 18.70 | 20.63
Avg Actions/Book | 2.17 | 2.69
Sparsity | 99.95% | 99.96%
Table 2. Performance comparisons of different methods. The last column is the relative improvement of DPBD compared with the best baseline result (TiSASRec in every row).

Dataset | Metric | CBR | ItemCF | UserCF | GRU4Rec | Caser | BERT4Rec | SASRec | TiSASRec | DPBD | Improve
Univ. A | HR@1 | 0.0185 | 0.0204 | 0.0227 | 0.0242 | 0.0449 | 0.0704 | 0.0721 | 0.0761 | 0.0865 | 13.67%
Univ. A | HR@5 | 0.0246 | 0.0264 | 0.0285 | 0.0311 | 0.0517 | 0.0874 | 0.0864 | 0.0904 | 0.1021 | 12.94%
Univ. A | HR@10 | 0.0284 | 0.0302 | 0.0327 | 0.0394 | 0.0532 | 0.0996 | 0.1011 | 0.1041 | 0.1138 | 9.32%
Univ. A | NDCG@1 | 0.0157 | 0.0169 | 0.0187 | 0.0202 | 0.0401 | 0.0626 | 0.0644 | 0.0684 | 0.0781 | 14.18%
Univ. A | NDCG@5 | 0.0215 | 0.0235 | 0.0243 | 0.0274 | 0.0467 | 0.0744 | 0.0754 | 0.0815 | 0.0911 | 11.78%
Univ. A | NDCG@10 | 0.0254 | 0.0272 | 0.0291 | 0.0345 | 0.0452 | 0.0869 | 0.0887 | 0.0927 | 0.0985 | 6.26%
Univ. B | HR@1 | 0.0201 | 0.0221 | 0.0239 | 0.0274 | 0.0468 | 0.0734 | 0.0750 | 0.0781 | 0.0904 | 15.75%
Univ. B | HR@5 | 0.0264 | 0.0303 | 0.0314 | 0.0334 | 0.0534 | 0.0948 | 0.1004 | 0.1043 | 0.1169 | 12.08%
Univ. B | HR@10 | 0.0326 | 0.0321 | 0.0336 | 0.0407 | 0.0559 | 0.1047 | 0.1132 | 0.1234 | 0.1312 | 6.32%
Univ. B | NDCG@1 | 0.0164 | 0.0185 | 0.0198 | 0.0225 | 0.0417 | 0.0674 | 0.0681 | 0.0721 | 0.0814 | 12.90%
Univ. B | NDCG@5 | 0.0227 | 0.0254 | 0.0265 | 0.0282 | 0.0483 | 0.0844 | 0.0906 | 0.0911 | 0.0987 | 8.34%
Univ. B | NDCG@10 | 0.0271 | 0.0284 | 0.0304 | 0.0361 | 0.0479 | 0.0938 | 0.0968 | 0.0991 | 0.1072 | 8.17%
Table 3. The HR@10 and NDCG@10 performances achieved by DPBD variants on two datasets.

Model | Univ. A HR@10 | Univ. A NDCG@10 | Univ. B HR@10 | Univ. B NDCG@10
(A) DPBD | 0.1138 | 0.0985 | 0.1312 | 0.1072
(B) w/o Time | 0.1072 | 0.0913 | 0.1247 | 0.1011
(C) w/o Fusion | 0.1114 | 0.0961 | 0.1281 | 0.1048
(D) w/o Feature | 0.1051 | 0.0919 | 0.1214 | 0.0997
(E) w/o Both | 0.1011 | 0.0887 | 0.1132 | 0.0968
Table 4. Impact of the number of attention blocks on HR@10.

p | Univ. A (q = 1) | Univ. A (q = 2) | Univ. A (q = 3) | Univ. B (q = 1) | Univ. B (q = 2) | Univ. B (q = 3)
1 | 0.1138 | 0.1130 | 0.1116 | 0.1261 | 0.1312 | 0.1294
2 | 0.1118 | 0.1110 | 0.1009 | 0.1251 | 0.1302 | 0.1274
3 | 0.1074 | 0.1063 | 0.1044 | 0.1240 | 0.1277 | 0.1251
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
