Article

APAED: Time-Optimized Adaptive Parameter Exponential Decay Algorithm for Crowdsourcing Task Recommendation

1
College of Computer Science, Sichuan University, Chengdu 610065, China
2
Finance Office, Sichuan University, Chengdu 610065, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9577; https://doi.org/10.3390/app15179577
Submission received: 24 July 2025 / Revised: 29 August 2025 / Accepted: 29 August 2025 / Published: 30 August 2025
(This article belongs to the Special Issue Advanced Models and Algorithms for Recommender Systems)

Abstract

The explosive growth of tasks on crowdsourcing platforms has intensified information overload, making it difficult for workers to spot lucrative bids; yet mainstream recommenders inherit a user-independence assumption from e-commerce and therefore overlook the real-time competition among workers, which degrades ranking stability and accuracy. To bridge this gap, we propose the Adaptive Parameter Exponential Decay Algorithm (APAED), which first produces base relevance scores with an offline neural model and then injects a competition-aware exponential decay whose strength is jointly determined by the interquartile range of each worker's score list (global factor) and the live bid distribution of every task (local factor). This model-agnostic adjustment explicitly quantifies competitive intensity without handcrafted features and can be paired with any backbone recommender. Experiments on a real-world dataset comprising 25,643 tasks and 19,735 workers show that APAED cuts the residual RMSE of HR@10 from 9.575 × 10⁻⁴ to 5.939 × 10⁻⁴ (−38%) and that of MRR from 2.920 × 10⁻⁴ to 0.736 × 10⁻⁴ (−75%), substantially reducing score fluctuations across epochs and consistently outperforming four strong neural baselines. These results confirm that explicitly modeling worker competition yields more accurate and stable task recommendations in crowdsourcing environments.

1. Introduction

Crowdsourcing platforms are experiencing rapid growth in task volume, making it increasingly difficult for workers to identify promising bids. As system dimensions scale up, operational platforms such as Sichuan University's fiscal management platform, a representative high-stakes financial oversight system, experience proportional increases in computational and network loads. This platform executes complex functions including high-volume transaction processing, multi-campus budget allocation tracking, and comprehensive year-end settlement calculations. Such complex, high-stakes tasks demand meticulous precision and extended processing cycles, underscoring the critical requirement for highly efficient, dynamic resource allocation strategies that ensure robustness under load while reducing time consumption; these efficiencies are also crucial to reducing the possibility of human error in intricate financial tasks. While recommender systems have been deployed to relieve this information overload, most existing models inherit the user-independence assumption from e-commerce and media domains and thus fail to capture the fierce, real-time competition among workers for the same task. Consequently, the development of efficient, competition-aware task recommendation algorithms becomes paramount.
Recommendation algorithms for crowdsourcing are primarily categorized into two types [1]: mainstream recommendation algorithms based on neural networks [2,3,4,5], and recommendation algorithms built on manual modeling for crowdsourcing scenarios [6,7]. Both types exhibit deficiencies in crowdsourcing scenarios. Neural network-based recommendation algorithms typically employ pointwise methods [4], pairwise methods [8], or listwise methods [9] when generating output sequences. All three methodologies consider the interrelation of items under the presumption of user independence. However, these algorithms overlook the non-independence of users in crowdsourcing scenarios, leading to inadequate recommendation accuracy. Manual modeling-based recommendation algorithms in crowdsourcing scenarios account for user non-independence, but they require extensive domain knowledge and human input, resulting in inferior learning and predictive capabilities compared to neural network-based methods. Therefore, no existing recommendation algorithm can simultaneously account for user non-independence and exploit the learning and predictive strengths of neural networks; this severely compromises recommendation accuracy and reduces work efficiency, particularly when crowdsourcing is deployed within mission-critical platforms such as the Sichuan University Finance Office Management System, where sensitive financial data and zero tolerance for errors make precision and reliability paramount. This study therefore aims to address the inaccurate and inefficient recommendations caused by traditional recommendation algorithms failing to account for the non-independence among workers.
To address these issues, we propose an Adaptive Parameter Exponential Decay Algorithm (APAED) that factors in user competition. APAED introduces user non-independence into crowdsourcing scenarios by quantifying real-time competition among workers and uses it to adjust the prediction scores generated by the neural network model. This not only leverages the robust learning and predictive capabilities of neural networks but also factors in the effect of user non-independence on the score. Moreover, since the distribution of predicted values generated by the neural network model varies among different models, recommendation lists, and tasks, APAED incorporates an adaptive decay factor into the decay functions, enhancing its generalizability. By attenuating the prediction scores from the neural network model, APAED effectively resolves the issue of user non-independence without compromising the model, thereby improving the accuracy of recommendations and in turn reducing processing time and enhancing system efficiency. Experiments show that APAED cuts the residual RMSE of HR@10 from 9.575 × 10⁻⁴ to 5.939 × 10⁻⁴ (−38%) and that of MRR from 2.920 × 10⁻⁴ to 0.736 × 10⁻⁴ (−75%), substantially reducing score fluctuations across epochs and consistently outperforming four strong neural baselines. The enhanced accuracy enables faster processing to improve workflow efficiency, while facilitating optimal task allocation to mitigate risks and better satisfy financial management system requirements.
The primary contributions of this study are as follows:
  • The introduction of the Adaptive Parameter Exponential Decay Algorithm (APAED), an innovative recommendation algorithm that adjusts its decay factor adaptively based on the distribution of offline model predicted values. During the prediction process, APAED splits the worker’s tasks into an offline prediction part handled by the neural network model and an online prediction part managed by the APAED decay algorithm for score decay. The algorithm attenuates the score of the offline neural network model based on real-time competition intensity among workers, resulting in the final prediction.
  • A comprehensive evaluation of APAED's effectiveness using real-world datasets. The results demonstrate that APAED significantly improves relevant indicators compared to models that rely solely on neural networks and verify APAED's robust generalization capabilities across different offline models. When applied to the OPCA-CF backbone, APAED cuts the residual RMSE of HR@10 from 9.575 × 10⁻⁴ to 5.939 × 10⁻⁴ (−38%) and that of MRR from 2.920 × 10⁻⁴ to 0.736 × 10⁻⁴ (−75%), markedly reducing indicator fluctuations across training epochs. Similar gains hold for three additional neural baselines (NAIS, NAIS+AP, NAIS+DCA), where APAED lifts HR@10 and MRR by 15–40% on average while consistently tightening residual spreads. APAED thus marries the expressive power of deep recommenders with an explicit model of real-time worker competition, delivering state-of-the-art accuracy and stability for crowdsourcing task recommendation.
This paper is structured in the following manner: The Introduction segment elucidates the importance of this research and outlines the key contributions of this paper. The Related Work section offers a comprehensive review of existing research in the fields of crowdsourcing and recommendation algorithms. The section on the APAED Algorithm provides a detailed examination of the design and implementation of the APAED algorithm. The Algorithm Evaluation section uses real-world datasets to assess the performance and generalizability of the APAED algorithm. The Results section presents the data and descriptions of the experimental results, while the Discussion section discusses the main findings of the study as well as its limitations. Finally, the Conclusion summarizes the paper, encapsulating the key findings and their implications.

2. Related Work

This section provides an overview of recommendation algorithms as applied in crowdsourcing scenarios and discusses the utilization of decay functions in recommendation and other disciplines. Neural network-based recommendation algorithms can inherently decipher the implicit relationships among input time-series data or features in mainstream scenarios. This negates the need for an explicit application of decay functions, leading to their less frequent use in such algorithms. However, decay functions have numerous explicit applications in realms beyond recommendation. Consequently, this overview not only encapsulates the application of decay functions in recommendation scenarios but also extends to their extensive usage in various fields.

2.1. Recommendation Algorithms in Crowdsourcing Scenarios

Recommendation algorithms for crowdsourcing scenarios mainly include manual modeling methods and neural network algorithms.
Manual modeling methods in the crowdsourcing scenario require a large number of manual features, such as modeling a worker's completion probability and rate of return for the current task, modeling the likely remuneration of the publisher, and so on. Hossain MS et al. [10] used an association rule mining algorithm to generate a list of common skill sets used by workers in previous jobs, and created feasible work lists based on workers' common skills, customer ratings, minimum budget/hourly rates, deadlines, etc. Kurup AR et al. [11] found that the participation rate and enthusiasm of new workers are higher than those of expert workers, so they distinguished skilled and unskilled workers through skill classification and the participation probability of existing expert workers, and recommended tasks to each group separately.
Neural network-based methods are another approach in crowdsourcing scenarios. Pan Q et al. [12] employed Word2vec word vectors to compute the tags used to characterize tasks and workers, and recommended tasks for workers by matching the similarity of tags between workers and tasks; the training sources for the word vectors include historical bidding information and worker registration data. Liao Q et al. [13] proposed a role-based clustering model that permits each worker to bid in multiple task clusters and converts the entire worker–task evaluation matrix obtained from the worker's history into a set of role-based, compact matrices. This not only achieves rapid system response but also mitigates the impact of irrelevant interactions. Shan C et al. [14] introduced reinforcement learning into crowdsourcing scenarios, using DQN (Deep Q-Network) in conjunction with neural networks to consider both long-term and short-term rewards, with real-time updates to handle continuously arriving data. Two DQNs are designed to capture the interests of workers and task issuers, optimizing the profit of the platform. Zhang D et al. proposed ApeGNN [15], a graph neural network (GNN) framework that dynamically adjusts aggregation weights for individual nodes in worker–task interaction graphs. By modeling local structural diversity (e.g., varying degrees of worker–task connectivity), ApeGNN achieves superior recommendation accuracy compared to static GNN models, particularly in sparse interaction scenarios. By leveraging a framework that combines heterogeneous graph representation learning with reinforcement learning decision-making, Zhao B et al. [16] addressed the core challenges in spatiotemporal crowdsourcing, including dependency constraints, dynamic resource allocation, and large-scale generalization. Their innovation lies in modeling complex multi-task dependencies as a graph sequence decision problem and achieving efficient solutions through the dynamic embedding capability of CHANet, thereby providing a theoretical foundation and practical tools for industrial-grade crowdsourcing platforms. Chiang J-H et al. [17] integrated lifecycle modeling, context-aware computing, and the attention mechanism to address issues such as user interest drift and insufficient scenario adaptation in traditional recommendation systems.
From the literature, it is apparent that manual modeling methods in crowdsourcing scenarios lean heavily on human domain knowledge, resulting in models that lack generalizability and have poor reproducibility. Neural network methods do not account for the non-independence among workers, leading to a decline in recommendation accuracy. This is particularly important for platform environments requiring refined financial oversight, such as the Sichuan University Finance Office, in order to minimize redundant bidding and resource waste.

2.2. Decay Algorithm

Medo M et al. [18] proposed an adaptive model that combines the similarity of user rating patterns with the popularity-driven dissemination of news on an evolving network. This model measures user similarity through users' positive and negative feedback on news, facilitating novel news recommendation through the decay of recommendation scores over time. At each time step, the news scores in the user's recommendation queue decay, and when an attenuated score drops below a certain threshold, the item is removed from the queue. Anelli V M et al. [19] introduced item popularity into collaborative filtering and proposed the TimePop algorithm. This algorithm does not consider global popularity but instead considers the popularity of items among the user's neighbors, avoiding the use of time windows and a fixed number of tags when selecting candidates. Zheng Q et al. [20] introduced an improved collaborative filtering algorithm based on expert trust and time decay. The expert group is obtained through expert trust, and the similarity between the expert and the target user is used to weight the expert's rating; the predicted value for the target user is obtained through the user's weighted level and a time decay factor. Peng D et al. [21] also presumed that user interests in social networks are in a state of constant flux, thereby improving the single forgetting-function algorithm.
Applications of the decay functions beyond the context of recommendation include the following: Yue Shubo et al. [22] studied the decay law of surge waves that changes with the initial shape determined by the water entry angle, speed, and height of a landslide body through sinusoidal decay. Yuan Zhihui et al. [23] used the tangent function of the principal stress of the original and reshaped loess introduced in the Nth dry–wet cycle to describe the shear strength decay value of the original and reshaped loess after the dry–wet cycle. Feng Shuai et al. [24] used the river activity coefficient, the average flow velocity of the river, and the average water depth to modify the comprehensive decay coefficient of the first-order reaction kinetics. The comprehensive decay coefficient is a basic logarithmic function composed of the pollution concentration and time before and after the decay. By leveraging a personalized temporal decay function and dynamic collaborative filtering, Ghiye A et al. [25] addressed the issue of interest drift in financial recommendation systems, which is caused by data non-stationarity and user heterogeneity. Its core innovation lies in extending the decay mechanism from global parameters to the user level, and achieving efficient adaptation through incremental learning. Chen X et al. [26] systematically revealed the core role of Weight Decay in the sparsification of neural networks and proposed an innovative method to achieve efficient and low-loss end-to-end compression.

3. APAED Algorithm

The APAED algorithm stands as the pioneering crowdsourcing recommendation model that integrates offline model predictions with real-time bidding dynamics to quantitatively evaluate the competitive intensity of present tasks in crowdsourcing environments. APAED employs both global and local distributions of predicted values to derive decay parameters, serving to attenuate these predicted values. Within APAED, decay and gain function symmetrically, yet our discussion will solely focus on the process of establishing decay parameters. This is predicated on the assumption that the primary influence on a worker’s bidding success stems from competition-induced decay. Consequently, the decay parameter, embodying real-time competition data, is adjusted to impact the gain at the end. The APAED accounts for the non-independence among staff members, thereby more effectively mitigating resource waste and enhancing work efficiency on high-pressure platforms, especially in systems with stringent task efficiency requirements such as Sichuan University’s Financial Management System.
When attenuating the predicted value, APAED determines the decay or gain parameters from the predicted value distribution from two angles: the global distribution and the local distribution of the predicted values. The global distribution is the distribution of the predicted values of the entire list when the recommendation list is generated for a certain worker, and the local distribution is the distribution of the predicted values within a single task in the recommendation list. When generating the recommendation list, since only the predicted value of the current bid is known, the competition information is only reflected in the record of the bid. The relationship between the global and local decay functions and the final decay function, together with the quantities involved, is shown in Figure 1. In this figure, s̄_l is the final recommendation score after decay adjustment by the APAED algorithm, while s_l represents the initial recommendation score generated by the offline neural network model for the l-th task. α_global denotes the global decay factor, calculated from the interquartile range of the recommendation list as shown in Formula (2). d_local,l refers to the local decay factor for the l-th task, which is determined by factors such as the number of bids and the absolute and relative score distributions of the task, and is used to adjust the decay magnitude of the recommendation score for that task, as illustrated in Formula (3). The exponential decay formula integrates global and local information, enabling the recommendation results to align with both the historical preferences of workers and the real-time competitive environment, thereby enhancing the accuracy and stability of task recommendations. A more detailed exposition of the parameters Δ_l, l_offset, d, and d_EM will be provided in subsequent sections.

3.1. Offline Prediction Model

In this research, NAIS [27] is used as the framework of the offline model, and the APAED algorithm is used to attenuate the offline prediction values generated by the offline model. More details about NAIS are given in the comparison-of-algorithms part of Section 4. Because tasks in crowdsourcing are time-sensitive, the task id cannot be used directly as the feature input of NAIS; we therefore follow the method of learning news features from news attributes in NEWS [28] and learn crowdsourcing features from the attributes of crowdsourcing tasks.

3.2. Global Decay Parameter

The predicted values after decay, s̄ = [s̄_1, s̄_2, …, s̄_l], are given by Formula (1). The decay is applied to all predicted values s = [s_1, s_2, …, s_l] in the list generated for a certain worker, where l is the number of tasks in the list.
s̄_l = s_l × e^(−α_global × d_local,l).  (1)
In Formula (1), α_global is the global decay factor, which determines the influence of the distribution of predicted values in the entire recommendation list on the decay intensity.
Because the purpose of the decay is to adjust the relative ranking of each task's predicted value s_l within a worker's list s according to the competition information, and because α_global, as the global decay factor, determines the upper limit of the decay strength available for changing relative ranking positions, α_global is determined by the overall distribution of s. The more dispersed the values of s are, the greater the decay strength α_global needed to affect the relative rankings in s.
The interquartile range Q = Q3 − Q1 of s is used as the benchmark for α_global. The interquartile range measures the degree of dispersion of the predicted values in the worker's recommendation list: it reflects the dispersion of the middle 50% of the data, calculated as the difference between the upper quartile (Q3, located at the 75th percentile) and the lower quartile (Q1, located at the 25th percentile). The interquartile range avoids the influence of extreme values: after OPCA-CF prediction, the smallest extreme value in each recommendation list tends to 0 and the largest tends to 1, so the extremes do not reflect the dispersion of the data. α_global allows the decay of s_l to shift a task's list position by up to 50%, that is, to attenuate a score from Q3 down to Q1. Define α_global as the intensity required to attenuate Q3 to Q1 using exponential decay, which is obtained by Formula (2):
Q1 = Q3 × e^(−α_global), hence α_global = −log(Q1/Q3) = log(Q3/Q1).  (2)
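As a concrete illustration, Formulas (1) and (2) can be sketched in a few lines of numpy; the function name, the sample scores, and the example d_local values are ours, not the paper's:

```python
import numpy as np

def global_decay_factor(scores):
    """alpha_global from Formula (2): the exponential-decay strength
    that attenuates the upper quartile Q3 down to Q1."""
    q1, q3 = np.percentile(scores, [25, 75])
    return -np.log(q1 / q3)  # = log(Q3 / Q1) >= 0 since Q1 <= Q3

# Toy recommendation list for one worker (predicted scores in (0, 1)).
scores = np.array([0.05, 0.30, 0.40, 0.55, 0.60, 0.75, 0.95])
alpha_global = global_decay_factor(scores)

# Formula (1): a task with local decay factor d_local = 1 is attenuated
# by a full quartile range; d_local = 0 leaves the score untouched.
decayed_full = scores * np.exp(-alpha_global * 1.0)
decayed_none = scores * np.exp(-alpha_global * 0.0)
```

By construction, a score equal to Q3 with d_local = 1 is pulled down exactly to Q1, which is the "up to 50% of list positions" bound described above.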

3.3. Local Decay Parameter

The local decay parameter d_local,l is the local decay factor for each task, representing the influence of the other workers' predicted values s′_l = [s_l,1, s_l,2, …, s_l,m] in the l-th task on the decay intensity, where m is the number of other workers' bid records in the l-th task. d_local,l is determined by Formula (3). The parameters Δ_l, d, and d_EM will be explained below.
d_local,l = f(Δ_l, d, d_EM).  (3)

3.3.1. Bid Quantity Parameter Δ_l

Bidding behavior with predicted values greater than s_l in a task strengthens the competition for the current task and causes s_l to decay. Since the upper limit of the decay strength is α_global, determined by the global distribution, Δ_l, determined by the number of bids, should tend to 1 when the competition is strongest, so that the final decay strength lies between 0 and α_global. Here, Δ_l = tanh(l_0 × m × l_offset), where l_offset maps the number of bids m into a range in which tanh is meaningful, and l_0 is a hyperparameter set according to the magnitude of the bid counts.
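A minimal sketch of Δ_l, assuming numpy; the function name and the default values of l_0 and l_offset are illustrative placeholders, not values from the paper:

```python
import numpy as np

def bid_quantity_param(num_bids, l0=0.05, l_offset=1.0):
    """Delta_l = tanh(l0 * num_bids * l_offset): grows with the number
    of bids on the task and saturates toward 1, so the final decay
    strength alpha_global * Delta_l stays within [0, alpha_global]."""
    return np.tanh(l0 * num_bids * l_offset)
```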

3.3.2. Parameters d and d_EM, Determined by the Absolute Distribution of Bid Predicted Values Within the Task and the Relative Distribution of Predicted Values Between Lists

In addition to the influence of the number of bids on the decay intensity, the distribution of s′_l = [s_l,1, s_l,2, …, s_l,m] also affects the current decay intensity and can be used to correct l_0. For the same number of bids, different distributions of s′_l will cause different decay strengths. As shown in Figure 2, the blue and red shades represent two predicted-value density distributions with the same bid quantity but different distributions. Obviously, the blue distribution causes a greater decay of the predicted value than the red distribution.
Therefore, d is introduced here to indicate the distance between s_l and the distribution of s′_l, measured in units of two standard deviations, with d ∈ (−1, 1). In this way, d is determined by the degree of concentration and dispersion of s′_l. Since the blue part in Figure 2 represents a greater degree of competition than the red part, d_2 > 0 > d_1.
In addition to the distance d between the distribution and the current predicted value s_l, the shape of the distribution itself also causes differences in competition intensity; d only captures the mean and variance of the data and cannot reflect the shape of the distribution. Because the predicted values do not necessarily follow a normal distribution, two distributions may share the same mean and variance yet represent different competition intensities. As shown in Figure 3, the blue and red distributions have the same mean and variance, yet the red distribution clearly represents greater competition.
Therefore, d needs to be strengthened or suppressed according to the distribution shape. To decide whether to strengthen or suppress, and to perform quantitative analysis, a benchmark distribution is needed. Here, we use the segment of the current recommendation list's predicted-value distribution lying between the extreme values of s′_l, denoted s_list[min : max], as the benchmark, shown as the green part of Figure 4. We then use the EMD (Earth Mover's Distance) to compute the distance from the task distribution s′_l to s_list, as given by Formula (4).
W(s′_l, s_list) = inf_{γ ∈ Π(s′_l, s_list)} E_{(x,y)∼γ}[‖x − y‖],  (4)
where Π(s′_l, s_list) is the set of all possible joint distributions whose marginals are the two distributions. For each possible joint distribution γ, samples x and y are drawn from it, and the expected distance E_{(x,y)∼γ}[‖x − y‖] over all sample pairs is computed. The infimum of this expected value over all possible joint distributions is the EM distance from s′_l to s_list.
Intuitively, the EM distance imagines a probability distribution as a pile of dirt. In Figure 4, the task distribution (blue and red parts) is moved in the direction of the arrow, and the EM distance is the minimum amount of work required to pile it into the target shape s_list[min : max] (green part).
To avoid the influence of the predicted-value scale on W, s′_l and s_list must be normalized before the calculation, because the scale of the local predicted values is unrelated to the decay intensity caused by the shape of the distribution; for example, whether the scores are on the order of 10⁻³ or 10⁻¹, the competition intensity faced by s_l is the same. Furthermore, since EM distances are always greater than zero, they cannot by themselves determine whether to strengthen or suppress d. Here, the mean difference between s′_l and s_list is used as the criterion: if it is positive, d is strengthened, and if it is negative, d is suppressed; this corresponds to the term mean(s′_l) − mean(s_list) in Formula (5). Finally, the decay caused by the distribution of the predicted values is given by Formula (5).
d_EM = sigmoid(mean(s′_l) − mean(s_list)) × e^(W(s′_l, s_list)).  (5)
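Formula (5) can be sketched as follows. Since the paper does not specify an EMD implementation, the 1-D distance here is approximated from matched quantiles (exact in the limit of a dense grid), and min-max normalization performs the scale removal described above; all helper names are ours:

```python
import numpy as np

def _minmax(x):
    """Min-max normalize so the value scale does not affect the EMD."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def emd_1d(a, b, n=256):
    """1-D Earth Mover's Distance between two empirical samples,
    approximated by averaging the gap between matched quantiles."""
    qs = np.linspace(0.0, 1.0, n)
    return float(np.abs(np.quantile(a, qs) - np.quantile(b, qs)).mean())

def d_em(task_scores, list_scores):
    """Formula (5): direction (strengthen vs. suppress) from the mean
    difference, magnitude from the EMD between the normalized task-bid
    distribution and the benchmark list segment."""
    t, s = _minmax(task_scores), _minmax(list_scores)
    sign = 1.0 / (1.0 + np.exp(-(t.mean() - s.mean())))  # sigmoid
    return sign * np.exp(emd_1d(t, s))
```

When the task distribution sits above the benchmark on average, the sigmoid factor exceeds 0.5 and d is strengthened; below it, d is suppressed.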
Finally, d and d_EM jointly determine l_offset, as shown in Formula (6).
An improved sigmoid function is used here; the sigmoid avoids outliers causing excessive changes in l_0, and g(d, d_EM) ∈ (0.5, 1.5). μ_1 and μ_2 are hyperparameters used to scale and correct the overall intensity.
l_offset = g(d, d_EM) = 1 / (1 + e^(−μ_1 × d × e^(μ_2 × d_EM))) + 0.5.  (6)
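A sketch of Formula (6), assuming the grouping reconstructed above (the sigmoid argument is μ_1 · d scaled by e^(μ_2 · d_EM)); function name and defaults are illustrative:

```python
import numpy as np

def l_offset(d, d_em, mu1=1.0, mu2=1.0):
    """Improved sigmoid g(d, d_EM) of Formula (6), bounded in
    (0.5, 1.5). d in (-1, 1) is the two-sigma distance; d_em scales
    its effect; mu1, mu2 are scaling hyperparameters."""
    z = mu1 * d * np.exp(mu2 * d_em)
    return 1.0 / (1.0 + np.exp(-z)) + 0.5
```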
Finally, a symmetric method is used to compute the gain contributed by the bids with s_l,m < s_l. The gain here does not increase s_l; rather, it weakens the decay d_local,l produced by the bids with s_l,m > s_l. For example, even at the same competition intensity, the probability of s_l winning the bid is obviously different when it lies in the top 10% versus the bottom 10% of the task. The bids with s_l,m < s_l should therefore adjust d_local,l, which is suppressed as the gain grows. Since decay is considered more important than gain in competition, the local gain parameter g_local,l obtained by the symmetric method is penalized. After obtaining g_local,l, the ratio r_1 = g_local,l / d_local,l is used to weaken d_local,l; the sigmoid function smooths r_1, and cosine decay yields the final local decay parameter, as shown in Formula (7).
d̄_local,l = d_local,l × cos(2 × (1/(1 + e^(−β × r_1)) − 0.5)).  (7)
Here, β is a hyperparameter controlling the maximum ratio by which the gain part can suppress d_local,l.
The purpose of the cosine function is that even when the suppression reaches its maximum, that is, when r_1 = 1, d̄_local,l is not attenuated to 0, ensuring that decay remains dominant in competition.
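Formula (7) can be sketched as follows (function name and default β are ours):

```python
import numpy as np

def suppressed_local_decay(d_local, r1, beta=1.0):
    """Formula (7): suppress the local decay by the gain/decay ratio
    r1 = g_local / d_local. For r1 >= 0 the smoothed sigmoid maps r1
    into [0, 0.5), and the cosine keeps the result strictly positive,
    so decay stays dominant even at maximal suppression (r1 = 1)."""
    smoothed = 1.0 / (1.0 + np.exp(-beta * r1)) - 0.5
    return d_local * np.cos(2.0 * smoothed)
```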

4. Algorithm Evaluation

The experiments in this research compare the recommendation lists generated directly by the offline model with those generated using the APAED algorithm on the indicators HR@10 and MRR, demonstrating that the APAED algorithm suppresses the abnormal fluctuation of these indicators in the crowdsourcing scenario.
  • Dataset and experimental environment:
The experimental environment comprised Windows 10 Professional 64-bit, an Intel Core i7-8700 @ 3.20 GHz (six cores), and an NVIDIA GeForce RTX 2060 (6 GB). The algorithm was built with TensorFlow 1.13.1 and Python 3.6.8.
The filtered dataset statistics are shown in Table 1. There were 963,352 records in the original data before filtering. It can be seen that, without a personalized recommendation algorithm, half of the workers on the original website have never won a bid.
  • Evaluation indicators:
This research compares the recommendation list generated directly by the offline model with the list generated using the APAED algorithm on the indicators HR@10 and MRR. A straight line is fitted to the fluctuation curve of HR and MRR over epochs, using Root Mean Squared Error (RMSE) as the loss function and training for 8000 steps with the SGD optimizer. The residual of this fit reflects the intensity of the fluctuation.
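This fluctuation measure can be sketched as follows; for brevity we use a closed-form least-squares fit rather than the paper's 8000-step SGD training, which converges to the same fitted line:

```python
import numpy as np

def residual_rmse(metric_per_epoch):
    """Fit a straight line to the metric-vs-epoch curve and return the
    RMSE of the residuals; larger values mean stronger fluctuation."""
    y = np.asarray(metric_per_epoch, dtype=float)
    epochs = np.arange(len(y))
    slope, intercept = np.polyfit(epochs, y, 1)
    residuals = y - (slope * epochs + intercept)
    return float(np.sqrt(np.mean(residuals ** 2)))
```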
  • Comparison algorithm:
In order to verify that the APAED algorithm can improve the indicators related to the recommendation list and test the generalization ability of the algorithm among different offline models, we need different offline models to generate different offline prediction value distributions. However, most of the existing neural network algorithms cannot be directly applied to crowdsourcing. Therefore, the offline model adopts a fixed main network and changes the attention sub-network to generate different offline prediction value distributions. Four attention mechanisms are selected here.
NAIS: In this experiment, tasks take the place of items and workers take the place of users. Taking the worker sequence as an example, the attention mechanism is shown in Formula (8). Here, g_i denotes the feature embedding vector of the i-th task with which the target worker has historically interacted (task type, reward, deadline, etc.), and r denotes the embedding vector of the target worker. The element-wise product of the worker and task embedding vectors captures the degree of match between the task and the worker's preferences or capabilities. W_g is a d × d matrix that linearly transforms the features, and ReLU serves as the activation function, filtering out negative signals while retaining positive matching features. h_g, of dimension d × 1, is the output weight of the attention network and projects the worker-task matching features onto a scalar weight; b_g is the bias term, applied to correct scoring errors. L is the total number of tasks (the number of tasks in f), which bounds the normalization range and ensures coverage of all candidate task matches. After normalization, the attention weight a_{g,i} for the i-th task encountered by the worker is obtained; the greater this weight, the higher the reference value of this historical task for the current recommendation.
$$a_{g,i} = \frac{\exp\!\big(g(\mathbf{g}_i, \mathbf{r})\big)}{\sum_{j=1}^{L} \exp\!\big(g(\mathbf{g}_j, \mathbf{r})\big)}, \qquad g(\mathbf{g}, \mathbf{r}) = \mathbf{h}_g^{T}\,\mathrm{ReLU}\!\big(\mathbf{W}_g(\mathbf{g} \odot \mathbf{r}) + \mathbf{b}_g\big). \quad (8)$$
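As a concrete illustration, the attention computation of Formula (8) can be sketched in NumPy with randomly initialized parameters; the real W_g, h_g, b_g are learned during training, and the dimensions here are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())      # shift for numerical stability
    return e / e.sum()

# Score each historical task embedding g_i against the worker
# embedding r, then normalize into attention weights a_{g,i}.
rng = np.random.default_rng(0)
d, L = 8, 5                      # embedding size, history length
G = rng.normal(size=(L, d))      # historical task embeddings g_i
r = rng.normal(size=d)           # target worker embedding
W_g = rng.normal(size=(d, d))
h_g = rng.normal(size=d)
b_g = rng.normal(size=d)

scores = np.array([h_g @ relu(W_g @ (G[i] * r) + b_g) for i in range(L)])
a = softmax(scores)              # attention weights, sum to 1
print(a.shape)                   # (5,)
```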
Attentive pooling networks [29] (hereinafter referred to as AP): Since this model was designed for QA (question answering), in the experiment it replaces the attention mechanism of NAIS (hereinafter NAIS+AP). Taking the worker sequence as an example, the attention mechanism is shown in Formula (9). Here, g denotes the worker-side raw feature matrix (number of workers × worker feature dimension), including features such as workers' historical completion rates and specialized task types. G is the worker feature projection matrix, which compresses the high-dimensional raw worker features into a low-dimensional embedding space (target embedding dimension × worker feature dimension). f is the task-side raw feature matrix (number of tasks × task feature dimension), including attributes such as task rewards, difficulty levels, and deadlines. F is the task feature projection matrix, which functions analogously to G and compresses the high-dimensional task features into a space of the same dimension as the worker embedding. U is the cross-feature attention weight matrix for workers and tasks (target embedding dimension × target embedding dimension), whose core role is to capture high-matching interactions between worker and task features. G(g) is the low-dimensional embedding matrix of the raw worker features projected by G, each row of which is the embedding vector of one worker; F(f)^T is the transpose of the embedding matrix of the raw task features projected by F, each column of which is the embedding vector of one task. M is the worker-task interaction score matrix (number of workers × number of tasks), whose element M_{j,m} is the matching score between the j-th worker and the m-th task. After tanh activation, its value range is (−1, 1), and a larger value indicates a higher matching degree. a_{g,j} is the maximum matching weight of the j-th worker for the target task g, obtained by taking the maximum of the scores M_{j,m} between this worker and all tasks, and used to select the contribution of the task features best suited to the worker.
$$a_{g,j} = \max_{1 \le m \le L} M_{j,m}, \qquad \mathbf{M} = \tanh\!\big(\mathbf{G}(\mathbf{g})\,\mathbf{U}\,\mathbf{F}(\mathbf{f})^{T}\big). \quad (9)$$
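A NumPy sketch of Formula (9), with random matrices standing in for the learned projections G, U, F (dimensions are illustrative): build the worker-task interaction matrix M, then max-pool each worker's row.

```python
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_tasks, d = 4, 6, 8
Wg = rng.normal(size=(n_workers, d))   # projected worker embeddings G(g)
Tf = rng.normal(size=(n_tasks, d))     # projected task embeddings F(f)
U = rng.normal(size=(d, d))            # cross-feature attention weights

M = np.tanh(Wg @ U @ Tf.T)             # interaction scores in (-1, 1)
a = M.max(axis=1)                      # a_{g,j}: best-matching task per worker
print(M.shape, a.shape)                # (4, 6) (4,)
```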
The collaborative attention mechanism used in ref. [30]: Since the attention mechanism uses matrix dot products, it is referred to as DCA (Dot Co-Attention); it replaces the attention mechanism of NAIS and is hereinafter referred to as NAIS+DCA. Taking the worker sequence as an example, the attention mechanism is shown in Formula (10). Here, g is the low-dimensional feature embedding vector of the target worker (target embedding dimension × 1), and f is the low-dimensional feature embedding vector of the target task, with the same dimension as g. W_{gg} is the worker-side self-attention weight matrix (target embedding dimension × target embedding dimension), and W_{gf} is the worker-task cross-attention weight matrix of the same dimension. After tanh activation, their value range is (−1, 1), which avoids gradient explosion and highlights valid features. h_g is the worker-task bidirectional feature fusion vector, obtained by element-wise multiplication of the two activated vectors. W_g is the fusion-feature output weight matrix (number of candidate recommended tasks K × target embedding dimension), which projects the high-dimensional fusion vector onto K candidate-task scores; b_g is the output-layer bias term (K × 1), which corrects the baseline of the projected scores and avoids overall score deviation caused by uneven feature distribution. a_g is the attention weight distribution of the target worker over the candidate tasks; after softmax normalization, each weight lies in (0, 1) and the weights sum to 1. A higher weight for a task indicates a stronger willingness of the worker to accept it, leading to a higher position in the recommendation list.
$$\mathbf{a}_g = \mathrm{softmax}\!\big(\mathbf{W}_g \mathbf{h}_g + \mathbf{b}_g\big), \qquad \mathbf{h}_g = \tanh\!\big(\mathbf{W}_{gg}\,\mathbf{g}\big) \odot \tanh\!\big(\mathbf{W}_{gf}\,\mathbf{f}\big). \quad (10)$$
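A NumPy sketch of Formula (10), again with random weights in place of the learned parameters (dimensions illustrative): fuse the self- and cross-attention views of the worker and task embeddings, then project onto K candidate-task weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
d, K = 8, 5                          # embedding size, candidate tasks
g = rng.normal(size=d)               # worker embedding
f = rng.normal(size=d)               # task embedding
W_gg = rng.normal(size=(d, d))       # worker self-attention weights
W_gf = rng.normal(size=(d, d))       # worker-task cross-attention weights
W_g = rng.normal(size=(K, d))        # output projection
b_g = rng.normal(size=K)

h_g = np.tanh(W_gg @ g) * np.tanh(W_gf @ f)   # element-wise fusion
a_g = softmax(W_g @ h_g + b_g)                # weights over K candidates
print(a_g.shape)                              # (5,)
```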
The collaborative attention mechanism proposed in this article likewise replaces the attention mechanism of NAIS; the resulting model is hereinafter referred to as OPCA-CF.
The distribution of predicted values generated by the four offline models is shown in Figure 5. It can be seen that replacing the sub-network can meet the requirements of generating different predicted value distributions.

5. Results

The HR@10 and MRR obtained with and without APAED after generating the offline predicted values are shown in Figure 6 (epochs normalized to [0, 1]).
In the HR@10 and MRR indicator graphs, the red part represents the offline predicted values adjusted for worker competition. The RMSE of HR@10 decreased from 9.575 × 10⁻⁴ to 5.939 × 10⁻⁴, and the RMSE of MRR decreased from 2.920 × 10⁻⁴ to 0.736 × 10⁻⁴, which effectively alleviates the abnormal fluctuation of the recommendation list. The residuals of the HR@10 and MRR linear fits are shown in Figure 7; the residuals in the red part (after APAED decay) are more concentrated around the x-axis, that is, more stable.
After the other three comparison models apply the APAED algorithm to generate the recommendation list, the corresponding indicators are likewise more stable and improved. The HR@10, MRR, and linear-fitting residuals of NAIS, NAIS+AP, and NAIS+DCA before and after using APAED are shown in Figure 8, Figure 9 and Figure 10 (the red dotted line indicates the results after using APAED).
The RMSE of each model's linear-fitting loss before and after using APAED is shown in Table 2. After applying APAED, the RMSE of every model decreases and the residuals are more concentrated around the x-axis, suppressing the fluctuation of the recommendation-list indicators. This shows that APAED generalizes across the offline predicted values produced by different models.

Parametric Comparison and Hyperparameter Search

Here, we present the comparative experiments conducted to determine the hyperparameters and use controlled variables to verify the necessity of both the global decay factor, determined by the global distribution, and the local decay factor, determined by the local distribution, in APAED. OPCA-CF is selected as the offline model.
Figure 11 compares different values of l₀ on the HR@10 and MRR indicators (the base curve uses the corresponding hyperparameter settings from Table 2), where l₀ determines the initial strength of the local decay within a task. If l₀ is set too small, the partial order of scores in the worker list cannot be changed; if it is set too large, the fluctuations grow.
Figure 12 compares different values of μ₁ on the HR@10 and MRR indicators. μ₁ determines the strength of the local decay factor within a task, as governed by the distance between the predicted values within the task and the predicted value of the current worker, and determines whether l₀ acts as a gain or a suppression, that is, the decay strength induced by the absolute distribution of predicted values within the task.
Figure 13 compares HR@10 and MRR together with their fitting residuals. μ₂ determines the strength with which μ₁ gains or suppresses l₀. When μ₂ is large, the positional shift in the list is large, which results in poor suppression of fluctuations; conversely, when μ₂ is small, the strength of the list shift is small.
Finally, an experimental comparison is conducted on the gain component to determine whether scores higher than the current worker's predicted value should be introduced to suppress the decay intensity. The comparison before and after introducing this suppression is shown in Figure 14. When quantifying the intensity of competition from online real-time bidding information, both scores higher and lower than the current worker's predicted value should be considered.

6. Discussion

To address the failure of traditional models to account for the interrelationships among workers, we incorporated real-time competitive relationships among workers into the proposed algorithm and propose APAED as an online tool. It attenuates the offline predicted values according to real-time competition information, improving the relevant indicators while mitigating fluctuations. The experimental results show that APAED outperforms traditional algorithms that generate recommendation lists directly, and Figure 12, Figure 13 and Figure 14 indicate that the model is robust under different hyperparameters. However, in the Results section we reported only the HR@10 and MRR indicator data without conducting more rigorous statistical analyses. This is a limitation of our study, and we will perform detailed statistical analysis in subsequent research.

7. Conclusions

In order to use the high-quality feature representations and predictive power of neural network models while accounting for real-time competition, this research adopts neural network algorithms such as OPCA-CF as the offline prediction model and, on this basis, proposes the APAED algorithm as an online tool. APAED attenuates the offline predicted values according to real-time competition information, improving the relevant indicators while mitigating fluctuations. For the first time, APAED uses the global and local distributions to quantitatively analyze real-time competition intensity within a task given the existing predicted values, and it generalizes well across different offline models after hyperparameter adjustment. When applied to the OPCA-CF backbone, APAED cuts the residual RMSE of HR@10 from 9.575 × 10⁻⁴ to 5.939 × 10⁻⁴ (−38%) and that of MRR from 2.920 × 10⁻⁴ to 0.736 × 10⁻⁴ (−75%), markedly reducing indicator fluctuations across training epochs.
Furthermore, the applicability of APAED extends well beyond task recommendation. Its conceptual framework is readily adaptable to analogous scenarios. In future work, we will explore broader application contexts for APAED. For instance, in emerging digital electronic invoice (DEI) data streaming systems, distinct DEI data streams can be conceptualized as separate ‘tasks’. APAED could then be employed to intelligently route relevant data streams to specific processing nodes based on predicted relevance and real-time demand, thereby proactively mitigating potential risks. Additionally, for diverse risk categories encountered in such systems, APAED can enhance the accuracy of matching risk-specific data or alerts to specialized risk-handling nodes, thus improving overall risk mitigation efficiency.

Author Contributions

Conceptualization, L.C. and X.L.; methodology, Q.Z.; software, Q.Z.; validation, Z.L., Y.Z. and Q.Z.; formal analysis, X.L.; investigation, X.L.; resources, X.L.; data curation, Z.L. and Y.Z.; writing—original draft preparation, Z.L. and Y.Z.; writing—review and editing, Z.L.; visualization, Z.L. and Y.Z.; supervision, L.C. and X.L.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Fundamental Research Funds for the Central Universities (Grant YJ202420), and in part by Sichuan University Young Teachers Science and Technology Innovation Ability Improvement Project (Grant 2024SCUQJTX028).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The figures and tables used to support the findings of this study are included in the article.

Acknowledgments

The authors would like to express their sincere thanks to the technicians who contributed to this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhou, J.; Jin, X.; Yu, L.; Xue, L.; Ren, Y. Truthtrust: Truth inference-based trust management mechanism on a crowdsourcing platform. Sensors 2021, 21, 2578.
  2. Kamel, M.M.; Gil-Solla, A.; Guerrero-Vásquez, L.F.; Blanco-Fernández, Y.; Pazos-Arias, J.J.; López-Nores, M. A Crowdsourcing Recommendation Model for Image Annotations in Cultural Heritage Platforms. Appl. Sci. 2023, 13, 10623.
  3. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 173–182.
  4. Deng, Z.H.; Huang, L.; Wang, C.D.; Lai, J.H.; Yu, P.S. DeepCF: A unified framework of representation learning and matching function learning in recommender system. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 61–68.
  5. Lian, J.; Zhou, X.; Zhang, F.; Chen, Z.; Xie, X.; Sun, G. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 1754–1763.
  6. Yu, Z.; Lian, J.; Mahmoody, A.; Liu, G.; Xie, X. Adaptive user modeling with long and short-term preferences for personalized recommendation. Int. Jt. Conf. Artif. Intell. 2019, 7, 4213–4219.
  7. Ouyang, W.; Zhang, X.; Li, L.; Zou, H.; Xing, X.; Liu, Z.; Du, Y. Deep spatio-temporal neural networks for click-through rate prediction. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2078–2086.
  8. Song, Y.; Wang, H.; He, X. Adapting deep RankNet for personalized search. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining, New York, NY, USA, 24–28 February 2014; pp. 83–92.
  9. Liang, J.; Hu, J.; Dong, S.; Honavar, V. Top-N-Rank: A Scalable List-wise Ranking Method for Recommender Systems. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 1052–1058.
  10. Hossain, M.S.; Arefin, M.S. An intelligent system to generate possible job list for freelancers. In Computing and Intelligent Systems; Springer: Singapore, 2020; pp. 311–325.
  11. Kurup, A.R.; Sajeev, G.P. Task Personalization for Inexpertise Workers in Incentive Based Crowdsourcing Platforms. In Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Bangalore, India, 19–22 September 2018; pp. 286–292.
  12. Pan, Q.; Dong, H.; Wang, Y.; Cai, Z.; Zhang, L. Recommendation of Crowdsourcing Tasks Based on Word2vec Semantic Tags. Wirel. Commun. Mob. Comput. 2019, 2019, 2121850.
  13. Liao, Q.; Zhou, X.; Wang, D.; Feng, S.; Zhang, Y. Role-Based Clustering for Collaborative Recommendations in Crowdsourcing System. In Proceedings of the International Conference on Conceptual Modeling, Salvador, Brazil, 4–7 November 2019; Springer: Cham, Switzerland, 2019; pp. 287–301.
  14. Shan, C.; Mamoulis, N.; Cheng, R.; Li, G.; Li, X.; Qian, Y. An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms. arXiv 2019, arXiv:1911.01030.
  15. Zhang, D.; Zhu, Y.; Dong, Y.; Wang, Y.; Feng, W.; Kharlamov, E.; Tang, J. ApeGNN: Node-wise adaptive aggregation in GNNs for recommendation. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 759–769.
  16. Zhao, B.; Dong, H.; Wang, Y.; Pan, T. A task allocation algorithm based on reinforcement learning in spatio-temporal crowdsourcing. Appl. Intell. 2023, 53, 13452–13469.
  17. Chiang, J.H.; Ma, C.Y.; Wang, C.S.; Hao, P.Y. An adaptive, context-aware, and stacked attention network-based recommendation system to capture users' temporal preference. IEEE Trans. Knowl. Data Eng. 2022, 35, 3404–3418.
  18. Medo, M.; Zhang, Y.C.; Zhou, T. Adaptive model for recommendation of news. EPL 2009, 88, 38005.
  19. Anelli, V.W.; Di Noia, T.; Di Sciascio, E.; Ragone, A.; Trotta, J. Local popularity and time in top-n recommendation. In European Conference on Information Retrieval; Springer: Cham, Switzerland, 2019; pp. 861–868.
  20. Zheng, Q. An Improved Collaborative Filtering Algorithm Based on Expert Trust and Time Decay. In Proceedings of the 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 8–9 December 2018; Volume 2, pp. 12–15.
  21. Peng, D.; Li, Y.; Zhou, H.; Li, L. Time Decay Friend Recommender System for Social Network. In Proceedings of the Testbeds and Research Infrastructures for the Development of Networks and Communities, Shanghai, China, 1–3 December 2018.
  22. Yue, S.; Diao, M.; Wang, L. Research on initial formation and decay of landslide-generated waves. J. Hydraul. Eng. 2016, 47, 816–825.
  23. Yuan, Z.; Ni, W.; Tang, C.; Hu, S.; Gan, J. Experimental study of structure strength and strength decay of loess under wetting-drying cycle. Rock Soil Mech. 2017, 38, 1894–1902, 1942.
  24. Feng, S.; Li, X.; Deng, J. Determination of comprehensive pollutants decay coefficients of the plain river networks in the upper reaches of Lake Taihu Basin. Acta Sci. Circumstantiae 2017, 37, 878–887.
  25. Ghiye, A.; Barreau, B.; Carlier, L.; Vazirgiannis, M. Adaptive collaborative filtering with personalized time decay functions for financial product recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, Singapore, 18–22 September 2023; pp. 798–804.
  26. Chen, X.; Pan, R.; Wang, X.; Tian, F.; Tsui, C.Y. Late breaking results: Weight decay is all you need for neural network sparsification. In Proceedings of the 2023 60th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 9–13 July 2023; pp. 1–2.
  27. He, X.; He, Z.; Song, J.; Liu, Z.; Jiang, Y.G.; Chua, T.S. NAIS: Neural attentive item similarity model for recommendation. IEEE Trans. Knowl. Data Eng. 2018, 30, 2354–2366.
  28. Wu, C.; Wu, F.; An, M.; Huang, J.; Huang, Y.; Xie, X. Neural news recommendation with attentive multi-view learning. arXiv 2019, arXiv:1907.05576.
  29. Hu, B.; Shi, C.; Zhao, W.X.; Yang, T. Local and global information fusion for top-n recommendation in heterogeneous information network. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 1683–1686.
  30. Zhang, Q.; Wang, J.; Huang, H.; Huang, X.; Gong, Y. Hashtag Recommendation for Multimodal Microblog Using Co-Attention Network. In Proceedings of IJCAI 2017; pp. 3420–3426.
Figure 1. APAED algorithm structure diagram.
Figure 2. Local decay parameter d. The blue and red shades represent the two types of predicted value density distributions.
Figure 3. Same mean variance distribution. Red and blue represent different distributions.
Figure 4. Local decay parameter d_EM. Blue and red represent different task distributions, while green represents the target task distribution.
Figure 5. Distribution of predicted values generated by NAIS, NAIS+DCA, NAIS+AP, and OPCA-CF.
Figure 6. HR@10 and MRR before and after OPCA-CF uses APAED.
Figure 7. Linear fitting residuals after OPCA-CF uses APAED. The red part represents the values of APAED after decay, while the blue part represents the directly generated values.
Figure 8. HR@10, MRR, and fitting residual before and after NAIS uses APAED.
Figure 9. HR@10, MRR, and fitting residual before and after NAIS+AP uses APAED.
Figure 10. HR@10, MRR, and fitting residual before and after NAIS+DCA uses APAED.
Figure 11. l₀ parameter comparison.
Figure 12. μ₁ parameter comparison.
Figure 13. μ₂ parameter comparison.
Figure 14. Introduction of suppression parameter comparison.
Table 1. Dataset statistics.

| Number of Samples | Number of Tasks | Number of Workers | Number of Publishers | Number of Winning Bids |
|---|---|---|---|---|
| 513,612 | 25,643 | 19,735 | 17,340 | 52,351 |
Table 2. RMSE of each model's linear-fitting residuals (×10⁻⁴) before and after using APAED.

| Model | HR@10 (Directly Generated List) | MRR (Directly Generated List) | HR@10 (APAED-Generated List) | MRR (APAED-Generated List) |
|---|---|---|---|---|
| OPCA-CF | 9.575 | 2.920 | 5.939 | 0.736 |
| NAIS | 2.358 | 3.427 | 0.789 | 1.229 |
| NAIS+AP | 2.857 | 3.295 | 1.399 | 0.885 |
| NAIS+DCA | 3.075 | 2.245 | 1.215 | 0.726 |

Share and Cite

Luo, Z.; Zhang, Y.; Zhao, Q.; Chen, L.; Liu, X. APAED: Time-Optimized Adaptive Parameter Exponential Decay Algorithm for Crowdsourcing Task Recommendation. Appl. Sci. 2025, 15, 9577. https://doi.org/10.3390/app15179577