Article

Extracting Searching as Learning Tasks Based on IBRT Approach

1 School of Computer Science and Engineering, Northeastern University, Shenyang 110167, China
2 Software College, Northeastern University, Shenyang 110167, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(12), 5879; https://doi.org/10.3390/app12125879
Submission received: 15 May 2022 / Revised: 4 June 2022 / Accepted: 8 June 2022 / Published: 9 June 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

With the rapid development of the World Wide Web and information retrieval technology, learning supported by search engines (such as making travel plans) has boomed in recent years. With the help of search engines, learners can easily retrieve and find large amounts of information on the web. Recent research in the searching as learning (SAL) area has associated web searching with learning. In SAL processes, web learners recursively plan tasks, formulate search queries, obtain information from web pages, and change their knowledge structures to gradually complete their learning goals. To improve the experience of web learners, it is important to accurately represent and extract tasks. Using learning styles and similarity metrics, we first propose an improved Bayesian rose tree (IBRT) model to implement structured representations of the SAL process for each learner. SAL tasks are then extracted from the structures of the IBRT. In this study, a series of experiments were carried out on assignment datasets from the Northeastern University (China) UWP Programming Course. Comparison results show that the proposed method significantly improves the performance of SAL task extraction.

1. Introduction

Search engines are the most widely used tools for accessing online information and are increasingly being used for learning purposes, such as making travel plans or finding properties to purchase. From the perspective of searching as learning (SAL), searching is conceptualized as a series of activities for learning, such as planning tasks, issuing queries, analyzing web sources, piecing information together, and evaluating and using information [1,2]. Unlike traditional studies on searching as a learning tool, which focus only on accurate search results, SAL focuses on learning activities and outcomes during the search process. Studies have determined that tasks are interleaved during SAL processes: web learners must recursively plan learning tasks, formulate search queries, scan search results, and adapt their knowledge structures to complete learning tasks [3,4].
Accurate task extraction is therefore important for information retrieval systems to support and foster a large variety of more sophisticated search strategies. Studies have shown that task extraction can help systems discover related and interesting tasks [5,6], improve learners' engagement [7], and improve ranking performance [8]. Moreover, accurate task extraction helps to locate learners' learning states within their task space, which in turn supports recommendation and helps users better complete their learning tasks [9]. In view of this importance, SAL task extraction has received considerable attention in recent years [10,11,12].
Although some of these studies focused on task extraction, SAL task extraction remains a very challenging problem: systems struggle to quantify the learning that occurs concurrently with searching, each task can be defined at different levels of granularity, and learners have different learning styles [13]. Studies of constructivist epistemology have determined that, during the learning process, each learner must (i) identify the knowledge gap, (ii) plan search tasks, (iii) generate queries on the basis of their knowledge structure and learning style, and (iv) analyze and synthesize search results to improve the knowledge structure, recursively returning to (i) until the learning goal is complete. From this perspective, learners with different learning styles demonstrate different planning preferences when developing search tasks, and these planning preferences lead to different issued queries. From the perspective of information retrieval research, task extraction methods have generally been based on query–query or query–result relationships. These methods may work well for search tasks such as look-up, but not necessarily for SAL tasks, which require the searcher to reflect, learn, and analyze the results [14,15].
We are motivated by the need for an accurate representation and extraction of SAL tasks and propose an efficient improved Bayesian rose tree (IBRT) model. Most existing hierarchical task extraction methods cannot capture learning styles and SAL factors. In contrast, our proposed method can accurately represent tasks by capturing SAL factors and improve performance by employing learning styles. To test our method, we collected and analyzed students’ development processes from the UWP programming course at Northeastern University, where 46 students learned by searching the web to complete UWP programming assignments.
Collecting students' periodic learning outcomes in real time is difficult, and manual observation of search processes may introduce privacy concerns. Therefore, we designed the experimental environment around two methods for collecting student trace data: (i) an IDE plugin that automatically collects snapshots; (ii) an automated approach applied to search behavior captured by a browser plugin. These methods provide insight into how novice programmers go about developing search tasks. We found two learning styles in SAL through our analysis and used these insights to develop the IBRT model for SAL task representation and extraction. Specifically, this paper makes the following contributions:
  • We successfully collect an SAL dataset of autonomous web learner trace data.
  • To the best of our knowledge, this is the first study to employ specific learning styles in SAL task studies.
  • We develop a novel efficient SAL task representation and extraction based on an extended Bayesian rose tree model.

2. Related Works

2.1. Searching as Learning

Research in the field of SAL either conceptualizes learning as a search process or explores the connection between search and learning. Most studies that deal with learning in the search process do not conceptualize the phenomenon as learning; however, they observe changes in the search process that can be interpreted as learning [2]. The concept of SAL was first articulated at a Dagstuhl seminar [14], which formulated a strong research agenda for follow-up studies. Reynolds et al. [16] noted that e-learning and information retrieval may provide useful boundaries and definitions for SAL.
(1)
From the perspective of conceptualizing learning as a search process: Kuhlthau [17] first conceptualized learning as a search process and proposed the Information Search Process (ISP) model, which emphasizes the stages of human cognition and the learning process. Gary Marchionini [15] pointed out that searches for learning purposes involve multiple iterations, and learners must analyze and interpret search results to achieve the learning goal. Zarro [18] applied dual-process theory from social psychology, conceptualizing learning as search behaviors and cognitive processes. Building on the conceptualization of learning as a search process, Odijk and White [19] observed that learners may appear to be struggling during the learning process. Rieh et al. [1] assessed learning from online searching behavior and developed a search system that supports SAL. Ghosh et al. [16] studied the relationship between search and learning by taking learning as the result of the information search process. Proaño-Ríos et al. [20] conceptualized searching as a learning path and focused on helping learners to plan their SAL path. Taibi et al. [21] analyzed search processes in web learning and demonstrated the positive impact of SAL systems in stimulating students' creativity and critical thinking.
(2)
From the aspect of exploring the relations between search and learning: Vakkari et al. [2] summarized existing studies and presented three types of relationships between searching and learning. Moraveji et al. [22] and Sun et al. [23] analyzed the relationship between search skills and learning outcomes and pointed out that search skills can help learners complete learning tasks in a shorter time. Liu and Belkin [24] pointed out that learners' familiarity and experience with topics affect search behaviors and further positively affect learning outcomes. Bron et al. [25] and Vakkari et al. [26] also focused on how variables in the search process affect search results and learning outputs. Gimenez et al. [27] pointed out that searching is the learning mode of learners in the learning process, which in turn affects the search process. Marchionini [28] focused on the relationships among searching, sense-making, and learning. Yigit et al. [29] explored result diversification as a useful technique to support learning-oriented search and developed a new search engine for SAL. Liu et al. [30] determined that learning strategies directly affect learners' learning outcomes and presented two kinds of learning strategies; they found that learners with a task-adaptive strategy show better learning outcomes, e.g., knowledge points, facets, and scope.

2.2. Task Extraction

Task extraction is an important research topic in both information retrieval and e-learning [31,32,33]. Wang et al. [34] pointed out that, in order to complete complex tasks (such as learning), users usually need to submit a series of queries, and multiple sub-learning tasks are involved in the process. If the search engine can predict the multiple sub-targets of a learning task, the user can complete the task more efficiently.
Traditional SAL task extraction methods usually start from two aspects: (1) SAL behavior duration; (2) exploring relationships (e.g., query–query relationships, relationships between search results, or relationships between learning contents). Although some of these studies did not explicitly target learning tasks, they can be used to identify SAL tasks. Methods that focus on SAL behavior duration extract tasks on the basis of the time between queries: if the interval between two consecutive queries exceeds a certain threshold, they are regarded as belonging to two different sessions or tasks; a 30 min timeout period is typically used to subdivide the session [35] (a minimal sketch of this heuristic follows this paragraph). SAL task extraction methods based on exploring relationships mainly use classification and clustering to extract learning tasks from query sequences, most commonly by calculating the distance between queries. Kotov et al. [36] analyzed cross-session search behavior and proposed a method for identifying all related queries from previous sessions issued by a user. Li et al. [37] proposed a task extraction method to identify search tasks. Studies that explored relationships between learning contents mainly abstracted queries as knowledge or concepts; Mittal et al. [38] abstracted queries into concepts using relational graph theory and explored the connections between concepts.
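To make the timeout heuristic concrete, the following minimal Python sketch (names are illustrative, not taken from [35]) splits a time-ordered query log into sessions whenever the gap between consecutive queries exceeds a threshold, defaulting to 30 min:

from datetime import datetime, timedelta

def split_sessions(queries, timeout_minutes=30):
    """Split a time-ordered query log into sessions using a timeout heuristic.

    `queries` is a list of (timestamp, query_text) pairs sorted by time;
    a gap longer than `timeout_minutes` starts a new session/task candidate.
    """
    sessions, current = [], []
    timeout = timedelta(minutes=timeout_minutes)
    for ts, q in queries:
        if current and ts - current[-1][0] > timeout:
            sessions.append(current)
            current = []
        current.append((ts, q))
    if current:
        sessions.append(current)
    return sessions

# Example: three queries, the last one 45 minutes later -> two sessions.
log = [
    (datetime(2022, 5, 1, 10, 0), "file open picker"),
    (datetime(2022, 5, 1, 10, 5), "FileSavePicker"),
    (datetime(2022, 5, 1, 10, 50), "MediaPlayerElement"),
]
print(len(split_sessions(log)))  # 2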
In addition, some more advanced studies used hierarchical models to identify the relationships between tasks. Most existing hierarchical clustering techniques produce a binary tree structure, where each node is decomposed into two sub-nodes [13]. Jones et al. [39] proposed and verified a method that automatically segments the user query stream into hierarchical tasks. Blundell [40] first proposed a Bayesian model for discovering hierarchical structure. Recently, Mehrotra et al. [13] proposed an efficient Bayesian nonparametric model to discover hierarchical task structure and showed that the model can recognize task structures composed of any number of subtasks.

3. Data and Methods

3.1. The Definition of Tasks in SAL

Jones et al. [39] first pointed out the importance of task extraction and proposed that a task is an atomic information requirement that results in one or more queries. On the basis of Jones's work, Ahmed et al. [5] proposed a more general definition of the complex web search task that captures the hierarchical structure of tasks and their associated subtasks. They proposed that each search task can be divided into one or more subtasks, each of which can also be considered an independent search task and can eventually be decomposed into simpler tasks, until each task represents an atomic information need of the user. In addition, Ahmed et al. defined a complex web search task as a multifaceted or multistep information requirement consisting of a set of related subtasks, each of which may recursively be considered a search task [5,41]. This definition is more general because it captures all kinds of tasks, whether complex or not.
Research by Rieh et al. [1] pointed out that it is very important to distinguish different types of learning structures in SAL. When the information is considered to be knowledge or belief from constructivist epistemology, searching will be conceptualized as a learning process. Users develop learning tasks and measure search results according to the conceptual changes of knowledge structure and beliefs, and they gradually change their knowledge structure to complete search-based learning tasks.
In addition, the authors of [2,26,42,43] pointed out that learners show different learning styles, which lead to different search preferences. Kolb et al. [44] classified learning styles into four categories: diverger, assimilator, converger, and accommodator; this theory has become one of the most widely used learning models for students [45]. Kolb also proposed that investigating learners' learning styles before a course can help teachers achieve better teaching effects [46]. On the basis of Kolb's theory, Li et al. [47] proposed that learning styles affect the learning process (e.g., planning tasks) and further affect students' learning outcomes. Vizeshfar et al. [48] pointed out that understanding learning styles helps in developing methods to improve students' performance.
According to the above studies, we consider that the tasks in SAL cover the following characteristics:
  • Hierarchical: Each process of SAL may cover one or more tasks, and each task may be divided into subtasks. In addition, subtasks can also be independent tasks. Therefore, an effective task extraction method should be able to accurately identify and represent this hierarchy.
  • Learning style: In the process of SAL, learners divide learning goals depending on their learning style. Therefore, it is also important to develop a learning style-oriented SAL task extraction method.
On the basis of the definition of tasks in complex search (Ahmed et al. [5]), we propose the following definition of SAL tasks:
SAL tasks are multifaceted or multistep learning activities generated by learning needs; in the process of satisfying a learning goal, the learner may partition the goal into a set of tasks according to their own learning style, and each task may recursively be partitioned into a set of lower-granularity tasks.
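This recursive, multi-branch definition maps naturally onto a rose-tree data structure (a node with any number of children, rather than a binary dendrogram). The sketch below is illustrative only, not the authors' implementation:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskNode:
    """A rose-tree node for SAL tasks: a node holds the queries mapped to it
    and any number of subtask children (each of which is itself a task)."""
    queries: List[str] = field(default_factory=list)
    children: List["TaskNode"] = field(default_factory=list)

    def all_queries(self) -> List[str]:
        # A task's query set is its own queries plus those of all subtasks.
        out = list(self.queries)
        for child in self.children:
            out.extend(child.all_queries())
        return out

# A hypothetical task ("play media files") with two subtasks, each of which
# could also stand alone as an independent task.
task = TaskNode(children=[
    TaskNode(queries=["file open picker", "FileOpenPicker sample"]),
    TaskNode(queries=["MediaPlayerElement", "play mp4 uwp"]),
])
print(task.all_queries())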

3.2. Data Collection and Analysis

3.2.1. Data Collection

To understand how students develop SAL tasks, we collected and analyzed students' searching and learning behaviors in Northeastern University's UWP course. First, we modified the Integrated Development Environment (IDE) for UWP to log students' learning processes; specifically, a snapshot of the student's program was logged whenever the student compiled their project. Second, we developed a browser plug-in to log students' searching activities. This plug-in was installed voluntarily by the students and logged the complete search interactions between students and search engines. To protect student privacy, we removed information that might identify students and anonymized student information. For this investigation, we analyzed data from two types of assignments, described below.
(1)
Assignments with clear targets. In the first week of the UWP course, students were asked to develop a multimedia player with file-opening and playing functions. The full solution needed to work on Windows 10 and be able to play video and audio files. During this process, students iterated between searching and learning until they completed the assignment. We assigned four further UWP programming assignments in the subsequent 4 weeks.
(2)
Assignments with open targets. In the sixth week of the UWP course, each student was asked to develop a UWP application with open targets. To understand the actual learning tasks, each student was asked to submit a study report in text form.
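For illustration, the two collectors can be thought of as emitting records of roughly the following shape. The field names and values are hypothetical, as the paper does not publish its schema:

# Hypothetical trace records combining the two collectors described above:
# the IDE plugin logs a code snapshot at each compile, and the browser
# plug-in logs queries and click events. Field names are illustrative.
snapshot_record = {
    "student_id": "anon_017",          # anonymized before analysis
    "timestamp": "2022-03-01T10:12:33",
    "event": "compile_snapshot",
    "classes_used": ["FileOpenPicker", "MediaPlayerElement"],
}
search_record = {
    "student_id": "anon_017",
    "timestamp": "2022-03-01T10:09:02",
    "event": "query",
    "query": "file open picker",
    "clicked_urls": ["https://docs.microsoft.com/uwp/api/windows.storage.pickers.fileopenpicker"],
}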

3.2.2. SAL Learning Style Analysis

To extract tasks in SAL, we must be able to identify learning styles. However, asking a learner about their learning style is not feasible in most cases. We believe that learning styles can be deduced by examining learning activities. Therefore, we manually analyzed the learning activities of the students in the UWP course. During our analysis, we observed two distinct learning styles. Figure 1a,b show two representative examples of SAL learning styles from the dataset we collected.
Figure 1a presents an example in which the student is learning to complete the first-week assignment. The student first issued the query "file open picker". Combining the click-on events with the context on the UWP website, we found that this query maps to the UWP class "FileOpenPicker". The second query, "FileSavePicker", and the third query, "FolderPicker", are the names of two classes in the UWP API. Through analysis, we found that "FileOpenPicker", "FileSavePicker", and "FolderPicker" are all classes under the namespace "Windows.Storage.Pickers". It is therefore evident that the student was learning by following the knowledge network of Microsoft's official UWP API reference. In addition, we believe that these searching activities belong to a common task (learning how to open a file for programming the media player).
Figure 1b shows another learning style. During the SAL process, the student issued five queries in time sequence. Combined with the context on the Microsoft UWP website, we found that these learning events fall into three main learning tasks (the first and second queries belong to the first learning task, the second and third queries belong to the second learning task, and the fourth query belongs to the third learning task). The analysis of the programming snapshots validated the deduction that the student completed three functions of the first-week assignment during this part of the learning process. Clearly, this student's learning path did not follow the knowledge network of the UWP API reference.
The aforementioned examples represent the types of learning styles we observed during the data analysis. We summarized the two learning styles as follows:
(1)
Learning according to the knowledge network (LKN). These learners prefer to learn by following authoritative knowledge points in the knowledge network and creating queries on the basis of these knowledge points. In our experiments, these students commonly searched and learned following Microsoft’s official UWP API reference.
(2)
Learning according to information needs (LIN). Learners who learn according to information needs prefer to learn according to tasks partitioned by their own subjective judgment. Compared with LKN learners, LIN learners exhibited more interleaved tasks. In the UWP course, these students usually explored the knowledge points needed to complete the learning task and learned them in their programming sequence.

3.3. Modeling Process

The first step in our SAL task extraction modeling process was to learn a high-level representation of SAL tasks for each learner. To this end, we propose an improved Bayesian rose tree (IBRT) model for representing hierarchies of SAL tasks. Although we cannot directly observe the task states during learning (they are latent variables), we can observe the search queries, search results, and source code; these features serve as predictor variables for the latent variables. Search queries can reflect the learning objects in tasks, as well as possible learning outcomes, and the source-code snapshots can reflect the task states. We then propose a task extraction method to accurately extract tasks from the tree structure.

3.3.1. Improved Bayesian Rose Tree Model

The IBRT model builds on the Bayesian rose tree (BRT) model for constructing multibranch hierarchies. Blundell et al. [40] first proposed an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks, based on a greedy probabilistic agglomeration approach. Mehrotra et al. [13] first applied the BRT model to extracting hierarchies of tasks and subtasks. Although the traditional BRT algorithm can produce a hierarchical representation of search tasks, it cannot be used for SAL task representation because it takes neither learning outcomes nor learning styles into account.
To solve these problems, we propose the IBRT model for representing hierarchies of SAL tasks. IBRT can sense both searching features and learning features, further providing accurate SAL task representation on the basis of learning styles. The authors of [2,44] pointed out that the query is the starting point of a SAL process and can reflect learning objectives and learning structure; furthermore, the authors of [2] proposed that queries can reflect the learning tasks of the learner. Therefore, we use query sets to represent the SAL task–subtask structures.
Specifically, we first propose Equation (1) in the IBRT model to build the representation of SAL tasks.
$$p(D_m \mid T_m) = \pi_{\mathrm{LKN}}\, g(D_m \mid T_m) + (1 - \pi_{\mathrm{LKN}}) \left[ \pi_{T_m} f(D_m) + (1 - \pi_{T_m}) \prod_{T_i \in \mathrm{ch}(T_m)} p(D_i \mid T_i) \right], \qquad (1)$$
where $D_m = \{Q_1, Q_2, \ldots, Q_m\}$ is the set of issued queries, sorted in time order, and $Q_i$ is the $i$-th issued query. $p(D_m \mid T_m)$ is the probability of the data $D_m$ given a partitioning by the tree $T_m$. $g(D_m \mid T_m)$ is the marginal probability of $D_m$ partitioned by $T_m$ when the learner's style is LKN. $\pi_{\mathrm{LKN}}$ is the prior probability that a learner's style is LKN, so $1 - \pi_{\mathrm{LKN}}$ is the prior probability that it is LIN. $\pi_{T_m}$ is the mixture probability, i.e., the prior probability that all data points in $T_m$ belong to a single cluster. $f(D_m)$ is the marginal probability of the data $D_m$. $p(D_i \mid T_i)$ is the probability that the query set $D_i$ is partitioned by the subtree $T_i$, computed recursively, and $\mathrm{ch}(T_m)$ denotes the children of $T_m$.
At the beginning of the modeling process, each query in $D_m$ is considered an independent tree; specifically, $T_i = \{x_i\}$, where $x_i$ is the $i$-th query in $D_m$. A query may independently be a task or part of a task. We recursively define the representation of the SAL task hierarchy, where a tree $T$ may contain queries $T = \{x_1, x_2, \ldots, x_n\}$ or may be split into subtrees $T = \{T_1, T_2, \ldots, T_n\}$, where each $T_i$ $(1 \le i \le n)$ is a subtree of $T$. This allows us to consider $T$ a hierarchical tree structure for the task–subtask relationship, where tasks and subtasks are represented by query sets.
During the modeling process, at each step, the IBRT method chooses two trees $T_i$ and $T_j$ from $T$ and merges them into a new tree $T_m$. In this paper, we adopt the same tree merging strategy as BRT, as shown in Figure 2. Inspired by the study of Mehrotra and Yilmaz [13], at each step, IBRT greedily finds and merges the two trees that maximize the following ratio:
$$\frac{p(D_m \mid T_m)}{p(D_i \mid T_i)\, p(D_j \mid T_j)}, \qquad (2)$$
where $p(D_m \mid T_m)$ is the probability of the partition of query set $D_m$ given the tree $T_m$. All queries in $D_m$ are leaves of tree $T_m$, and $p(D_m \mid T_m)$ is computed recursively through its child nodes.
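A minimal sketch of this greedy agglomeration follows, assuming the marginal terms $g$ and $f$ (defined in the next two subsections) and the mixture prior are supplied as callables. For brevity, only the "join" merge is shown, whereas BRT also evaluates "absorb" and "collapse" operations; all names are illustrative, and probabilities are assumed positive:

import itertools
import math

def leaves(tree):
    # A tree is either a leaf query or a list of child trees (a rose tree).
    if not isinstance(tree, list):
        return [tree]
    return [q for c in tree for q in leaves(c)]

def p_tree(tree, pi_lkn, g, f, pi_mix):
    """Recursive likelihood of Equation (1): a mixture of the LKN term g,
    the single-cluster term f, and the product over child subtrees."""
    if not isinstance(tree, list):            # leaf: a single query
        return f([tree])
    data = leaves(tree)
    prod_children = math.prod(p_tree(c, pi_lkn, g, f, pi_mix) for c in tree)
    pi_t = pi_mix(len(tree))                  # Equation (5)
    return pi_lkn * g(data) + (1 - pi_lkn) * (
        pi_t * f(data) + (1 - pi_t) * prod_children)

def ibrt_agglomerate(queries, pi_lkn, g, f, pi_mix):
    """Greedy BRT-style agglomeration: repeatedly merge the pair of trees
    maximizing the ratio of Equation (2) until a single tree remains."""
    forest = list(queries)                    # each query starts as its own tree
    while len(forest) > 1:
        best = None
        for i, j in itertools.combinations(range(len(forest)), 2):
            merged = [forest[i], forest[j]]   # the 'join' merge only
            ratio = p_tree(merged, pi_lkn, g, f, pi_mix) / (
                p_tree(forest[i], pi_lkn, g, f, pi_mix)
                * p_tree(forest[j], pi_lkn, g, f, pi_mix))
            if best is None or ratio > best[0]:
                best = (ratio, i, j, merged)
        _, i, j, merged = best
        forest = [t for k, t in enumerate(forest) if k not in (i, j)] + [merged]
    return forest[0]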
(1)
Calculating the marginal probability for LKN
For learners whose learning style is LKN, $g(D_m \mid T_m)$ is calculated according to the knowledge network. As discussed above, in our experiment, students usually searched and learned using Microsoft's official UWP documents. Thus, we propose calculating $g(D_m \mid T_m)$ according to the hierarchical associations between classes in the application programming interface provided by Microsoft's official website for UWP app development.
As discussed in Section 3.2.2, students always learned following the Microsoft API reference for UWP. Thus, we built a tree structure for the knowledge network on the basis of the API reference. Specifically, on the API reference, each class belongs to a unique namespace, and a namespace contains multiple classes. Accordingly, we built each namespace as a subtree of the tree structure, while the classes under the namespace constituted the leaves of the subtree.
All namespaces belong to the set of Windows UWP namespaces. Therefore, these subtrees were used to construct a three-layer tree structure for the UWP knowledge network, in which all UWP classes are leaves. Figure 3 shows the tree structure of the UWP API reference, where "Windows.UI.Xaml.Media" is a namespace that contains classes such as "AcrylicBrush" and "ArcSegment". To keep the representation of the tree structure consistent, we removed the nodes other than leaves from the knowledge network tree.
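A minimal sketch of this construction, assuming fully qualified class names have already been crawled from the API reference (the tree representation here is a simplification of the three-layer structure described above):

from collections import defaultdict

def build_knowledge_tree(api_classes):
    """Build the three-layer UWP knowledge tree: root -> namespaces ->
    classes (classes are the leaves). The namespace of each class is
    derived here as everything before the last dot."""
    tree = defaultdict(list)
    for fq_name in api_classes:
        namespace, _, cls = fq_name.rpartition(".")
        tree[namespace].append(cls)
    return {"Windows UWP namespaces": dict(tree)}

api = [
    "Windows.UI.Xaml.Media.AcrylicBrush",
    "Windows.UI.Xaml.Media.ArcSegment",
    "Windows.Storage.Pickers.FileOpenPicker",
    "Windows.Storage.Pickers.FileSavePicker",
]
print(build_knowledge_tree(api))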
Second, we mapped user queries into the knowledge network to build a tree structure for each student. Specifically, we mapped user queries to the corresponding nodes of the UWP knowledge network tree by semantic similarity. To this end, we crawled websites about UWP programming to supplement our corpus and trained a word2vec model. We calculated the semantic similarity from two aspects: (i) the semantic distance between queries and classes; (ii) whether the class associated with a query and a candidate class (newly added classes being one of the learning outcomes) belong together in the knowledge network tree structure. We propose Equation (3) for mapping a user query to the corresponding tree node:
$$\mathrm{SemSim}(q_i, \mathrm{class}_j) = \alpha \cos(v_{q_i}, v_{\mathrm{class}_j}) + \beta\, \mathrm{belong}(\mathrm{class}_{q_i}, \mathrm{class}_j), \qquad (3)$$
where $v_{q_i}$ is the semantic vector of $q_i$, and $v_{\mathrm{class}_j}$ is the semantic vector of $\mathrm{class}_j$. $\mathrm{belong}(\cdot)$ is an indicator function: when $\mathrm{class}_{q_i}$ and $\mathrm{class}_j$ belong to the same namespace, $\mathrm{belong}(\mathrm{class}_{q_i}, \mathrm{class}_j) = 1$; otherwise, $\mathrm{belong}(\mathrm{class}_{q_i}, \mathrm{class}_j) = 0$.
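A sketch of this mapping, assuming query and class embeddings come from the trained word2vec model; the weights alpha and beta are illustrative, as the paper does not report the values used, and the namespace of the class a query currently maps to is passed in directly:

import numpy as np

def sem_sim(v_query, v_class, same_namespace, alpha=0.7, beta=0.3):
    """Equation (3): weighted sum of embedding cosine similarity and the
    namespace-membership indicator belong()."""
    cos = float(np.dot(v_query, v_class) /
                (np.linalg.norm(v_query) * np.linalg.norm(v_class)))
    return alpha * cos + beta * (1.0 if same_namespace else 0.0)

def map_query_to_class(query_vec, class_vecs, namespaces, query_namespace):
    """Map a query to the API class maximizing SemSim. `class_vecs` maps
    class name -> embedding; `namespaces` maps class name -> namespace."""
    return max(class_vecs,
               key=lambda c: sem_sim(query_vec, class_vecs[c],
                                     namespaces[c] == query_namespace))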
Lastly, $g(D_m \mid T_m)$ is calculated from the tree edit distance between a student's knowledge network tree and $T_m$, according to the method proposed by Dinler et al. [49].
(2)
Calculating the marginal probability for LIN
For students who study according to their learning needs, we propose the following method to calculate the marginal probability $f(D_m)$:
$$f(D_m) = \prod_{k=1}^{n} \prod_{i=1}^{|Q|} \prod_{j=1}^{|Q|} p\big(r_k(q_i, q_j) \mid \alpha_k, \beta_k\big), \qquad (4)$$
where $r_k(q_i, q_j)$ represents the degree of association strength between two nodes in the tree under the $k$-th feature, and $|Q|$ is the number of queries in $D_m$. We calculated $r$ on the basis of the SAL features extracted from students' web search data and learning outcomes. Table 1 shows the influencing factors we selected to calculate the degree of association strength between user-issued queries in the SAL process. We employed the method proposed by Mehrotra et al. [6] to calculate $\pi_{T_m}$ as follows:
$$\pi_{T_m} = 1 - (1 - \gamma)^{\mathrm{child}(T_m) - 1}, \qquad (5)$$
where $\mathrm{child}(T_m)$ denotes the number of children of the tree $T_m$, and $\gamma$ is a hyperparameter.
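The following sketch computes Equations (4) and (5) under stated assumptions: the exact parametric form of $p(r_k(q_i, q_j) \mid \alpha_k, \beta_k)$ is not fully specified in the text, so a Bernoulli likelihood with a Beta-mean parameter is assumed here, and the gamma default is illustrative:

import itertools

def pi_mixture(num_children, gamma=0.5):
    """Equation (5): prior that all points in T_m form a single cluster."""
    return 1.0 - (1.0 - gamma) ** (num_children - 1)

def f_marginal(queries, features, alphas, betas):
    """A sketch of Equation (4): a product, over the n association features
    and over query pairs, of the pairwise strength r_k scored under a
    Beta(alpha_k, beta_k)-parameterized likelihood (assumed form)."""
    p = 1.0
    for k, feat in enumerate(features):           # feat = r_k(q_i, q_j) callable
        theta = alphas[k] / (alphas[k] + betas[k])  # mean of Beta(alpha_k, beta_k)
        for qi, qj in itertools.combinations(queries, 2):
            r = feat(qi, qj)                      # association strength in [0, 1]
            p *= theta ** r * (1.0 - theta) ** (1.0 - r)
    return p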
IBRT adopts influencing factors from three main aspects (a feature-computation sketch follows this list):
  • Queries: (i) Queries with identical or similar terms tend to belong to the same learning tasks; we, thus, used this factor to capture task relationships between a pair of search queries, adopting cosine similarity, edit distance, and Jaccard distance. (ii) Queries sharing UWP-related terms tend to belong to the same tasks; we adopted the 'percentage of identical UWP terms between queries' to capture this feature. (iii) Pairs of search queries that are closer in the semantic space tend to belong to the same task; we, thus, used the 'cosine distance between query semantics' to capture this factor.
  • Search results: Search queries that belong to the same task tend to have similar search results. Therefore, search results are also an important factor for measuring the relationships between a pair of queries. We used the 'average cosine similarity between term sets of clicked on URLs after two queries', the 'average edit distance between term sets of clicked on URLs after two queries', etc. to calculate these relationships.
  • Learning outcomes. We used snapshots to analyze how learners develop tasks since snapshots are the bridge linking queries and learning outcomes. In this paper, we used the ‘cosine similarity between class sets of snapshots after two queries’ and the ‘edit distance between class sets of snapshots after two queries’ to calculate these relationships.
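A few of these pairwise factors can be sketched as follows. The normalization chosen for the UWP-term overlap is an assumption, as the paper does not define it precisely, and the vocabulary is a hypothetical stand-in for the crawled UWP corpus:

def jaccard(a, b):
    """Jaccard similarity between two term sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def edit_distance(s, t):
    """Levenshtein distance between two strings (one-row dynamic program)."""
    dp = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, dp[0] = dp[0], i
        for j, ct in enumerate(t, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (cs != ct))
    return dp[-1]

def uwp_term_overlap(q1_terms, q2_terms, uwp_vocab):
    """Percentage of identical UWP-related terms between two queries,
    relative to the UWP terms appearing in either query (assumed form)."""
    a = set(q1_terms) & uwp_vocab
    b = set(q2_terms) & uwp_vocab
    return len(a & b) / len(a | b) if a | b else 0.0

# Example with a tiny hypothetical UWP vocabulary.
vocab = {"fileopenpicker", "filesavepicker", "folderpicker"}
print(jaccard(["file", "open", "picker"], ["fileopenpicker", "sample"]))   # 0.0
print(uwp_term_overlap(["fileopenpicker", "uwp"], ["fileopenpicker"], vocab))  # 1.0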

3.3.2. Extraction of the Tasks in SAL Based on IBRT

The final step in our algorithm was to extract tasks from the IBRT structure, using the dataset collected from the UWP course. To find these tasks, we employed the task coherence score used with BRT, computed with the method developed by Mehrotra [6]. Using this measure, we identified whether a subtree constitutes a single task or should be divided into multiple tasks. For each student, we constructed an IBRT to represent their learning status and then computed the task coherence score for each subtree at each level.
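A sketch of this extraction step, reusing the rose-tree representation and the `leaves` helper from the agglomeration sketch above; the coherence function and threshold are placeholders standing in for the coherence score of Mehrotra [6]:

def extract_tasks(tree, coherence, threshold=0.5):
    """Cut the IBRT structure into tasks: if a subtree's queries are
    coherent enough, emit its query set as one task; otherwise recurse
    into its children. `coherence` scores a query set in [0, 1]; the
    threshold value is illustrative."""
    if not isinstance(tree, list):            # a single query is an atomic task
        return [[tree]]
    qs = leaves(tree)                          # helper defined earlier
    if coherence(qs) >= threshold:
        return [qs]
    tasks = []
    for child in tree:
        tasks.extend(extract_tasks(child, coherence, threshold))
    return tasks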

4. Experimental Evaluation

In this section, we describe the experimental setup and evaluate the proposed SAL task extraction method from two aspects. First, we compare the performance of IBRT with that of existing state-of-the-art task extraction methods on the manually labeled UWP dataset. Second, we collect subjective metrics (opinions from the participants) to evaluate the perceived extraction accuracy and task hierarchy structure of IBRT.

4.1. Experimental Setup

We performed several experiments to study the performance of our approach, using the data described in Section 3.2.1. We collected repositories with 402 programming assignments (335 with clear targets and 67 with open targets) from 67 Northeastern University students. For each student, we collected an average of 63 snapshots of their learning outcomes and logged an average of 102 queries and 251 click-on events (these data were time-stamped; we anonymized students' search logs and used the ZEALOUS algorithm to protect student privacy [50]). Lastly, we collected a study report from each student.
We used the best-performing hyperparameters for our method and the baseline methods. Additionally, for $\pi_{\mathrm{LKN}}$, we employed the proportion of the LKN learning style found during manual classification, $\pi_{\mathrm{LKN}} = 0.4$, which led to the best performance with our method; this value was used for all experiments in this paper. To evaluate our model, we manually classified and annotated the UWP dataset, and all methods were trained on this dataset.

4.2. Comparison with State-of-the-Art Methods on UWP Dataset

To evaluate our model, we compared its performance with that of seven existing state-of-the-art task extraction methods (BRT [13], BHC [51], LDA-Hawkes [37], Bestlink-SVM [52], Cluster-SVM [53], QC-HTC [54], and QC-WCC [54]). For a fair comparison, we ran these methods on the same data. Figure 4 compares the proposed model with the state-of-the-art task identification methods in terms of F1 score.
Figure 4 shows that IBRT significantly outperformed all baseline algorithms. We first observe that BRT (for a fair comparison, we employed in BRT all the factors that we employed in IBRT) performed worse than IBRT; the reason is that BRT cannot adapt to students' learning patterns, whereas IBRT is optimized for the two kinds of learning styles. QC-WCC and QC-HTC showed a difference in performance because they use different postprocessing strategies.
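The paper does not spell out how the F1 score is computed for task extraction; a common choice in this literature is pairwise F1 over same-task query pairs, sketched below for reference:

import itertools

def pairwise_f1(predicted_tasks, gold_tasks):
    """F1 over query pairs: a pair counts as positive when both queries
    fall in the same task. This pairwise definition is a common choice in
    task-extraction evaluations, assumed here rather than taken from the paper."""
    def same_task_pairs(tasks):
        pairs = set()
        for task in tasks:
            pairs.update(itertools.combinations(sorted(task), 2))
        return pairs
    pred = same_task_pairs(predicted_tasks)
    gold = same_task_pairs(gold_tasks)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

print(pairwise_f1([["q1", "q2"], ["q3"]], [["q1", "q2", "q3"]]))  # 0.5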

4.3. Subjective Metrics

In this section, we apply subjective analysis to assess the performance of the proposed method.
We first describe the subjective analysis setup. On the basis of the experiments discussed in Section 4.1, we invited the participating students from the UWP course to conduct a subjective analysis of IBRT and the baseline methods. Each student was asked to evaluate their own search log, recalling their own learning process and rating the performance of the different algorithms. For a fair comparison, we anonymized the algorithm names and shuffled the order of the extraction results when displaying them to students. Specifically, each student was asked to score the performance according to the following two aspects.
(1)
The accuracy of SAL task extraction
For this, the participants were asked to score the accuracy of each method. To this end, each student was asked to answer the following question:
“Does the task extraction result match the real situation when you are learning?”
We list the scoring principles in Table 2.
(2)
The accuracy of the hierarchy
For this, the participants were asked to evaluate the hierarchy of each method. To this end, each student was asked to answer the following question:
“Do you think the hierarchy of your learning process is valid?”
We list the scoring principles in Table 3.
Table 4 presents the evaluation results for the accuracy of SAL task extraction; a higher score indicates better performance. IBRT obtained the best score, 2.7 (about 85% of students believed that the extraction results completely matched their learning process). BRT performed second best with 2.5, which also validates the performance of the BRT model in task extraction. In addition, the student evaluations were consistent with our experimental results in Section 4.2.
Table 5 presents the evaluation results for the accuracy of the hierarchy; a higher score indicates a better hierarchy. IBRT obtained the best score, 2.3 (about 69% of students believed that the extraction results completely matched their learning hierarchy). BRT performed second best with 2.0. BHC presented the worst score, as it adopts a binary tree structure, which is usually not the case for real learning tasks.

5. Conclusions

Studies on SAL are promising for designing advanced search and education systems. To improve the experience of web learners, it is important to accurately represent and extract SAL tasks. On the basis of learning styles and similarity metrics, this study proposed the IBRT model to implement structured SAL task representations for learners and, thereafter, extracted tasks on the basis of that structure. To test the proposed method, we collected and analyzed experimental data from the Northeastern University UWP course and performed a series of experiments against the collected datasets. In comparison with state-of-the-art task extraction methods, the experimental results indicate that our proposed method significantly improves the performance of SAL task extraction in terms of both objective and subjective metrics.
The success of the IBRT model implies that SAL task extraction studies combining both searching and learning activities are promising. In the IBRT model, searching activities are conceptualized as a part of learning that further affects learning outcomes. The analysis and experimental results lead us to believe that the IBRT model can further explore the relationships between searching and learning activities.
Our results also demonstrate the advantage of employing learning styles in SAL task extraction studies. Web learners with different learning styles have different task planning preferences. These preferences lead learners to make different task plans for the same learning tasks. As reflected in our empirical results, the learning style employed in SAL task extraction can be successful in predicting web learning tasks.
Although our findings are promising, our study has some limitations. First, the learning styles of a learner are highly dependent on the learning tasks. In this study, we manually analyzed and classified learning styles for UWP programming; methods to automatically extract learning styles are worthy of further investigation. Second, there were limitations to the experimental data: we used the collected Northeastern University UWP coursework data as the dataset for the SAL study, as it is the only dataset available to us that contains both search activities and learning outcomes. We expect the proposed method to be validated on large-scale datasets in the future.
Future studies should address the following aspects: (1) developing methods for automatically extracting learning styles; (2) using latent features to predict learning outcomes, which would help expand the dataset and further improve the performance of SAL task extraction; (3) considering the time dimension to predict SAL tasks; (4) developing task-oriented personalized recommendation systems for web learners; (5) optimizing the learning path based on SAL tasks. One of our goals is to adopt this approach in recommendation systems for university courses, and we will further explore the SAL path on the basis of the results obtained.

Author Contributions

Conceptualization, P.L. and B.Z.; Data curation, P.L. and Y.Z.; Formal analysis, P.L., B.Z. and Y.Z.; Funding acquisition, B.Z.; Investigation, P.L.; Methodology, P.L. and Y.Z.; Resources, P.L.; Software, P.L.; Validation, P.L., B.Z. and Y.Z.; Writing—original draft, P.L.; Writing—review & editing, P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Project of the National Natural Science Foundation of China: U1908212.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rieh, S.Y.; Collins-Thompson, K.; Hansen, P.; Lee, H.-J. Towards searching as a learning process: A review of current perspectives and future directions. J. Inf. Sci. 2016, 42, 19–34. [Google Scholar] [CrossRef]
  2. Vakkari, P. Searching as learning: A systematization based on literature. J. Inf. Sci. 2016, 42, 7–18. [Google Scholar] [CrossRef]
  3. Zhang, P.; Soergel, D. Process patterns and conceptual changes in knowledge representations during information seeking and sensemaking: A qualitative user study. J. Inf. Sci. 2016, 42, 59–78. [Google Scholar] [CrossRef] [Green Version]
  4. Liu, J. Deconstructing search tasks in interactive information retrieval: A systematic review of task dimensions and predictors. Inf. Process. Manag. 2021, 58, 102522. [Google Scholar] [CrossRef]
  5. Awadallah, A.H.; White, R.W.; Pantel, P.; Dumais, S.T.; Wang, Y.-M. Supporting complex search tasks. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, Shanghai, China, 3–7 November 2014; ACM: New York, NY, USA, 2014; pp. 829–838. [Google Scholar]
  6. Mehrotra, R.; Yilmaz, E. Terms, topics & tasks: Enhanced user modelling for better personalization. In Proceedings of the 2015 International Conference on the Theory of Information Retrieval, Northampton, MA, USA, 27–30 September 2015; ACM: New York, NY, USA, 2015; pp. 131–140. [Google Scholar]
  7. O’Brien, H.L.; Arguello, J.; Capra, R. An empirical study of interest, task complexity, and search behaviour on user engagement. Inf. Process. Manag. 2020, 57, 102226. [Google Scholar] [CrossRef]
  8. Wang, H.; Song, Y.; Chang, M.-W.; He, X.; Hassan, A.; White, R.W. Modeling action-level satisfaction for search task satisfaction prediction. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ‘14), Gold Coast, Australia, 6–11 July 2014; ACM: New York, NY, USA, 2014; pp. 123–132. [Google Scholar] [CrossRef]
  9. Zhou, X.; Chen, J.; Wu, B.; Jin, Q. Discovery of Action Patterns and User Correlations in Task-Oriented Processes for Goal-Driven Learning Recommendation. IEEE Trans. Learn. Technol. 2014, 7, 231–245. [Google Scholar] [CrossRef]
  10. Shi, J.; Li, H.; Zhou, J.; Pang, Z.; Wang, C. Optimizing emotion–cause pair extraction task by using mutual assistance single-task model, clause position information and semantic features. J. Supercomput. 2021, 78, 4759–4778. [Google Scholar] [CrossRef]
  11. Aliannejadi, M.; Harvey, M.; Costa, L.; Pointon, M.; Crestani, F. Understanding Mobile Search Task Relevance and User Behaviour in Context. In Proceedings of the 2019 Conference on Human Information Interaction and Retrieval (CHIIR ‘19), Scotland, UK, 10–14 March 2019; ACM: New York, NY, USA, 2019; pp. 143–151. [Google Scholar] [CrossRef] [Green Version]
  12. Liu, J.; Sarkar, S.; Shah, C. Identifying and Predicting the States of Complex Search Tasks. In Proceedings of the 2020 Conference on Human Information Interaction and Retrieval, Vancouver, BC, Canada, 14–18 March 2020; ACM: New York, NY, USA, 2020; pp. 193–202. [Google Scholar] [CrossRef] [Green Version]
  13. Mehrotra, R.; Yilmaz, E. Extracting Hierarchies of Search Tasks & Subtasks via a Bayesian Nonparametric Approach. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ‘17), Tokyo, Japan, 7–11 August 2017; ACM: New York, NY, USA, 2017; pp. 285–294. [Google Scholar] [CrossRef] [Green Version]
  14. Collins-Thompson, K.; Hansen, P.; Hauff, C. Search as Learning (Dagstuhl Seminar 17092). Dagstuhl. Rep. 2017, 7, 135–162. [Google Scholar]
  15. Marchionini, G. Exploratory search: From finding to understanding. Commun. ACM 2006, 49, 41–46. [Google Scholar] [CrossRef]
  16. Reynolds, R.; Meyers, E.; Ghosh, S.; Novin, A. Beyond Bloom’s Taxonomy: Integrating “Searching as Learning” and E-Learning Research Perspectives. Proc. Assoc. Inf. Sci. Technol. 2018, 55, 726–729. [Google Scholar] [CrossRef]
  17. Kuhlthau, C. Seeking Meaning; Libraries Unlimited: Westport, CT, USA, 2004. [Google Scholar]
  18. Zarro, M. Developing a dual-process information seeking model for exploratory search. In Proceedings of the HCIR 2012, Cambridge, MA, USA, 4–5 October 2012. [Google Scholar]
  19. Odijk, D.; White, R.W.; Awadallah, A.H.; Dumais, S.T. Struggling and Success in Web Search. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (CIKM ‘15), Melbourne, Australia, 18–23 October 2015; ACM: New York, NY, USA, 2015; pp. 1551–1560. [Google Scholar] [CrossRef] [Green Version]
  20. Proaño-Ríos, V.; González-Ibáñez, R. Dataset of Search Results Organized as Learning Paths Recommended by Experts to Support Search as Learning. Data 2020, 5, 92. [Google Scholar] [CrossRef]
  21. Taibi, D.; Fulantelli, G.; Marenzi, I.; Nejdl, W.; Rogers, R.; Ijaz, A. SaR-WEB: A Semantic Web Tool to Support Search as Learning Practices and Cross-Language Results on the Web. In Proceedings of the 2017 IEEE 17th International Conference on Advanced Learning Technologies (ICALT), Timisoara, Romania, 3–7 July 2017; pp. 522–524. [Google Scholar] [CrossRef]
  22. Moraveji, N.; Russell, D.; Bien, J.; Mease, D. Measuring improvement in user search performance resulting from optimal search tips. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ‘11), Beijing, China, 24–28 July 2011; ACM: New York, NY, USA, 2011; pp. 355–363. [Google Scholar]
  23. Sun, K.; Zhu, J. Searching and Learning Discriminative Regions for Fine-Grained Image Retrieval and Classification. IEICE Trans. Inf. Syst. 2022, E105.D, 141–149. [Google Scholar] [CrossRef]
  24. Liu, J.; Belkin, N.J. Searching vs. writing: Factors affecting information use task performance. Proc. Am. Soc. Inf. Sci. Technol. 2012, 49, 1–10. [Google Scholar] [CrossRef]
  25. Bron, M.; van Gorp, J.; Nack, F.; de Rijke, M.; Vishneuski, A.; de Leeuw, S. A subjunctive exploratory search interface to support media studies researchers. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ‘12), Portland, OR, USA, 12–16 August 2012; ACM: New York, NY, USA, 2012; pp. 425–434. [Google Scholar]
  26. Vakkari, P.; Huuskonen, S. Search effort degrades search output but improves task outcome. J. Assoc. Inf. Sci. Technol. 2012, 63, 657–670. [Google Scholar] [CrossRef]
  27. Margulieux, L.E.; Catrambone, R.; Schaeffer, L.M. Varying effects of subgoal labeled expository text in programming, chemistry, and statistics. Instr. Sci. 2018, 46, 707–722. [Google Scholar] [CrossRef] [Green Version]
  28. Marchionini, G. Search, sense making and learning: Closing gaps. Inf. Learn. Sci. 2019, 120, 74–86. [Google Scholar] [CrossRef]
  29. Yigit-Sert, S.; Altingovde, I.S.; Macdonald, C.; Ounis, I.; Ulusoy, Ö. Explicit diversification of search results across multiple dimensions for educational search. J. Assoc. Inf. Sci. Technol. 2020, 72, 315–330. [Google Scholar] [CrossRef]
  30. Liu, C.; Song, X. How do Information Source Selection Strategies Influence Users’ Learning Outcomes’. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (CHIIR ‘18), New Brunswick, NJ, USA, 11–15 March 2018; ACM: New York, NY, USA, 2018; pp. 257–260. [Google Scholar] [CrossRef]
  31. Catrambone, R. Aiding subgoal learning: Effects on transfer. J. Educ. Psychol. 1995, 87, 5–17. [Google Scholar] [CrossRef]
  32. Liu, J.; Belkin, N.J. Personalizing information retrieval for multi-session tasks: Examining the roles of task stage, task type, and topic knowledge on the interpretation of dwell time as an indicator of document usefulness. J. Assoc. Inf. Sci. Technol. 2014, 66, 58–81. [Google Scholar] [CrossRef]
  33. Marulli, F.; Verde, L.; Marrone, S.; Barone, R.; De Biase, M.S. Evaluating Efficiency and Effectiveness of Federated Learning Approaches in Knowledge Extraction Tasks. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
  34. Wang, T.X.; Lu, W.H. Constructing Complex Search Tasks with Coherent Subtask Search Goals. ACM Trans. Asian Lang. Inf. Process. 2016, 15, 6.1–6.29. [Google Scholar] [CrossRef]
  35. He, D.; Göker, A.; Harper, D.J. Combining evidence for automatic Web session identification. Inf. Process. Manag. 2002, 38, 727–742. [Google Scholar] [CrossRef]
  36. Kotov, A.; Bennett, P.N.; White, R.W.; Dumais, S.T.; Teevan, J. Modeling and analysis of cross-session search tasks. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ‘11), Beijing, China, 24–28 July 2011; ACM: New York, NY, USA, 2011; pp. 5–14. [Google Scholar] [CrossRef]
  37. Li, L.; Deng, H.; Dong, A.; Chang, Y.; Zha, H. Identifying and labeling search tasks via query-based hawkes processes. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; ACM: New York, NY, USA, 2014; pp. 731–740. [Google Scholar]
  38. Mittal, A.; Pagalthivarthi, K.V. Use of Relational and Conceptual Graphs in Supporting E-Learning Tasks. Int. J. E Learn. 2005, 4, 69–82. [Google Scholar]
  39. Jones, R.; Klinkner, K.L. Beyond the session timeout: Automatic hierarchical segmentation of search topics in query logs. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (CIKM ‘08), Napa Valley, CA, USA, 26–30 October 2008; ACM: New York, NY, USA, 2008; pp. 699–708. [Google Scholar] [CrossRef]
  40. Blundell, C.; Teh, Y.W. Bayesian hierarchical community discovery. In Proceedings of the 26th International Conference on Neural Information Processing Systems—Volume 1 (NIPS’13), Lake Tahoe, NV, USA, 5–10 December 2013; Curran Associates Inc.: Red Hook, NY, USA, 2013; pp. 1601–1609. [Google Scholar]
  41. Agosti, M.; Fuhr, N.; Toms, E.; Vakkari, P. Evaluation methodologies in information retrieval dagstuhl seminar 13441. ACM SIGIR Forum 2014, 48, 36–41. [Google Scholar] [CrossRef]
  42. Mohssine, B.; Mohammed, A.; Abdelwahed, N.; Mohammed, T. Adaptive Help System Based on Learners ‘Digital Traces’ and Learning Styles. Int. J. Emerg. Technol. Learn. (IJET) 2021, 16, 288–294. [Google Scholar] [CrossRef]
  43. Hassan, M.A.; Habiba, U.; Majeed, F.; Shoaib, M. Adaptive gamification in e-learning based on students’ learning styles. Interact. Learn. Environ. 2019, 29, 545–565. [Google Scholar] [CrossRef]
  44. Kolb, A.Y.; Kolb, D.A. Experiential learning theory: A dynamic, holistic approach to management learning, education and development. In The SAGE Handbook of Management Learning, Education and Development; Armstrong, S.J., Fukami, C.V., Eds.; SAGE Publications Ltd.: Southend Oaks, CA, USA, 2009; pp. 42–68. [Google Scholar] [CrossRef] [Green Version]
  45. Stander, J.; Grimmer, K.; Brink, Y. Learning styles of physiotherapists: A systematic scoping review. BMC Med. Educ. 2019, 19, 2. [Google Scholar] [CrossRef]
  46. Kolb, A.Y.; Kolb, D.A. Experiential learning theory as a guide for experiential educators in higher education. Exp. Learn. Teach. High. Educ. 2017, 1, 7–14. [Google Scholar]
  47. Li, C.; Yang, Y.; Jing, Y. Formulation of teaching strategies for graduation internship based on the experiential learning styles of nursing undergraduates: A non-randomized controlled trial. BMC Med. Educ. 2022, 22, 153. [Google Scholar] [CrossRef]
  48. Vizeshfar, F.; Torabizadeh, C. The effect of teaching based on dominant learning style on nursing students’ academic achievement. Nurse Educ. Pract. 2018, 28, 103–108. [Google Scholar] [CrossRef] [PubMed]
  49. Dinler, D.; Tural, M.K.; Ozdemirel, N.E. Centroid based Tree-Structured Data Clustering Using Vertex/Edge Overlap and Graph Edit Distance. Ann. Oper. Res. 2020, 289, 85–122. [Google Scholar] [CrossRef]
  50. Gotz, M.; Machanavajjhala, A.; Wang, G.; Xiao, X.; Gehrke, J. Publishing Search Logs—A Comparative Study of Privacy Guarantees. IEEE Trans. Knowl. Data Eng. 2012, 24, 520–532. [Google Scholar] [CrossRef]
  51. Blundell, C.; Teh, Y.W.; Heller, K.A. Bayesian Rose Trees. Comput. Sci. 2012, 22, 217. [Google Scholar]
  52. Wang, H.; Song, Y.; Chang, M.-W.; He, X.; White, R.W.; Chu, W. Learning to extract cross-session search tasks. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13–17 May 2013; ACM: New York, NY, USA, 2013; pp. 1353–1364. [Google Scholar] [CrossRef]
  53. Finley, T.; Joachims, T. Supervised clustering with support vector machines. In Proceedings of the 22nd International Conference on Machine Learning (ICML ‘05), Bonn, Germany, 7–11 August 2005; ACM: New York, NY, USA, 2005; pp. 217–224. [Google Scholar] [CrossRef]
  54. Liao, Z.; Song, Y.; He, L.-W.; Huang, Y. Evaluating the effectiveness of search task trails. In Proceedings of the 21st International Conference on World Wide Web (WWW ‘12), Lyon, France, 16–20 April 2012; ACM: New York, NY, USA, 2012; pp. 489–498. [Google Scholar] [CrossRef]
Figure 1. Examples of SAL learning styles ((a) an example of the first learning style, (b) an example of the second learning style).
Figure 2. IBRT subtree merging strategy.
Figure 3. The tree structure of the UWP API reference.
Figure 4. F1 score comparison on UWP dataset.
Table 1. The influencing factors.

Query
  Cosine similarity between term sets of two queries: cosin(TermSet(q_i), TermSet(q_j))
  Edit distance between term sets of two queries: edit(TermSet(q_i), TermSet(q_j))
  Jaccard distance between term sets of two queries: jac(TermSet(q_i), TermSet(q_j))
  Percentage of identical UWP terms between queries: pct_uwp_terms(TermSet(q_i), TermSet(q_j))
  Cosine semantic distance between queries: sem_cosin(TermSet(q_i), TermSet(q_j))
Search results
  Average cosine similarity between term sets of clicked on URLs after two queries: avg_url_cosin(URLTermSet(q_i), URLTermSet(q_j))
  Average edit distance between term sets of clicked on URLs after two queries: avg_url_edit(URLTermSet(q_i), URLTermSet(q_j))
  Average cosine semantic similarity between clicked on URLs after two queries: url_avg_context_cosin(URLTermSet(q_i), URLTermSet(q_j))
  Cosine distance of UWP terms contained in clicked links after two queries: url_avg_terms_cosin(URLTermSet(q_i), URLTermSet(q_j))
  Cosine similarity between UWP class sets of two search results: avg_result_class_cosin(SearchResult(q_i), SearchResult(q_j))
Learning outcomes
  Cosine similarity between class sets of snapshots after two queries: snapshot_cosin(Snapshot(q_i), Snapshot(q_j))
  Edit distance between class sets of snapshots after two queries: snapshot_edit(Snapshot(q_i), Snapshot(q_j))
Table 2. The scoring principles for question 1.

Score: Scoring principle
  0: Mismatches the real situation
  1: Only a small amount matches the real situation
  2: Mostly matches the real situation
  3: Completely matches the real situation
Table 3. The scoring principles for question 2.

Score: Scoring principle
  0: Invalid
  1: Valid in some parts; however, most of the extraction is invalid
  2: Valid in some parts; however, some parts are invalid
  3: Valid
Table 4. Evaluation results for the accuracy of SAL task extraction.

Method: Score
  IBRT: 2.7
  BRT: 2.5
  BHC: 2.1
  LDA-Hawkes: 2.5
  BestLink-SVM: 2.4
  Cluster-SVM: 2.3
  QC-HTC: 1.9
  QC-WCC: 2.2
Table 5. Evaluation results for the accuracy of the hierarchy.

Method: Score
  IBRT: 2.3
  BRT: 2.0
  BHC: 0.9
  LDA-Hawkes: 1.6
  BestLink-SVM: 1.4
  Cluster-SVM: 1.4
  QC-HTC: 1.5
  QC-WCC: 1.5

