Article

Multilevel Context Learning with Large Language Models for Text-Attributed Graphs on Social Networks

Electronic Information School, Wuhan University, Wuhan 430072, China
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(3), 286; https://doi.org/10.3390/e27030286
Submission received: 16 January 2025 / Revised: 23 February 2025 / Accepted: 8 March 2025 / Published: 10 March 2025
(This article belongs to the Special Issue Spreading Dynamics in Complex Networks)

Abstract

There are complex graph structures and rich textual information on social networks. Text provides important information for various tasks, while graph structures offer multilevel context for the semantics of the text. Contemporary researchers tend to represent these kinds of data by text-attributed graphs (TAGs). Most TAG-based representation learning methods focus on designing frameworks that convey graph structures to large language models (LLMs) to generate semantic embeddings for downstream graph neural networks (GNNs). However, these methods only provide text attributes for nodes, which fails to capture the multilevel context and leads to the loss of valuable information. To tackle this issue, we introduce the Multilevel Context Learner (MCL) model, which leverages multilevel context on social networks to enhance LLMs’ semantic embedding capabilities. We model the social network as a multilevel context textual-edge graph (MC-TEG), effectively capturing both graph structure and semantic relationships. Our MCL model leverages the reasoning capabilities of LLMs to generate semantic embeddings by integrating these multilevel contexts. The tailored bidirectional dynamic graph attention layers are introduced to further distinguish the weight information. Experimental evaluations on six real social network datasets show that the MCL model consistently outperforms all baseline models. Specifically, the MCL model achieves prediction accuracies of 77.98%, 77.63%, 74.61%, 76.40%, 72.89%, and 73.40%, with absolute improvements of 9.04%, 9.19%, 11.05%, 7.24%, 6.11%, and 9.87% over the next best models. These results demonstrate the effectiveness of the proposed MCL model.

1. Introduction

The rise of social networks has changed the way people acquire information, leading to a surge in online users [1]. For example, X (formerly known as Twitter) is one of the world’s largest social media platforms. Users can share short text messages, commonly known as “tweets” (officially “posts”), and repost or comment on other users’ content. Similarly, Sina Weibo is one of the most popular microblog platforms in China, similar to X but with some localized features. Users on such social platforms publish a large amount of text to express emotions, intentions, and opinions, and their interaction behaviors, such as reposting and commenting, create complex graph structures.
Text plays a crucial role in information propagation within the graph, while the graph structure provides abundant context for interpreting the text’s meaning. Therefore, analyzing text based on the graph structure can provide valuable information for social network tasks, such as user classification [2], personalized recommendations [3], and community detection [4].
Due to the rich textual information and complex graph structures present on social networks, recent research in this field has focused on combining natural language processing (NLP) techniques and network science [5,6,7]. Contemporary researchers tend to represent such social network data by text-attributed graphs (TAGs). A TAG is a graph where nodes or edges are associated with text, commonly found in domains like citation networks [8,9], web page hyperlink networks [10], and social networks [5]. Unlike traditional NLP methods that transform text attributes into shallow or hand-crafted features, such as bag-of-words [11] or skip-gram [12], the core of TAG representation learning lies in the integration of graph structure and textual information. Given that graph neural networks (GNNs) excel at capturing graph structures and large language models (LLMs) perform well in various natural language processing tasks, most existing TAG-based methods focus on frameworks that convey graph structures to LLMs in order to generate semantic embeddings for GNNs. For example, SimTeG [13] fine-tunes an LLM to generate semantic embeddings for a GNN through a consistent downstream task loss, while LMaaS [14] utilizes a pre-trained language model (PLM) as an interpreter which transforms the explainable texts generated by the LLM into embedding vectors for the GNN. DGTL [15] inputs the text information encoded by an LLM into a disentangled GNN to capture the graph structural information, then feeds the learned representation vectors into the LLM predictor.
However, these methods only provide text attributes for nodes, which fail to capture the multilevel context on social networks and lead to the loss of valuable information from the outset. The neglected context is crucial in the analysis of users’ posts. First, online accounts typically include supplementary personal descriptions, such as occupation, hobbies, and education, which are essential for understanding their posts. Meanwhile, interactions such as reposts and comments provide important local context to clarify the meaning of individual texts. Furthermore, users’ current topics offer significant global context, as the same expression may have different meanings in different topics. As shown in Figure 1, when a user describes something as “unconventional” on the social network, it could have opposite meanings depending on the context. If it is used to describe a piece of art, it could express admiration. If used to describe food, it is more likely to be a subtle criticism. The different levels of context mentioned above, i.e., the personal context, the local context, and the global context, can provide important information to clarify the meaning. Therefore, it is essential to capture and utilize the multilevel context for semantic embeddings of users’ posts.
To capture the complex multilevel context and generate semantic embeddings for downstream tasks, the following challenges must be addressed: (1) How to model both multilevel context and graph structures in TAGs.
TAGs that only provide text attributes for nodes fail to capture multilevel contexts, which in turn affects the quality of the semantic embeddings. Firstly, providing only text attributes for user nodes fails to differentiate between personal descriptions and posts. Additionally, interactions such as reposts and comments between users form text pairs, but classical TAGs fail to capture the semantic relationships based on these text pairs. For example, when two users engage in interactions based on different text pairs, classical TAGs struggle to distinguish between them. Therefore, it is essential to design TAGs that capture more detailed information, enabling the simultaneous modeling of multilevel context and graph structure. (2) How to leverage the multilevel context to generate semantic embeddings for downstream tasks. Enabling LLMs to leverage multilevel context is also challenging because the multilevel context on social networks is distinguished not only in terms of granularity but also in form. The personal context is reflected at the node level and may vary significantly in form across different users. The local context is represented through the edges, and the complex semantic relationships are difficult to describe using natural language. Additionally, the global context is embodied throughout the overall graph and does not have predefined descriptions. Therefore, directly applying the multilevel context as prompts for LLMs is not practical. This highlights the importance of developing a more effective approach to utilize the multilevel context based on the designed TAG.
To solve these challenges, we propose the Multilevel Context Learner (MCL) model, which can leverage multilevel context on social networks to enhance the semantic embedding capabilities of LLMs for downstream tasks. First, we model the social network as a multilevel context textual-edge graph (MC-TEG), with personal descriptions as node attributes and interaction texts as edge attributes. Node attributes capture personal context, while edge attributes provide the basis for utilizing local context. Second, the proposed MCL leverages LLMs’ reasoning abilities to infer and update global context from posted text. Then, it combines the local context through relevant edges with personal and global context to generate semantic embeddings for each node. Moreover, a group of tailored bidirectional dynamic graph attention network (GAT) layers [16] are developed to further distinguish the weight information on social networks. Two types of attention are trained separately to collectively represent the relationships among nodes. To demonstrate the effectiveness of our proposed model, we evaluated it on the fundamental graph representation learning task: node classification. Extensive experiments on real social network datasets demonstrate the effectiveness of MCL.
Our main contributions are summarized as follows:
  • We model the social network as a multilevel context textual-edge graph (MC-TEG). Personal descriptions are regarded as node attributes, while interaction texts are treated as edge attributes, effectively capturing both graph structure and semantic relationships.
  • We propose the Multilevel Context Learner (MCL) model, which utilizes LLMs’ reasoning abilities to leverage multilevel context for generating semantic embeddings. The proposed bidirectional dynamic graph attention layers further distinguish the weight information.
  • Experimental evaluations on six social network datasets demonstrate the effectiveness of the proposed MCL model, which consistently outperforms all baseline methods across all datasets.

2. Related Works

In this section, we review related works on TAG representation learning and compare the existing approaches with the proposed MCL model on social networks.

2.1. Text-Attributed Graphs

TAGs are employed to represent structured data where nodes or edges are associated with text attributes [17], which are ubiquitous across various domains, including citation networks [8,9], web page hyperlink networks [10], and social networks [5]. TAGs have attracted considerable attention in the field of graph machine learning in recent years [13,18,19]. GNNs have been proven to be an effective framework for graph machine learning following the neighborhood aggregation scheme [20,21]. Classical TAG representation learning methods based on GNNs convert text attributes into shallow or hand-crafted features such as bag-of-words [11] or skip-gram [12] representations. The bag-of-words model represents text by counting the occurrences of each word within a document, disregarding grammar and order. The skip-gram model focuses on predicting the surrounding words within a fixed window to capture semantics. However, these methods are unable to capture polysemy and the semantic relationships between words, resulting in only basic semantic embeddings.
With the development of natural language processing (NLP) technologies, pre-trained language models (PLMs) [22] and topic models (TMs) [23,24] are used to handle text attributes, which can learn contextualized language representations and document embeddings. PLMs achieve deeper semantic understanding by first being trained on large-scale datasets and then fine-tuning for specific tasks. For example, TextGNN [25] extends the twin tower structured encoders of PLM with complementary graph information from user historical behaviors to generate better text representations. AdsGNN [26] leverages PLMs to obtain text representations at the granularities of nodes, edges, and tokens, respectively. In contrast to PLMs, TMs assume that texts contain a small number of latent topics that summarize distinct and broad concepts. Adjacent-Encoder [27] and DBN [28] use latent topics to generate the textual content of a document’s neighbors based on GNNs. Although PLM-based models and TM-based models can achieve deeper semantic embeddings, the encoding process remains independent of structural information and therefore does not fully leverage the complex semantic relationships in TAGs. Consequently, there is a growing need for the integration of text attributes and graph structure for TAG representation learning.

2.2. LLMs for TAGs

With the remarkable capabilities demonstrated by LLMs in various NLP tasks, recent research studies have explored the use of LLMs to address graph-related tasks. By treating graph problems as conventional NLP problems, pioneer researchers convert graph data into a representation that is comprehensible to LLMs on synthetic graph tasks. NLGraph [29] treats graphs as natural language descriptions and evaluates the performance of LLMs on various graph reasoning tasks, including connectivity, shortest path, maximum flow, and the simulation of GNNs. GPT4Graph [30] also converts graphs into specific vectors to evaluate the capabilities of LLMs in structural and semantic understanding tasks. InstructGLM [31] enhances the vocabulary by introducing new tokens for each node in the TAG, allowing LLMs to be fine-tuned for handling various TAG tasks in a generative way.
More recently, there has been a growing exploration into applying LLMs to TAGs. SimTeG [13] fine-tunes the LLM to generate semantic embeddings for the GNN through the consistent downstream loss function such as link prediction or node classification. LMaaS [14] utilizes a PLM model as the interpreter, converting the explanation and prediction from LLMs into embedding vectors for GNNs. DGTL [15] inputs the text information encoded by the LLM into disentangled GNNs to capture the structural information, and then feeds the learned representation vectors into the LLM predictor.
However, most existing representation learning approaches on TAGs focus on designing frameworks to convey graph structures to LLMs while overlooking the graph modeling process. When applied to social networks, traditional TAG structures fail to accurately capture the heterogeneous texts and the structural information. As a result, although these methods have designed sophisticated frameworks to enable LLMs to leverage graph structures, a significant amount of valuable information is lost in modeling the graph structures, which leads to limitations in fully exploiting the potential power of LLMs. In our method, we propose the Multilevel Context Learner (MCL) model, which models the social network as a multilevel context textual-edge graph (MC-TEG). MCL captures both graph structure and semantic relationships on social networks to enhance the reasoning capabilities of LLMs for downstream tasks.

3. Problem Formulation and Preliminaries

In this section, we introduce notation and formalize some concepts related to textual-edge graphs, graph neural networks, and large language models.

3.1. Textual-Edge Graphs

A textual-edge graph can be formulated as $G = (V, E, y)$, where $V$ denotes the set of nodes, $E \subseteq V \times V$ denotes the set of edges, and $y$ denotes the labels of nodes. In a TEG, each node $v \in V$ is associated with a text description, and each edge $e_{ij} \in E$ also contains its own text description, which is absent in traditional TAGs. These textual descriptions provide rich contextual information about the complex relationships between nodes, enabling a more detailed and comprehensive representation of data relations than traditional TAGs.
In this paper, we focus on node classification, one of the most typical tasks on graphs. We adopt the semi-supervised setting, where all the text information and the adjacency matrix $A$ are given during the training procedure, while only a part of the node labels $\{y_u \mid u \in V_{tr}\}$ are provided, where $V_{tr}$ is the training node set. The task aims at predicting the labels $\{y_u \mid u \in V_{te}\}$, where $V_{te}$ is the set of test nodes.

3.2. Large Language Models

LLMs have introduced a new paradigm for task adaptation known as “pre-train, prompt, and predict”. In this paradigm, the LLM is first pre-trained on a large corpus of text data to learn general language representations. Instead of fine-tuning the model, a natural language prompt that defines the task and context is then provided to the model. The prompt can be presented in various formats, ranging from a concise sentence to a more extensive passage, and may incorporate supplementary details or constraints to direct the model’s behavior accordingly. Based on the prompt and input tokens, the model generates the output directly. Formally, for the sequence of input tokens $x = (x_1, x_2, \ldots, x_q)$ and the prompt $prom$, we can concatenate them into a new sequence $\hat{x} = (prom, x_1, x_2, \ldots, x_q)$. Then, the probability of the output sequence $s = (s_1, s_2, \ldots, s_m)$ given $\hat{x}$ is
$$p(s \mid \hat{x}) = \prod_{i=1}^{m} p(s_i \mid s_{<i}, \hat{x})$$
where $s_{<i}$ represents the prefix of sequence $s$ up to position $i-1$, and $p(s_i \mid s_{<i}, \hat{x})$ represents the probability of generating token $s_i$ given $s_{<i}$ and $\hat{x}$.
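As a minimal illustration of this factorization, the following sketch scores an output sequence token by token; the `next_token_prob` callable is a hypothetical stand-in for whatever next-token probability an autoregressive LLM exposes, not a specific API.

```python
import math
from typing import Callable, List

def sequence_log_prob(prom: List[str], x: List[str], s: List[str],
                      next_token_prob: Callable[[List[str], str], float]) -> float:
    """Log-probability of the output s given the concatenated input (prom, x_1, ..., x_q).

    `next_token_prob(context, token)` is a hypothetical callable returning
    p(token | context); any autoregressive LLM exposes an equivalent quantity.
    """
    x_hat = prom + x                      # concatenate the prompt and the input tokens
    log_p = 0.0
    for i, token in enumerate(s):
        context = x_hat + s[:i]           # condition on x^ and the prefix s_{<i}
        log_p += math.log(next_token_prob(context, token))
    return log_p
```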

3.3. Graph Neural Networks

Graph neural networks are a class of deep learning models specifically designed to handle graph-structured data [32]. GNNs extend the capabilities of traditional neural networks, enabling direct operation on graph structures, thereby capturing complex relationships and dependencies between nodes. GNNs typically follow a message-passing scheme where nodes aggregate information from their neighbors in each layer, formulated as
$$m_i^{(l)} = \mathrm{Agg}\left(\left\{ h_j^{(l)} \mid j \in \mathcal{N}_i \right\}\right)$$
$$h_i^{(l+1)} = \mathrm{Update}\left(h_i^{(l)}, m_i^{(l)}\right)$$
where $h_i^{(l)}$ is the representation vector of node $i$ at the $l$-th layer, $\mathcal{N}_i$ is the set of neighbors of node $i$, $\mathrm{Agg}(\cdot)$ is the aggregation function, and $\mathrm{Update}(\cdot)$ is an updating function that typically includes linear functions and activation functions. For node classification, the output $\hat{y}$ of GNNs is a normalized vector, where the dimension corresponds to the number of node categories and the values represent the probabilities of the node belonging to the corresponding category.
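The following sketch illustrates one round of this message-passing scheme, assuming mean aggregation and a linear-plus-ReLU update; these are illustrative choices rather than the specific operators used by any particular GNN in this paper.

```python
import numpy as np

def message_passing_layer(H, neighbors, W, b):
    """One message-passing layer.

    H:         (N, d) node representations at layer l.
    neighbors: dict mapping node i to the list of its neighbor indices N_i.
    W, b:      parameters of the Update step (a linear map followed by ReLU).
    """
    N, d = H.shape
    H_next = np.zeros((N, W.shape[1]))
    for i in range(N):
        if neighbors.get(i):
            m_i = H[neighbors[i]].mean(axis=0)   # Agg: mean over neighbor states
        else:
            m_i = np.zeros(d)                    # isolated node: empty message
        H_next[i] = np.maximum(0.0, (H[i] + m_i) @ W + b)  # Update(h_i, m_i)
    return H_next
```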

4. Method

In this section, we describe our proposed MCL model for node classification on social networks. An overall framework of our method is shown in Figure 2; it involves three main steps: (1) constructing an MC-TEG based on social networks; (2) utilizing LLMs to extract multilevel contexts and generate embeddings; and (3) training a bidirectional dynamic graph attention for prediction.

4.1. MC-TEG Construction Based on Social Networks

The first step in our method was to construct an MC-TEG that captures both graph structure and semantic relationships based on social network data. We collected textual data from real social networks based on tags and keywords and obtained personal descriptions of corresponding accounts. The dataset may include considerable noise due to users attaching irrelevant tags to their posts in an effort to increase visibility and engagement. Additionally, some keywords may appear in discussions across multiple topics and could carry different meanings. Given the scale of the dataset, manually filtering out this noise is not feasible, which highlights the necessity of using LLMs to leverage multilevel context to obtain accurate semantic embeddings.
We treat users as nodes in the MC-TEG, with personal descriptions serving as the text attributes $t_i$ for node $i$. The interaction texts between node $i$ and node $j$ are represented as pairs of original and repost or comment texts, serving as the corresponding edge attributes $t_{ij}$. Notably, $t_{ij}$ and $t_{ji}$ are distinct, as the edges are directed. Moreover, if there are multiple interactions between users, the corresponding $t_{ij}$ will include multiple text pairs.
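A minimal sketch of how such an MC-TEG could be stored is given below, assuming plain Python containers; the class and method names are illustrative and do not reflect the authors' implementation.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MCTEG:
    """Directed multilevel-context textual-edge graph."""
    node_text: Dict[int, str] = field(default_factory=dict)             # t_i: personal description of user i
    edge_text: Dict[Tuple[int, int], List[Tuple[str, str]]] = field(
        default_factory=lambda: defaultdict(list))                       # t_ij: list of (original, reaction) pairs

    def add_user(self, i: int, description: str) -> None:
        self.node_text[i] = description

    def add_interaction(self, i: int, j: int, original: str, reaction: str) -> None:
        # Directed: t_ij and t_ji are kept separately; repeated interactions
        # between the same pair simply append more text pairs to t_ij.
        self.edge_text[(i, j)].append((original, reaction))

# Usage: one user reacts to another user's post.
g = MCTEG()
g.add_user(1, "Foodie exploring street food.")
g.add_user(2, "Art student.")
g.add_interaction(1, 2, "Tried the new fusion place today.", "That looks unconventional!")
```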
As shown in Figure 3, our MC-TEG provides textual attributes for both nodes and edges. The user descriptions serve as the textual attributes of the nodes, while the textual pairs representing interactions between users act as the textual attributes of the edges. The textual pairs on the edges not only effectively handle multiple interactions, but also distinguish the directionality of these interactions. Additionally, this structure facilitates the utilization of personal contexts through textual attributes on nodes and enhances the updating and utilization of global contexts.
In summary, the constructed MC-TEG preserves the graph structure while providing a foundation for LLMs to leverage multilevel context.

4.2. LLM-Based Multilevel Context Extraction and Embedding Generation

In this subsection, we utilize the powerful reasoning and comprehension abilities of LLMs to leverage personal context, local context, and global context, and then generate semantic embeddings based on the multilevel context for downstream GNNs.

4.2.1. Personal Context

The challenge in extracting user context lies in the fact that personal descriptions are unstructured. There are differences in style and content across personal descriptions of different users, making it difficult to unify them. This issue can be effectively addressed by designing specific prompts for the LLM. We design a fixed template and set key information as tokens that the LLM needs to predict based on personal descriptions. At the same time, we determined a set of default values in the prompt to handle cases where some personal descriptions may lack the key information. This structured output standardizes the format of the personal context for each node, facilitating subsequent utilization. The personal context of node i is
$$c_i = f_{\mathrm{pr}}\left(t_i\right)$$
where $c_i$ is the structured personal context of node $i$, and $f_{\mathrm{pr}}$ is the LLM prediction function for personal context.
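The following sketch shows the kind of fixed template with reserved fields and default values described above; the field names, defaults, and wording are assumptions for illustration, not the exact prompt used in the paper.

```python
PERSONAL_CONTEXT_TEMPLATE = (
    "Extract the following fields from the user's self-description. "
    "If a field is not mentioned, output the default value.\n"
    "Fields: occupation (default: 'unknown'), hobbies (default: 'none'), "
    "education (default: 'unknown').\n"
    "Description: {description}\n"
    "Answer as: occupation=<...>; hobbies=<...>; education=<...>"
)

def personal_context_prompt(description: str) -> str:
    # c_i = f_pr(t_i): the LLM fills the reserved fields, yielding a
    # structured personal context with a uniform format across users.
    return PERSONAL_CONTEXT_TEMPLATE.format(description=description)
```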

4.2.2. Local Context

We treat the 1-hop interaction texts among users as the local context. This is reasonable because it reflects the most direct semantic relationships, and increasing the number of hops would lead to an exponential increase in complexity. In the designed MC-TEG, interaction texts are already treated as edge attributes, which effectively reflect the local context. At the same time, the directionality of the edges also introduces distinctions in the local context. We denote $r_{in}$ and $r_{out}$ as the local contexts when node $i$ acts as the target node and the source node, respectively, which is reflected in the differing positions of node $i$’s text within the text pairs $t_{ij}$ or $t_{ji}$.

4.2.3. Global Context

Online users often engage in discussions based on specific topics, which provides valuable global context. However, these topics are not explicitly predefined and are accompanied by significant noise. We fully utilize the powerful reasoning capabilities of LLMs by constructing prompts from the text pairs on the edges to infer the topics within the graph. Since the input consists of text pairs, some important texts may be repeatedly included. LLMs can reduce the impact of noise in the topic-update process by effectively leveraging patterns identified in repeated texts. The global context is inferred as follows:
$$T = f_{\mathrm{gl}}\left(\bigcup_{e_{ij} \in E} t_{ij}\right)$$
where $T$ is the inferred global context, $f_{\mathrm{gl}}$ is the LLM function for global context inference, $\bigcup$ denotes the union taken over all edges, and $t_{ij}$ is the set of text pairs associated with edge $e_{ij}$.
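One plausible way to assemble the topic-inference prompt from the union of edge text pairs is sketched below, with repeated texts deduplicated and annotated by their frequency so that the LLM can exploit recurring patterns; this construction is an assumption for illustration, not the authors' exact procedure.

```python
from collections import Counter
from typing import Dict, List, Tuple

def global_context_prompt(edge_text: Dict[Tuple[int, int], List[Tuple[str, str]]],
                          top_k: int = 50) -> str:
    """Build a topic-inference prompt from the union of all edge text pairs."""
    counts = Counter()
    for pairs in edge_text.values():
        for original, reaction in pairs:
            counts[original] += 1   # widely reposted originals recur across many edges
            counts[reaction] += 1
    lines = [f"[x{c}] {text}" for text, c in counts.most_common(top_k)]
    return ("The following posts were exchanged on a social network; the prefix "
            "shows how often each text appears. Infer the main topics under "
            "discussion and summarize them briefly.\n" + "\n".join(lines))
```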

4.2.4. Semantic Embedding

To fully leverage the LLM’s capability to understand and model complex patterns and semantics in the MC-TEG, we inject the associated multilevel context into the LLM, which includes $c_i$ (personal context), $r_{in}$ and $r_{out}$ (local context), and $T$ (global context). Each context level provides valuable information that helps the LLM form a comprehensive understanding of the node’s semantics within the MC-TEG. Specifically, we reserve a set of token positions for placing the multilevel context in the prompt input. Although the different levels of context are not unified in form, this input layout presents them to the LLM as if they were aligned in the natural semantic space that humans can understand, as shown in Figure 4. Through this approach, the LLM benefits from a comprehensive understanding of both the graph structure and the textual information, generating semantic embeddings for downstream GNN tasks. These semantic embeddings facilitate a direct gradient flow to the GNNs, resulting in more accurate and informative gradient updates. This fusion of language modeling and graph representation learning enables our MCL model to leverage the multilevel context captured by the LLM alongside the structural patterns learned by the GNNs, driving effective learning and enhanced performance.
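A sketch of how the reserved token positions for the multilevel context might be laid out in the embedding prompt is shown below; the field labels and their ordering are assumptions, and Figure 4 shows the actual prompt layout used in the paper.

```python
EMBEDDING_PROMPT = (
    "Global context (current topics): {global_ctx}\n"
    "Personal context of the user: {personal_ctx}\n"
    "Texts received by the user as the target node: {r_in}\n"
    "Texts sent by the user as the source node: {r_out}\n"
    "Post to embed: {post}\n"
    "Considering all of the above, explain the stance and intended meaning of the post."
)

def build_embedding_prompt(global_ctx: str, personal_ctx: str,
                           r_in: str, r_out: str, post: str) -> str:
    # The LLM's response to this prompt (or its internal representation of it)
    # serves as the node's semantic embedding for the downstream GNN.
    return EMBEDDING_PROMPT.format(global_ctx=global_ctx, personal_ctx=personal_ctx,
                                   r_in=r_in, r_out=r_out, post=post)
```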

4.3. Prediction by Bidirectional Dynamic Graph Attention Layers

As a classic GNN, the graph convolution mechanism uses a uniform weight matrix to aggregate features, which is unsuitable for social networks because different neighbors have distinctly varying impacts on different users. The attention mechanism, initially employed in computer vision [33] and later in NLP [22], can effectively address this issue. It has subsequently proven competitive in graph analysis and has led to the popularity of graph attention networks [34]. An attention score is denoted as $\alpha_{ij}$, which indicates the importance of neighbor $j$ to node $i$. The unnormalized attention score for edge $(i, j)$ in layer $m$ is computed as follows:
$$e\left(h_i^{(m)}, h_j^{(m)}\right) = \mathrm{LeakyReLU}\left(a^{T} \cdot \left[W h_i^{(m)} \,\Vert\, W h_j^{(m)}\right]\right)$$
where $a \in \mathbb{R}^{2 d_{m+1}}$ and $W \in \mathbb{R}^{d_{m+1} \times d_m}$ are learned in the training process, and $\Vert$ represents vector concatenation. After computing $e(h_i^{(m)}, h_j^{(m)})$ for all $j \in \mathcal{N}_i$, a softmax layer is used to normalize them and obtain the attention score $\alpha_{ij}$:
$$\alpha_{ij} = \frac{\exp\left\{e\left(h_i^{(m)}, h_j^{(m)}\right)\right\}}{\sum_{k \in \mathcal{N}_i} \exp\left\{e\left(h_i^{(m)}, h_k^{(m)}\right)\right\}}$$
Then, the attention-weighted average over $\mathcal{N}_i$ is used to update the representation of node $i$ in layer $m+1$:
$$h_i^{(m+1)} = \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij} \cdot W h_j^{(m)}\right)$$
where $\sigma$ is a nonlinear function. Although this attention mechanism can assign different weights to neighbors, it still has a limitation: the ranking of attention scores over neighbors is the same for every node, differing only in absolute values. This significant limitation diverges from the nature of social networks, where the importance ranking of neighbors varies across users. To address this limitation, we use dynamic attention [16], where the order of operations in the scoring function is modified as follows:
$$e\left(h_i^{(m)}, h_j^{(m)}\right) = a^{T}\, \mathrm{LeakyReLU}\left(W \cdot \left[h_i^{(m)} \,\Vert\, h_j^{(m)}\right]\right)$$
This simple modification makes a significant difference in the expressiveness of the attention function.
However, simply using dynamic attention is still not adequate to simulate the characteristics of social networks. In Equation (10), the attention scores are normalized among all neighbors of the node without distinguishing between out-degree and in-degree. But on social networks, retweeting and being retweeted not only represent the direction of edges but also reflect the active and passive nature of user behavior, thus requiring attention to be distinguished accordingly. Therefore, we propose bidirectional dynamic graph attention layers. The out-degree attention and in-degree attention are trained with distinctions during the normalization process as follows:
$$e^{\mathrm{Out}}\left(h_i, h_j\right) = a_1^{T}\, \mathrm{LeakyReLU}\left(W_1 \cdot \left[h_i \,\Vert\, h_j\right]\right)$$
$$e^{\mathrm{In}}\left(h_i, h_j\right) = a_2^{T}\, \mathrm{LeakyReLU}\left(W_2 \cdot \left[h_i \,\Vert\, h_j\right]\right)$$
$$\alpha_{ij}^{\mathrm{Out}} = \frac{\exp\left(e^{\mathrm{Out}}\left(h_i, h_j\right)\right)}{\sum_{k \in \mathcal{N}_i^{\mathrm{Out}}} \exp\left(e^{\mathrm{Out}}\left(h_i, h_k\right)\right)}$$
$$\alpha_{ij}^{\mathrm{In}} = \frac{\exp\left(e^{\mathrm{In}}\left(h_i, h_j\right)\right)}{\sum_{k \in \mathcal{N}_i^{\mathrm{In}}} \exp\left(e^{\mathrm{In}}\left(h_i, h_k\right)\right)}$$
where $\mathcal{N}_i^{\mathrm{Out}}$ and $\mathcal{N}_i^{\mathrm{In}}$ represent the out-degree neighbors and in-degree neighbors, respectively. $a_1$, $a_2$, $W_1$, and $W_2$ denote different training parameters, while $h_i$ is shared between the out-degree layer and the in-degree layer. The representation vector is updated as the weighted sum of the two types of attention as follows:
$$h_i^{(m+1)} = \sigma\left(\lambda \sum_{j \in \mathcal{N}_i^{\mathrm{Out}}} \alpha_{ij}^{\mathrm{Out}} \cdot W_1 h_j^{(m)} + (1 - \lambda) \sum_{j \in \mathcal{N}_i^{\mathrm{In}}} \alpha_{ij}^{\mathrm{In}} \cdot W_2 h_j^{(m)}\right)$$
where $\lambda$ is a hyperparameter. $h_i^{(m+1)}$ is shared in the next layer, and the node representation is iteratively updated according to Equations (10)–(14). Due to the presence of multiple categories, we use the cross-entropy loss function:
$$\mathcal{L} = -\frac{1}{|V_{tr}|} \sum_{v \in V_{tr}} \sum_{j=1}^{K} y_{vj} \log\left(\hat{y}_{vj}\right)$$
where $V_{tr}$ is the training node set, $|V_{tr}|$ denotes its size, $K$ is the number of categories, $y_{vj}$ is the true label vector element for node $v$ and category $j$, and $\hat{y}_{vj}$ is the predicted probability of node $v$ belonging to category $j$. To prevent overfitting, we add a regularization term. The final loss function is as follows:
$$\mathcal{L}_{total} = \mathcal{L} + \sum_{i=1}^{2} \theta_i \left\Vert W_i \right\Vert_2^2$$
where $\Vert W_i \Vert_2^2$ is the squared L2 norm of each weight matrix $W_i$, and $\theta_i$ is the corresponding coefficient.
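A compact sketch of a single bidirectional dynamic attention layer is given below in PyTorch, following the scoring, normalization, and update rules above. The single-head setting, the dense adjacency-matrix interface, the choice of ReLU as $\sigma$, and the splitting of each $W$ into left/right blocks (so that $W[h_i \Vert h_j] = W_l h_i + W_r h_j$) are simplifications made for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalDynamicAttention(nn.Module):
    """One bidirectional dynamic graph attention layer (single head).

    The out-degree branch uses (a1, W1) and the in-degree branch uses (a2, W2);
    the node representations h are shared between the two branches.
    """
    def __init__(self, d_in: int, d_out: int, lam: float = 0.5):
        super().__init__()
        # Each W is split into left/right blocks so that W [h_i || h_j] = W_l h_i + W_r h_j.
        self.W1_l, self.W1_r = nn.Linear(d_in, d_out, bias=False), nn.Linear(d_in, d_out, bias=False)
        self.W2_l, self.W2_r = nn.Linear(d_in, d_out, bias=False), nn.Linear(d_in, d_out, bias=False)
        self.a1 = nn.Linear(d_out, 1, bias=False)
        self.a2 = nn.Linear(d_out, 1, bias=False)
        self.lam = lam                                   # hyperparameter lambda

    def _attend(self, h, adj, W_l, W_r, a):
        # Dynamic scoring: e(h_i, h_j) = a^T LeakyReLU(W [h_i || h_j])
        s = W_l(h).unsqueeze(1) + W_r(h).unsqueeze(0)    # (n, n, d_out) pairwise terms
        e = a(F.leaky_relu(s)).squeeze(-1)               # (n, n) unnormalized scores
        e = e.masked_fill(adj == 0, float("-inf"))       # keep only real neighbors
        alpha = torch.softmax(e, dim=1).nan_to_num()     # rows without neighbors -> 0
        return alpha @ W_r(h)                            # sum_j alpha_ij * (W h_j)

    def forward(self, h, adj):
        # adj[i, j] = 1 iff there is a directed edge i -> j, so adj gives the
        # out-degree neighbors of i and adj.T gives its in-degree neighbors.
        h_out = self._attend(h, adj, self.W1_l, self.W1_r, self.a1)
        h_in = self._attend(h, adj.t(), self.W2_l, self.W2_r, self.a2)
        return torch.relu(self.lam * h_out + (1.0 - self.lam) * h_in)
```

Stacking such layers and feeding the final representations through a softmax classifier, trained with the cross-entropy loss plus the L2 term above, would reproduce the overall prediction pipeline in outline.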

5. Experiment

In this section, we compare the MCL model with several baseline methods and demonstrate its effectiveness for node classification on social networks. The experiments were conducted on a server running Ubuntu 22.04 LTS (Canonical Ltd., London, UK) with 96 CPU cores at 2.5 GHz; the MCL model used an NVIDIA GeForce RTX 4090 GPU (NVIDIA Corporation, Santa Clara, CA, USA).

5.1. Datasets

To verify the validity and robustness of our model, we collected data related to political elections from X (formerly Twitter) and data related to school food safety from Sina Weibo. X is one of the world’s largest social media platforms, while Sina Weibo is one of the most popular microblog platforms in China, similar to X but with localized features. Users on such social media platforms can add words or phrases starting with “#” (also known as hashtags) to categorize their posts and make them discoverable to a wider audience. To collect the data required for this study, we utilized publicly available APIs provided by the social media platforms (https://developer.twitter.com (accessed on 10 January 2024) and https://open.weibo.com (accessed on 3 June 2023)). These APIs enabled us to systematically retrieve relevant data, including posts, comments, and metadata, based on specific hashtags, all in a structured and compliant manner. All data were carefully de-identified to ensure that no personal information was leaked. Subsequently, these data underwent preprocessing, which included handling missing values and excluding posts in languages other than the target language. We constructed the network structure based on symbols in the text that reflect reposting and replying relationships, such as [ r t t ] and ‘@’. The detailed description of the datasets is as follows:
  • Datasets on X. These three English datasets record discussions on three topics related to the 2024 United States Presidential Election on X. They include information about the posting users, the content of the posts, posting times, retweet counts, and content relationships. The recording period covers discussions from 10th January 2024 to 10th February 2024. Users in these datasets are categorized into three classes: supporters of Trump, neutral users, and supporters of Harris. The three datasets are arranged in ascending order of size as X_A, X_B, and X_C. X_A contains data related to the hashtags #DemocraticParty and #RepublicanParty, X_B contains data related to #Harris and #Trump, and X_C contains data related to #2024UnitedStatesElections.
  • Datasets on Sina Weibo. The three Chinese datasets record discussions on three topics related to school food safety on Sina Weibo. They also include information about the posting users, the content of the posts, posting times, retweet counts, and content relationships. The recording period spans 3rd June 2023 to 30th June 2023. Users in these datasets are categorized into three classes based on their level of support for students: supporters, neutral users, and opponents. The three datasets are arranged in ascending order of size as Weibo_A, Weibo_B, and Weibo_C. Weibo_A contains data related to hashtags about media coverage, Weibo_B contains data related to hashtags about school statements, and Weibo_C contains data related to hashtags about official statements.
The statistics of all datasets are shown in Table 1.

5.2. Experimental Setup

In the proposed MCL model, we used GPT-4 as the LLM to generate semantic embeddings. We compared our model with baselines from the following two categories:
  • GNN Predictors. We considered GNN-based predictors whose node features on the TAG are enhanced by pre-trained language models; our baselines use mT5 [35] and DeBERTa [36] as the text encoders. We selected the most suitable GNN backbones based on the descriptions of their methods.
  • LLM Predictors. We also considered using different LLMs as baselines, where the text is directly input into the models for prediction. We performed predictions using Llama [37], ChatGLM [38], QwenLM [39], and ERNIE BOT [40].
For all methods, we adopted the classical semi-supervised learning setting, randomly selecting 10% of the data from each category to form the training set. We directly used classification accuracy as the evaluation metric.
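A sketch of the per-class 10% training split described above is shown below, assuming node labels are available as a Python list; the helper name and fixed seed are illustrative.

```python
import random
from collections import defaultdict
from typing import List

def stratified_train_split(labels: List[int], train_frac: float = 0.1,
                           seed: int = 0) -> List[int]:
    """Randomly pick `train_frac` of the nodes from each class as the training set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for node, label in enumerate(labels):
        by_class[label].append(node)
    train_nodes = []
    for nodes in by_class.values():
        rng.shuffle(nodes)
        k = max(1, round(train_frac * len(nodes)))
        train_nodes.extend(nodes[:k])
    return sorted(train_nodes)
```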

5.3. Overall Evaluation

The results of our comparison with the baselines are shown in Table 2. From the table, we can see that LLM predictors are generally better than GNN predictors, indicating that text on social networks cannot be adequately represented by pre-trained language models alone. The reasoning ability of LLMs allows them to adapt and better understand these complexities. Our MCL model outperforms all baselines on all six datasets, demonstrating its effectiveness on both Chinese and English datasets. Unlike LLM-based predictors, our approach leverages the multilevel context on social networks, which is crucial for obtaining accurate text embeddings and enhancing the comprehensibility of the model’s decision-making process. In addition, we propose tailored bidirectional dynamic graph attention layers to further distinguish the weight information among nodes, which aligns more closely with the structural characteristics of social networks. Our MCL model excels by fully leveraging multilevel context and graph structure within social networks. As the dataset expands, its performance remains robust and relatively stable, whereas the baselines typically encounter a decline in effectiveness.

5.4. Ablation Study

In this section, we designed ablation experiments to separately analyze the contributions of different levels of context and the bidirectional dynamic attention to the final results.

5.4.1. The Multilevel Context Layers

To analyze the impact of different contexts on semantic embeddings, we conducted ablation experiments by removing token positions for personal, local, and global context in the input. As shown in Figure 5, removing context at any level affects the final results. The removal of local context has the largest impact, as user interactions in social networks are closely tied to text semantics. In contrast, removing personal context has the least impact, which we attribute to the fact that most user descriptions are either irrelevant or missing, so the personal context is set to default values, resulting in minimal impact.

5.4.2. The Bidirectional Dynamic Attention Layers

We further analyzed the effect of the GNN backbone in the MCL model by replacing the bidirectional dynamic attention layer with multilayer perceptron (MLP), graph convolutional network (GCN), and GAT layers, respectively. The results of the ablation experiment are shown in Figure 6.
Our observations are as follows: (1) Our MCL model demonstrates the best performance. This indicates that the bidirectional dynamic attention layer indeed captures the structural information and is better suited for the directed nature of social networks. (2) The MLP backbone performs the worst. This is to be expected, as it cannot model graph-based dependencies and relationships, limiting its capacity to capture the complex structure of social networks. (3) The performance of the GAT backbone and GCN backbone is moderate. Although they leverage structural information, they do not distinguish the importance ranking of neighboring nodes, which diverges from the nature of social networks. (4) As the dataset size increases and the structure becomes more complex, the performance of our MCL model is not significantly affected, while the performance of the GAT and GCN architectures tends to deteriorate.
To quantify the contribution of our bidirectional dynamic graph attention layer, we sampled a network containing eight nodes and compared the weight matrices of the traditional GAT layer with those of our bidirectional dynamic attention. As shown in Figure 7, the traditional GAT assigns the highest weight to the second node for all nodes, indicating that the representation of each node is overly influenced by the second node. This does not capture the diversity of user attention in social networks. In contrast, our bidirectional dynamic attention layer produces unique weight rankings for each node, highlighting its effectiveness in capturing the complex structure of social networks.

6. Discussion

In the present work, we propose the Multilevel Context Learner (MCL) model to tackle the challenge of text-attributed graph (TAG) representation learning on social networks. Specifically, we model the social network as a multilevel context textual-edge graph (MC-TEG), effectively capturing both graph structure and multilevel context. Our method enables the large language models (LLMs) to leverage multilevel context to generate semantic embeddings for downstream tasks. The tailored bidirectional dynamic graph attention layers further capture the complex structural relationships of social networks. The impressive experimental results strongly demonstrated the effectiveness of MCL.
On the other hand, our method provides a new perspective on TAG representation learning. It emphasizes the importance of modeling TAGs for specific scenarios, which can capture valuable information for downstream tasks. Based on the constructed TAG structure, it is reasonable and effective to design models that leverage LLMs and GNNs. In future work, we will explore additional TAG structures to improve the effectiveness of TAG representation learning for other scenarios. For example, a hierarchical text-attributed graph is more suitable for capturing the hierarchical structure of citation networks, while a heterogeneous text-attributed graph is better suited for networks with different types of nodes, such as product recommendations or movie reviews. Furthermore, we will explore the adaptation of MCL to other types of networks, such as biological networks (see Appendix A for a preliminary experiment) or power grids.

Author Contributions

Conceptualization, X.C. and H.J.; Methodology, X.C.; Software, X.C.; Validation, X.C.; Data curation, R.G.; Writing—original draft preparation, X.C.; Writing—review and editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) under Grant U19B2004.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data for this article can be obtained by contacting the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Additional Experiments

To validate the performance of the MCL model on different networks, we conducted experiments on Protein–Protein Interaction (PPI) networks. We collected the network related to the nagA-2 protein of Rhodopirellula baltica from the STRING database. The dataset includes 61 nodes and 498 edges, where nodes represent proteins and edges represent various types of interactions (with 8 distinct edge types). Nodes are classified into six groups based on their functions and metabolic pathways. We adapted the construction of MC-TEGs to fit this network’s characteristics. Specifically, we used protein domains and descriptions as node features, and protein interaction types as edge features to differentiate interactions between the same nodes. We then conducted the node classification task under the same experimental setup, as shown in Table A1. The GNN predictors performed poorly due to their inability to handle the specialized terminology of protein domains. While LLM predictors can understand these terms, they cannot leverage the complex network structure. Our MCL model, however, captures both the specialized terms and the intricate context of protein interactions, delivering the best performance. These results demonstrate the generalization ability of the MCL model.
Table A1. The results on PPI dataset.
Methods     Accuracy
mT5         0.3061
DeBERTa     0.3673
QwenLM      0.5235
ERNIE       0.5102
ChatGLM     0.5306
Llama       0.5714
MCL         0.6122

References

  1. Gottfried, J.; Shearer, E. News Use Across Social Media Platforms 2016. Pew Research Center, 2016. Available online: https://apo.org.au/node/64483 (accessed on 3 June 2024).
  2. Huang, A.; Xu, R.; Chen, Y.; Guo, M. Research on multi-label user classification of social media based on ML-KNN algorithm. Technol. Forecast. Soc. Chang. 2023, 188, 122271. [Google Scholar] [CrossRef]
  3. Berkani, L.; Belkacem, S.; Ouafi, M.; Guessoum, A. Recommendation of users in social networks: A semantic and social based classification approach. Expert Syst. 2021, 38, e12634. [Google Scholar] [CrossRef]
  4. Su, X.; Xue, S.; Liu, F.; Wu, J.; Yang, J.; Zhou, C.; Hu, W.; Paris, C.; Nepal, S.; Jin, D.; et al. A comprehensive survey on community detection with deep learning. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 4682–4702. [Google Scholar] [CrossRef] [PubMed]
  5. Zeng, H.; Zhou, H.; Srivastava, A.; Kannan, R.; Prasanna, V. Graphsaint: Graph sampling based inductive learning method. arXiv 2019, arXiv:1907.04931. [Google Scholar]
  6. Jiang, J.; Ferrara, E. Social-LLM: Modeling User Behavior at Scale using Language Models and Social Network Data. arXiv 2023, arXiv:2401.00893. [Google Scholar]
  7. Xie, B.; Ma, X.; Shan, X.; Beheshti, A.; Yang, J.; Fan, H.; Wu, J. Multiknowledge and LLM-Inspired Heterogeneous Graph Neural Network for Fake News Detection. IEEE Trans. Comput. Soc. Syst. 2024. [Google Scholar] [CrossRef]
  8. Bai, H.; Chen, Z.; Lyu, M.R.; King, I.; Xu, Z. Neural relational topic models for scientific article analysis. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 27–36. [Google Scholar]
  9. Xie, Q.; Zhu, Y.; Huang, J.; Du, P.; Nie, J.Y. Graph neural collaborative topic model for citation recommendation. ACM Trans. Inf. Syst. (TOIS) 2021, 40, 1–30. [Google Scholar] [CrossRef]
  10. Ma, Z.; Dou, Z.; Xu, W.; Zhang, X.; Jiang, H.; Cao, Z.; Wen, J.R. Pre-training for ad-hoc retrieval: Hyperlink is also you need. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, 1–5 November 2021; pp. 1212–1221. [Google Scholar]
  11. Zhang, Y.; Jin, R.; Zhou, Z.H. Understanding bag-of-words model: A statistical framework. Int. J. Mach. Learn. Cybern. 2010, 1, 43–52. [Google Scholar] [CrossRef]
  12. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. In Proceedings of the Advances in Neural Information Processing Systems 26 (NIPS 2013), Lake Tahoe, NV, USA, 5–10 December 2013. [Google Scholar] [CrossRef]
  13. Duan, K.; Liu, Q.; Chua, T.S.; Yan, S.; Ooi, W.T.; Xie, Q.; He, J. Simteg: A frustratingly simple approach improves textual graph learning. arXiv 2023, arXiv:2308.02565. [Google Scholar]
  14. He, X.; Bresson, X.; Laurent, T.; Perold, A.; LeCun, Y.; Hooi, B. Harnessing explanations: Llm-to-lm interpreter for enhanced text-attributed graph representation learning. arXiv 2023, arXiv:2305.19523. [Google Scholar]
  15. Qin, Y.; Wang, X.; Zhang, Z.; Zhu, W. Disentangled representation learning with large language models for text-attributed graphs. arXiv 2023, arXiv:2310.18152. [Google Scholar]
  16. Brody, S.; Alon, U.; Yahav, E. How attentive are graph attention networks? arXiv 2021, arXiv:2105.14491. [Google Scholar]
  17. Yang, J.; Liu, Z.; Xiao, S.; Li, C.; Lian, D.; Agrawal, S.; Singh, A.; Sun, G.; Xie, X. Graphformers: Gnn-nested transformers for representation learning on textual graph. In Proceedings of the Advances in Neural Information Processing Systems 34 (NeurIPS 2021), Virtual, 6–14 December 2021; pp. 28798–28810. [Google Scholar]
  18. Wang, X.; Cui, P.; Wang, J.; Pei, J.; Zhu, W.; Yang, S. Community preserving network embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
  19. Wang, X.; Ji, H.; Shi, C.; Wang, B.; Ye, Y.; Cui, P.; Yu, P.S. Heterogeneous graph attention network. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2022–2032. [Google Scholar]
  20. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  21. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 1025–1035. [Google Scholar]
  22. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar] [CrossRef]
  23. Srivastava, A.; Sutton, C. Autoencoding variational inference for topic models. arXiv 2017, arXiv:1703.01488. [Google Scholar]
  24. Zhang, D.C.; Lauw, H.W. Topic modeling for multi-aspect listwise comparisons. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, 1–5 November 2021; pp. 2507–2516. [Google Scholar]
  25. Zhu, J.; Cui, Y.; Liu, Y.; Sun, H.; Li, X.; Pelger, M.; Yang, T.; Zhang, L.; Zhang, R.; Zhao, H. Textgnn: Improving text encoder via graph neural network in sponsored search. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 2848–2857. [Google Scholar]
  26. Li, C.; Pang, B.; Liu, Y.; Sun, H.; Liu, Z.; Xie, X.; Yang, T.; Cui, Y.; Zhang, L.; Zhang, Q. Adsgnn: Behavior-graph augmented relevance modeling in sponsored search. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021; pp. 223–232. [Google Scholar]
  27. Zhang, C.; Lauw, H.W. Topic modeling on document networks with adjacent-encoder. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6737–6745. [Google Scholar]
  28. Zhang, D.C.; Lauw, H.W. Topic Modeling on Document Networks with Dirichlet Optimal Transport Barycenter. IEEE Trans. Knowl. Data Eng. 2023, 36, 1328–1340. [Google Scholar] [CrossRef]
  29. Wang, H.; Feng, S.; He, T.; Tan, Z.; Han, X.; Tsvetkov, Y. Can language models solve graph problems in natural language? In Proceedings of the Advances in Neural Information Processing Systems 36 (NIPS 2023), New Orleans, LA, USA, 10–16 December 2023; pp. 30840–30861. [Google Scholar]
  30. Guo, J.; Du, L.; Liu, H.; Zhou, M.; He, X.; Han, S. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv 2023, arXiv:2305.15066. [Google Scholar]
  31. Ye, R.; Zhang, C.; Wang, R.; Xu, S.; Zhang, Y. Language is All a Graph Needs. arXiv 2023, arXiv:2308.07134. [Google Scholar]
  32. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80. [Google Scholar] [CrossRef]
  33. Mnih, V.; Heess, N.; Graves, A. Recurrent models of visual attention. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 2204–2212. [Google Scholar]
  34. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  35. Xue, L.; Constant, N.; Roberts, A.; Kale, M.; Al-Rfou, R.; Siddhant, A.; Barua, A.; Raffel, C. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv 2020, arXiv:2010.11934. [Google Scholar]
  36. He, P.; Liu, X.; Gao, J.; Chen, W. Deberta: Decoding-enhanced bert with disentangled attention. arXiv 2020, arXiv:2006.03654. [Google Scholar]
  37. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. Llama: Open and efficient foundation language models. arXiv 2023, arXiv:2302.13971. [Google Scholar]
  38. Glm, T.; Zeng, A.; Xu, B.; Wang, B.; Zhang, C.; Yin, D.; Zhang, D.; Rojas, D.; Feng, G.; Zhao, H.; et al. Chatglm: A family of large language models from glm-130b to glm-4 all tools. arXiv 2024, arXiv:2406.12793. [Google Scholar]
  39. Bai, J.; Bai, S.; Chu, Y.; Cui, Z.; Dang, K.; Deng, X.; Fan, Y.; Ge, W.; Han, Y.; Huang, F.; et al. Qwen Technical Report. arXiv 2023, arXiv:2309.16609. [Google Scholar]
  40. Sun, Y.; Wang, S.; Feng, S.; Ding, S.; Pang, C.; Shang, J.; Liu, J.; Chen, X.; Zhao, Y.; Lu, Y.; et al. Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. arXiv 2021, arXiv:2107.02137. [Google Scholar]
Figure 1. An example of someone describing something as “unconventional”. This could be interpreted as a positive comment without context. The personal context shows that the user is a “foodie”. The local context indicates that the person he is replying to or commenting on has just expressed dissatisfaction with a certain food. The global context suggests that he is participating in a discussion about food. Based on the above multilevel context, this text is more likely to be interpreted as a criticism.
Figure 2. An overview of our proposed Multilevel Context Learner (MCL) model. First, we model the social network as a multilevel context textual-edge graph (MC-TEG). Second, we leverage the large language model (LLM) to capture multilevel contexts and generate embeddings for downstream tasks. Third, we train the bidirectional dynamic attention layer for node classification.
Figure 3. A simple comparison of our MC-TEG with other graphs. Traditional TAG treats all text as node attributes, making it unable to handle interactions or distinguish between user descriptions and post texts. Although TEG adds textual attributes to edges, it only reflects the direction of interactions and cannot handle multiple interactions. Our MC-TEG not only distinguishes between user descriptions and post texts but also further differentiates the relationships and directions of interactions.
Figure 4. The illustration of prompts and typical responses. The blue parts indicate the texts populated based on the content of the dataset, including the multilevel context and the list of categories. The red parts indicate the text for semantic embedding. The green parts indicate the output given by the LLM.
Figure 5. The results of the ablation experiment on different levels of context. MCL w/o P, MCL w/o L, and MCL w/o G, respectively, represent models where the personal context, local context, and global context are removed from the prompt. Subfigure (a) shows the ablation results of datasets on X, while subfigure (b) shows the ablation results of datasets on Sina Weibo.
Figure 6. The results of the ablation experiment on bidirectional dynamic attention layer. The bidirectional dynamic attention layer is, respectively, replaced by multilayer perceptron (MLP), graph convolutional network (GCN), and graph attention network (GAT) layers. Subfigure (a) shows the ablation results of datasets on X, while subfigure (b) shows the ablation results of datasets on Sina Weibo.
Figure 7. A comparison of the attention weight matrices, where $(q_i, k_j)$ represents the attention of node $i$ to node $j$. Subfigure (a) shows the attention weight matrix of classical GAT layers, while subfigure (b) shows the attention weight matrix of our bidirectional dynamic attention layer.
Table 1. Dataset statistics.
Platform      Dataset    Nodes   Edges   Classes
X             X_A        1394    1297    3
X             X_B        2750    3242    3
X             X_C        4432    6484    3
Sina Weibo    Weibo_A    996     878     3
Sina Weibo    Weibo_B    2168    2172    3
Sina Weibo    Weibo_C    3867    4344    3
Table 2. Performance comparison with baselines.
Dataset     X_A      X_B      X_C      Weibo_A  Weibo_B  Weibo_C
mT5         0.5646   0.5684   0.5560   0.5682   0.5585   0.5508
DeBERTa     0.5854   0.5953   0.5776   0.5391   0.5544   0.6020
QwenLM      0.6714   0.6578   0.5952   0.6164   0.6305   0.5702
ERNIE       0.6399   0.6458   0.6173   0.6084   0.6019   0.5833
ChatGLM     0.6851   0.6796   0.6356   0.6345   0.6526   0.6214
Llama       0.6894   0.6844   0.6293   0.6736   0.6678   0.6353
MCL         0.7798   0.7763   0.7461   0.7640   0.7289   0.7340