Article

Deep Learning for Information Triage on Twitter

1 Division of Information and Communication Engineering, Kitami Institute of Technology, Kitami 090-8507, Japan
2 Research Center for Strategic Assistance in the Prevention of Floods, Earthquakes and Regional Hazards (SAFER), Kitami Institute of Technology, Kitami 090-8507, Japan
3 Division of Civil and Environmental Engineering, Kitami Institute of Technology, Kitami 090-8507, Japan
* Author to whom correspondence should be addressed.
Current address: 165 Koen-cho, Kitami, 090-8507, Japan.
Yuuto Fukushima graduated from Kitami Institute of Technology in 2016.
Appl. Sci. 2021, 11(14), 6340; https://doi.org/10.3390/app11146340
Submission received: 21 May 2021 / Revised: 9 June 2021 / Accepted: 10 June 2021 / Published: 8 July 2021
(This article belongs to the Special Issue Recent Developments in Creative Language Processing)

Abstract:
In this paper, we present a Deep Learning-based system for the support of information triaging on Twitter during emergency situations, such as disasters, or other influential events, such as political elections. The system is based on the assumption that different types of information are required right after an event and some time after it occurs. In a preliminary study, we analyze the language behavior of Twitter users during two kinds of influential events, namely, natural disasters and political elections. In the study, we analyze the credibility of information included by users in tweets in the above-mentioned situations, by classifying the information into two kinds: Primary Information (first-hand reports) and Secondary Information (second-hand reports, retweets, etc.). We also perform sentiment analysis of the data to check user attitudes toward the occurring events. Next, we present the structure of the system and compare a number of classifiers, including the proposed one based on Convolutional Neural Networks. Finally, we validate the system by performing an in-depth analysis of information obtained after a number of additional events, including the eruption of the Japanese volcano Ontake on 27 September 2014, as well as heavy rains and typhoons that occurred in 2020. We confirm that the method works sufficiently well even when trained on data from nearly 10 years ago, which strongly suggests that the model is well-generalized and sufficiently grasps important aspects of each type of classified information.

1. Introduction

Social Networking Services (SNS), such as Facebook (https://www.facebook.com/ accessed on 23 June 2021) or Instagram (https://www.instagram.com, accessed on 23 June 2021), have mainly served as tools for communication with friends and acquaintances.
Twitter (https://Twitter.com/, accessed on 23 June 2021) is also one of the most popular SNS. It specializes in information dissemination in the form of short messages. Compared to other SNS, such as Facebook or Pinterest, the information disseminated on Twitter is also characterized by higher anonymity. By utilizing unique features such as “retweets” (a retweet, or RT, is a function on Twitter that allows users to highlight that a tweet contains a citation of a tweet posted by another user), “hashtags”, or local conventions, such as the Japanese kakusan kibo (“spread the news”), Twitter allows for easier transmission of information to an unspecified number of users than other SNS. The usefulness of such functions has made Twitter an important source of information in daily life, influencing the decision-making processes of many people.
The popularity of Twitter has also made it an effective instrument for tracking social tendencies. Therefore, much research has been actively conducted using data obtained from Twitter [1,2,3,4,5,6,7]. For example, Kuwano et al. [8] extracted tourist information from Twitter, and Umejima et al. [7] attempted to prevent the spread of false rumors by analyzing the phenomenon of Twitter hoaxes. Moreover, Aramaki et al. used Twitter to predict the spread of influenza [9]. In addition, Twitter has been considered an effective tool for information transmission during emergencies, such as the Great East Japan Earthquake, which occurred on 11 March 2011 [10]. In a more recent study, Karami et al. [11] proposed an analytical framework for Twitter Situational Awareness, in which they used text-mining methods, such as sentiment analysis and topic modeling, for disaster preparedness and response.
Furthermore, Twitter has become useful in political activities. For example, lawmakers are using Twitter in public relations activities (http://www.soumu.go.jp/senkyo/senkyo_s/naruhodo/naruhodo10.html, accessed on 23 June 2021). Moreover, in July 2013, the ban on the use of the Internet in political campaigns was lifted for the first time in history for the elections to the House of Councilors of Japan [12]. Thus, Twitter, being a useful tool for gathering information, has become an influential element of social infrastructure. Additionally, Casero-Ripollés et al. [13] analyzed how geolocation corresponds to political preferences in over 120 million tweets, and showed that the geographical location of users is strongly correlated with the polarity of their political conversations on Twitter, thus confirming the usefulness of Twitter for political information analysis.
With regard to the above, an appropriate selection of information is important, especially when gathering information in times of emergency and making decisions based on it. Much of the information appearing on Twitter consists of private opinions about a variety of topics, as well as hoax tweets and false rumors unrelated to the general topic that become mixed into the main thread. Therefore, a method for extracting only valid and useful tweets from this jumble of information becomes essential, and it is important to ensure the accuracy and the uniformity of the extracted information.
One of the means to determine the accuracy of information is using the concept of primary and secondary information. Primary information refers to the kind of information that a person directly saw, heard or personally did. Secondary information refers to indirect information, such as re-posting or re-telling what was described by someone else (a third party), such as describing a friend’s opinions about books, or what someone saw on TV.
Moreover, it has been pointed out by Kobayashi et al. that when making decisions or evaluating something (books, movies, products), people are always subject to psychological effects caused by external information [14]. Kahneman and Tversky call this “cognitive bias”, which hinders the perception of pure facts [15]. In situations of decision-making on the basis of ambiguous information, the existence of the cognitive bias factor causes the “initial value” (a person’s background, or what they have experienced previously) to affect their final judgment through the “anchor effect” (taking one’s background for granted). This causes the person to collect or remember only the information that is convenient for them, or to reinforce prejudicial information, which is also called “confirmation bias”. The existence of cognitive bias and related effects influencing a person’s decision-making process becomes a problem in situations of emergency or events of great importance, when obtaining accurate and unbiased information is crucial for making appropriate judgments.
In this study, in order to obtain accurate information of high uniformity to perform information triaging [16,17,18], we firstly perform a preliminary study using a sample of tweet logs from the time of the Great East Japan Earthquake. The basic tweet classification rules defined in the preliminary study are further used to classify other tweets by dividing them into representing either primary or secondary information. We also use these rules to analyze tweets from the time of general elections. In the latter, we found out that a third kind of information needs to be recognized. We call it “sesquiary” information, and place it between primary and secondary information. We analyze the tweet logs related to elections in accordance with the new rules, and investigate the effectiveness of the classification rules.
After validating the approach, we propose a system for automatic classification of tweets into primary, sesquiary and secondary. To optimize the system, we compare its performance on multiple feature sets and a number of classifiers. Finally, we test the system on completely new data, containing tweets obtained after the eruption of the Japanese volcano Ontake on 27 September 2014.
The outline of this paper is as follows. In Section 2 we describe the general idea of information triage and how it can be realized on Twitter. Section 3 presents an initial study into types of information found on Twitter, which becomes the basis for further analysis. We also describe the classification criteria for information triaging on Twitter, developed on the basis of the initial study, and describe the hypothesis regarding the change of information over time. In Section 6, we present the description of the proposed system for automatic information triaging on Twitter. Section 4 describes various experiments we performed to validate and test the proposed system. In Section 5 we present the analysis and classification results for the analyzed tweet logs, confirm the proposed hypothesis, and describe further improvement of classification criteria. Finally, we conclude the paper in Section 8.

2. Information Triage on Twitter

When using tools such as Twitter in the support of decision-making or determination of the present situation, appropriate selection of information becomes a crucial issue. This is especially difficult on Twitter, where topics of the tweets users search and the tweets unrelated to the searched topic often get intermixed. Thus, quick extraction of valid data according to its importance and urgency from the large amount of miscellaneous data becomes an important task.
The task of classification of information according to its importance and urgency is called information triage [17,18]. In cases when a mission or a task cannot be fully completed due to the limitations in time and resources, information triage becomes an important task, helping determine the priority of information according to certain criteria.
In information triaging, it is important to ensure the accuracy and the uniformity of information. In determining the accuracy of information, it is useful to apply the concepts of primary information and secondary information. Primary information refers to the kind of information a user has directly seen or heard. Secondary information refers to a second-hand report of what someone said, or what was written in a book or a newspaper, or appeared on television or on the Internet. In general, it is indirect information obtained through other sources.
When determining the uniformity of information, it is useful to consider the impact of cognitive bias. The existence of cognitive bias has been pointed out by Kahneman and Tversky [15] as a factor impairing the cognition of information based on facts. Cognitive bias consists of the “anchor effect” (when making decisions based on ambiguous information the initial value affects the decision-making process) and “confirmation bias” (when an observation is based on preconceptions of an individual, and they collect only the information convenient to themselves, thereby reinforcing self-preconceptions).
Automatic classification and dynamic switching through the above two types of information (primary and secondary) could help effectively provide information needed by users at the moment, which could be helpful in emergency situations such as disasters.
In previous research [19], we performed the analysis of primary and secondary information on Twitter and provided additional definition and classification criteria for sesquiary information (“sesquiary” meaning “1.5”, or “neither primary nor secondary”). In this research, based on the concept of information triage, we constructed an automatic classification method for tweets containing information of high uniformity and accuracy, by providing a set of classification criteria for different types of information. Furthermore, in previous research we developed a hypothesis that in the occurrence of emergency situations such as earthquakes, a different kind of information is required right after the emergency and after some time has passed from the occurrence of the emergency situation.

3. Preliminary Study: Types of Information Found on Twitter

3.1. Analysis of Tweets from the Great Earthquake

In this section, we perform the study of tweets from the time of the Great East Japan Earthquake and describe the results of classification of those tweets.
We used the data provided by Twitter Japan in the Big Data Project (https://sites.google.com/site/prj311/, accessed on 23 June 2021). The tweets represent a time period of one week from 11 March to 17 March 2011, from the time of the Great East Japan Earthquake.
After omitting 151 tweets from before the earthquake, we randomly extracted 6000 tweets, which were analyzed manually by six project members (expert annotators, each annotator analyzed 100 tweets). The manual analysis revealed that many of the tweets were actually retweets or contained other second-hand information. This led us to divide the information into primary information and secondary information.
Primary information refers to the kind of information that a person directly saw, heard or personally did. Secondary information refers to indirect information, such as re-posting or re-telling what was described by someone else, or describing a friend’s opinion about books, or what someone saw on TV. It is indirect information described on the Internet by a third party.
Primary information was represented by 1539 tweets (26%), secondary information by 2083 tweets (36%), and other kinds of information not coming under the definition of either primary or secondary information by 2227 tweets (38%) (Figure 1). Examples of each type of information are presented in Table 1.
Primary information was represented by tweets in which users directly describe their own state, such as samui (“I’m cold”) or tsurai (“It’s so hard...”). There were many tweets of this kind. In such tweets, the speaker refers to themselves using first-person expressions, such as watashi (“I” [general]) or boku (“I”, or “me” [masculine]). Apart from this, a frequent expression appearing in primary information tweets was nau (“now”). Additionally, an expression indicating that the user is describing their own present state was the use of a rhetorical figure called taigen-dome (ending a sentence with a noun or noun phrase, often used in Japanese poems, such as “shining stars” instead of “stars are shining”).
Compared to primary information, there was a larger amount of secondary information. However, much of this information was directed outside of the affected areas. Apart from that, there were numerous retweets containing information about the lifeline between the outside areas and the areas affected by the disaster, or in-tweet citations such as ...to no koto (“it is said that/they say that...”), or ...rashii (“apparently...”).
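The surface cues described above can be sketched as a simple rule-based detector. The snippet below is only an illustration: the cue lists are short assumptions drawn from the examples in this section, not the full criteria formalized in Table 2 and Table 3, and the precedence of citation markers over self-report cues is our assumption for this sketch.

```python
# Illustrative sketch of cue-based detection of primary vs. secondary
# information. The cue lists are assumptions drawn from the examples in
# the text, not the full classification criteria of Tables 2 and 3.

PRIMARY_CUES = ["watashi", "boku", "nau", "samui", "tsurai"]  # first-person / present-state cues
SECONDARY_CUES = ["RT", "to no koto", "rashii"]               # retweet / hearsay markers

def classify_tweet(text: str) -> str:
    """Label a tweet as 'secondary', 'primary', or 'other' by surface cues."""
    # Assumption of this sketch: a citation marker means the content is
    # reported, so secondary cues are checked first.
    if any(cue in text for cue in SECONDARY_CUES):
        return "secondary"
    if any(cue in text for cue in PRIMARY_CUES):
        return "primary"
    return "other"

print(classify_tweet("samui nau"))               # primary
print(classify_tweet("RT @user: roads closed"))  # secondary
```

A real detector would operate on the morphologically analyzed text rather than on raw substrings, but the precedence structure stays the same.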

Definition of Rules for Detection of Primary and Secondary Information

Based on the classification results of the tweet logs of the Great East Japan Earthquake, we defined the rules (a set of criteria) for the classification of this kind of content.
In tweets like these, posted during times requiring urgent decision-making, what is important is the accuracy of the information. If one wants to focus on the accuracy of information, it is useful to classify the content into primary and secondary information.
Primary information refers to the kind of information that a person directly saw, heard or personally did. Secondary information refers to indirect information, such as re-posting or re-telling what was described by someone else (third party), or describing friend’s opinions about books, or what someone saw on TV.
The detailed criteria for the classification of primary information are presented in Table 2, and the detailed criteria for the classification of secondary information in Table 3.

3.2. Analysis of Election Tweets

Next, we analyzed two types of elections taking place in different periods of time. Firstly, we analyzed tweets about Lower House elections (elections to the House of Representatives). Based on the analysis of those tweets, we redefined the classification criteria. Next, we performed additional analysis of tweets about Upper House elections (elections to the House of Councilors).

Lower House Election Tweets

In order to classify the tweets about the general elections, we used a service site called Hashtag Cloud (http://hashtagcloud.net/, accessed on 23 June 2014; the domain has since closed, and at present the site’s functionalities are incorporated into Twitter). This site provided all tweets saved from a week back, grouped by hashtags. We started the search by looking for the hashtag “#GeneralElections” and downloading all tweets containing this hashtag. We began the downloading process on 3 December 2012 (the day of public announcement of the elections) and collected the tweets day by day.
In the analysis, we used only the tweets which appeared between 3 and 4 December 2012. There were 1503 tweets. The tweets obtained by using the “#GeneralElections” hashtag were approximately 30 characters longer than usual tweets (http://teapipin.blog10.fc2.com/blog-entry-294.html, accessed on 23 June 2021). They also contained more decisive and conclusive expressions. Moreover, compared to the 23% average ratio of replies (http://b.hatena.ne.jp/entry/www.tommyjp.com/2010/10/7123rt6.html, accessed on 23 June 2021), the ratio of replies to election-related tweets was very low, only 5%.
Among all 1503 tweets, primary information was represented by 88% (1317) of the tweets, and secondary information by merely 12% (186) of the tweets (see Figure 2).
In addition, out of 82 unofficial retweets, 57 were classified as primary information, and out of 64 tweets containing the annotation “#RT”, 52 were classified as primary information. Among unofficial RTs, there were many comments supporting tweets introducing political parties, and personal opinions about tweets regarding amendments to the constitution. Among “#RT” tweets, there were also users who included a link to a site supporting the election.
In secondary information, there were official RTs which copied the election poster, and tweets containing phrases like “They say that there is an election campaign speech at...”. Moreover, out of the total of 1503 tweets, 997 contained some kind of URL information, including information about candidates which could influence the choice of a candidate. For example, there were 197 tweets containing links to a list of election candidates divided by district. This kind of information is often included in primary information tweets and often accompanies positive or negative opinions about political parties.
Next, from the total of 1317 primary information tweets, we randomly extracted 1000 and classified them into three categories: positive, negative, or neutral. Objective opinions were considered neutral. As a result of the classification, out of 1000 tweets, there were 68 (7%) positive tweets, 771 (77%) neutral tweets, and 161 (16%) negative tweets (see Figure 3).
In positive tweets, there were many direct positive expressions about particular political parties, such as, “I support the party of...!”. In negative tweets, there were many expressions, such as, “I will never vote for the party of...”. There was also a great deal of neutral tweets which contained lists of candidates for each city district.
Primary, Secondary and “Sesquiary” Information
Based on the analysis results of tweet logs from both the Great East Japan Earthquake and the general election tweet logs, we redefined our classification criteria.
It was not possible to apply the classification criteria from the Great East Japan Earthquake alone to the general election tweets, due to the fact that in tweets about the general elections, different information is often considered important. In the disaster tweets, factual information was the most important. In the election tweets, users often wrote about their political preferences; thus, it was also important to take into consideration information from the borderline of pure fact and rumor, such as opinions or emotional attitudes. In the Earthquake tweets, this kind of information is mostly considered noise. However, in election tweets, private opinions and emotional comments could be useful as a reference. Therefore, it is important to distinguish this kind of information from the rest and annotate it separately.
To do this, we have defined a third kind of information which is neither primary nor secondary, though keeping a structure of its own, namely, “sesquiary” (“sesqui-” = 1.5) information (see Table 4).
Many tweets represent mixed information. In order to resolve such conflicts, we applied the following heuristic rules:
(1) If a tweet contains different kinds of information, priority is given to the lower kind;
(2) In cases of only sesquiary and secondary information appearing in a tweet, priority is given to secondary information.

3.3. Additional Analysis

Having redefined the original rules, we performed additional analysis of the Great Earthquake tweets and the Lower House Election tweets. Moreover, we performed additional analysis on completely new data containing tweets from the time of Upper House Elections.

3.3.1. Great Earthquake Tweets

Apart from the tweets we had used as the data for classification, we randomly selected another 600 tweets. Unlike the data collected by limiting the topic to specific keywords, such as “#GeneralElections”, these data were gathered from all tweets that appeared after the earthquake. Therefore, the average string length of a tweet and the number of replies were close to the general average.
When categorized according to the original definitions, the data contained 30% of primary information (183 tweets), 25% of secondary information (148 tweets), and 45% of other (269 tweets). See Figure 4 for details. The tweets that stood out consisted of incomprehensible entries, such as “@—njgo”, or contained single words, such as Yahoo-bokin (“Yahoo donations”).
When categorized according to the redefined rules, the data contained 23% of primary information (141 tweets), 35% of sesquiary information (210 tweets), 25% of secondary information (148 tweets), and 17% of other (101 tweets). See Figure 5 for details. The tweets that stood out contained entries resembling talking to oneself, such as “Souieba sotsugyoushiki dounaru no darou” (“BTW, What will happen with the graduation ceremony?”), or greetings sent to other users, such as “@—Itterasshaaai!” (“Bon voyage!/See you later!”).

3.3.2. Lower House Election Tweets

Next, we categorized the previously used election tweets according to the redefined rules. The data contained 30% of primary information (449 tweets), 57% of sesquiary information (849 tweets), and 13% of secondary information (194 tweets). See Figure 6 for details.
There was still a small number of tweets (11 cases, 0.7%) which did not fit into any of the information categories. These were incomprehensible tweets or greetings, as in the Earthquake tweets. However, there were also tweets which used hashtags and unofficial retweets, such as “#Sousenkyo RT @show-you-all : ishin/jimin no ushiro ni wa, Hashimoto Tōru ya...” (“#GeneralElections RT @show-you-all : Behind the Restoration and the Liberal Democratic Party, it’s Tōru Hashimoto and...”).

3.3.3. Upper House Election Tweets

Using Hashtag Cloud, we collected an additional set of tweets for classification. The set consisted of tweets from the 23rd regular election of members of the Japanese House of Councilors (Upper House election). We collected 22,176 tweets from the time period of 4 July 2013 (official announcement of the election) to 21 July 2013.
From the obtained data, we extracted 93 official retweets as secondary information, and from the remaining 22,083 tweets we randomly extracted 2000 for classification. The average length of election tweets this time was 45 characters, which is approximately 15 characters longer than usual tweets. This suggests that election tweets contain more information in general. Primary information was represented by 711 tweets (36%), sesquiary information by 933 tweets (47%), and secondary information by 286 tweets (14%) (see Figure 7).
There were also 70 tweets (3%) which did not fit in any of the three categories. These tweets consisted only of hashtags, place names, or greetings. Unofficial retweets were, in most cases, classified as sesquiary information (135 tweets), and sometimes as primary information (29 tweets). As an interesting remark, 90% of all data classified as primary information also contained some amount of secondary information. Moreover, none of the primary information tweets contained either positive or negative content (all primary information tweets were neutral).

3.4. Discussion

In general, the Great East Japan Earthquake tweets contained much highly accurate and reliable information (primary information), but they also contained much noise, especially with regard to the areas affected by the earthquake.
Within all unofficial retweets from the general election for the Lower House (82 tweets), there were 57 tweets (69.5%) which contained primary information. Before the classification, we assumed that all or almost all of unofficial retweets would contain primary information. The fact that more than 30% of unofficial retweets contained secondary information was an unexpected result. This means that there were many tweets for which the authors did not want to send an official retweet but still wanted the cited tweet to spread.
Additionally, in tweets containing URL information, there were many which contained information helpful for voters. This finding could be useful in specifying helpful information in the future.
When focusing on the opinionated information within the primary information, 77% of tweets contained neutral expressions. Neutral tweets contained objective opinions written from a neutral perspective, which could be useful for other voters in making their choice in the election. This means that automatically extracting neutral tweets from primary information could help in the extraction of useful information in the future. Tweets that contained either positive or negative information usually imposed a user’s personal, biased ideas and cannot be considered useful in gathering information for elections. However, since the ratio of positive to negative information for each political party could be useful (such as in predicting election results), a deeper analysis and stricter discrimination standards are required.
On 3 December (the official announcement day of the beginning of the election period) there were 623 tweets, whereas on 16 December (voting day) there were as many as 13,093 tweets. The number of tweets per day increased as voting day approached, reached its peak on 16 and 17 December, and slowly decreased thereafter. From the fact that there was a large number of tweets during the voting days, we infer that many users focused on the results of the election. Therefore, the tweets that appeared around the official announcement day and the voting day are likely to contain different kinds of information.
In the election tweets that were classified by redefined rules based on the analysis results of the two first experiments, we were able to extract primary information more precisely. However, there were still tweets which remained unclassified, such as the ones containing greetings. In the future, it is necessary to consider how to handle these cases as well.
The classification results show that there was as much as 47% of the sesquiary information, and the average length of one tweet was longer than usual. This means that people were more interested in expressing their own opinions than retweeting other people’s opinions, which indicates an interest in politics in general. However, the third lowest voting rate in the history of postwar Japan (https://www.jiji.com/jc/graphics?p=ve_pol_election-sangiin20130717j-04-w380 accessed on 23 June 2021) does not confirm this interest in the actual behavior of voters.
Figure 8 shows that the number of tweets increased as election day approached, which suggests that people were generally interested in the election as such. Additionally, the age distribution of Twitter users corresponds to the increase in younger age groups taking part in the elections (http://web-tan.forum.impressrd.jp/e/2012/05/11/12694, accessed on 23 June 2021), which means that SNS platforms such as Twitter could positively influence social and political awareness. Because of this visible influence of SNS on social and political life, on 19 April 2013 Japan allowed the use of the Internet as a venue for political campaigns. However, this does not necessarily result in people going to vote, and the number of election tweets cannot be considered related in any way to the voting rate.
All primary information tweets were neutral. This indicates that by applying the idea of sesquiary information, we were able to reduce the noise in primary information. Although there were tweets containing both primary and secondary information (for example, “Historically first elections after raising the ban for Internet elections. Let’s go voting everyone!”), we did not go into the details of those this time. The presence of such cases means that sometimes tweets classified as primary information could contain some amount of positive or negative opinions. This could influence the user’s cognitive bias, and therefore, all tweets classified as primary information on the document level should also be re-classified on sentence level in the future.
Moreover, in reality, different kinds of information are often mixed in one tweet, which causes the ambiguities in the comprehension of information. Therefore, in addition to the presented classification criteria, we propose the following heuristic rules to deal with such conflicts.
  • If different kinds (primary, sesquiary, secondary) of information are mixed in one tweet, priority is given to information of the lower kind (e.g., “primary” over “sesquiary” and “secondary”).
  • If only secondary and sesquiary information is contained in the tweet, priority is given to secondary information (e.g., “I think that...”, or “News about...”).
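These two precedence rules can be expressed as a minimal sketch: the kinds of information detected within a tweet are reduced to a single tweet-level label. The ranking below is simply an encoding of the two rules above; the function name and set-based interface are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the conflict-resolution heuristic: given the set of
# information kinds detected within one tweet, pick the label for the
# whole tweet. The rank order encodes the two rules above: primary wins
# over everything, and secondary wins over sesquiary.

PRIORITY = ["primary", "secondary", "sesquiary"]  # highest priority first

def resolve(kinds):
    """Return the tweet-level label for a set of detected information kinds."""
    for kind in PRIORITY:
        if kind in kinds:
            return kind
    return "other"  # greetings, incomprehensible entries, etc.

print(resolve({"primary", "sesquiary", "secondary"}))  # primary
print(resolve({"sesquiary", "secondary"}))             # secondary
```

Note that a single linear priority order satisfies both rules: any mixture containing primary information resolves to primary, while a sesquiary–secondary mixture resolves to secondary.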
The outline of the system is represented in Figure 9.

3.5. Data Preprocessing

Since our information triage method is aimed primarily at providing help in Japan (although other language versions are possible later), we focused on developing the method first on data in the Japanese language.
The datasets used in this research (see Section 3) were in Japanese, which posed several challenges. Firstly, written Japanese does not use spaces (“ ”). Therefore, we needed to preprocess the dataset to make the sentences separable into elements for feature extraction. We used MeCab (http://taku910.github.io/mecab/, accessed on 23 June 2021), a Japanese morphological analyzer, and CaboCha (http://taku910.github.io/cabocha/, accessed on 23 June 2021), a Japanese dependency structure analyzer, to preprocess the dataset in the following ways (the performance of MeCab is reported at around 95–97% [20], and that of CaboCha at around 90% [21], for standard language; although we acknowledge that in some cases the language used on Twitter could cause errors in POS tagging and word segmentation, we did not retrain the basic tools to fit our data, because we wanted the method to rely on widely available resources and remain easily reproducible; additionally, we assumed that even if such errors occur, as long as they are systematic, they will not cause trouble):
  • Tokenization: All words, punctuation marks, and so forth are separated by spaces (later: TOK).
  • Lemmatization: Like the above, but the words are represented in their generic (dictionary) forms, or “lemmas” (later: LEM).
  • Parts of speech: Words are replaced with their representative parts of speech (later: POS).
  • Tokens with POS: Both words and POS information is included in one element (later: TOK + POS).
  • Lemmas with POS: Like the above, but with lemmas instead of words (later: LEM + POS).
  • Tokens with Named Entity Recognition: Words encoded together with information on what named entities (private name of a person, organization, numericals, etc.) appear in the sentence. The NER information is annotated by CaboCha (later: TOK + NER).
  • Lemmas with NER: Like the above but with lemmas (later: LEM + NER).
  • Chunking: Larger sub-parts of sentences separated syntactically, such as noun phrase, verb phrase, predicates, etc., but without dependency relations (later: CHNK).
  • Dependency structure: Same as above, but with information regarding syntactical relations between chunks (later: DEP).
  • Chunking with NER: Information on named entities is encoded in chunks (later: CHNK + NER).
  • Dependency structure with Named Entities: Both dependency relations and named entities are included in each element (later: DEP+NER).
Five examples of preprocessing are represented in Table 5. Theoretically, the more generalized a sentence is, the less unique and frequent patterns it will contain, but the produced patterns will be more frequent (e.g., there are more ADJ N patterns than “pleasant day”).
We compared the results for different preprocessing methods to find out whether it is better for information triage to represent sentences in a more generalized or a more specific way.
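The encodings above can be illustrated with a toy, hand-annotated example. Note that the real pipeline runs MeCab and CaboCha on Japanese text; the English tokens, lemmas, and POS tags below are assumptions made purely for illustration:

```python
# Hand-annotated toy sentence standing in for MeCab/CaboCha output.
tokens = ["The", "fires", "spread", "quickly"]
lemmas = ["the", "fire", "spread", "quickly"]
pos    = ["DET", "N", "V", "ADV"]

def encode(mode):
    """Produce the space-separated feature representation for one
    of the preprocessing modes described above (subset shown)."""
    if mode == "TOK":
        return " ".join(tokens)
    if mode == "LEM":
        return " ".join(lemmas)
    if mode == "POS":
        return " ".join(pos)
    if mode == "TOK+POS":
        return " ".join(f"{t}/{p}" for t, p in zip(tokens, pos))
    if mode == "LEM+POS":
        return " ".join(f"{l}/{p}" for l, p in zip(lemmas, pos))
    raise ValueError(f"unknown mode: {mode}")
```

The POS-only encoding is the most generalized (many sentences collapse to the same pattern), while TOK keeps every sentence unique, which mirrors the generalization trade-off discussed above.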

3.6. Feature Extraction

From each of the 11 dataset versions, a Bag-of-Words language model was generated, producing 11 different models (Bag-of-Words/Tokens, Bag-of-Lemmas, Bag-of-POS, Bag-of-Chunks, etc.). Sentences from the dataset processed with those models were used later in the input layer of classification. We also applied the traditional feature weight calculation scheme, namely term frequency with inverse document frequency (tf*idf). Term frequency tf(t, d) refers here to the traditional raw frequency, meaning the number of times a term t (word, token) occurs in a document d. Inverse document frequency idf(t, D) is the logarithm of the total number of documents |D| in the corpus divided by the number of documents containing the term, n_t, as in Equation (1). Finally, tf*idf refers to term frequency multiplied by inverse document frequency.
idf(t, D) = log(|D| / n_t)   (1)
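The weighting scheme can be sketched in plain Python (a minimal illustration under the definitions above, not the production implementation; terms are assumed to occur in at least one document):

```python
import math
from collections import Counter

def tf(term, doc_tokens):
    # Raw frequency: number of times the term occurs in the document.
    return Counter(doc_tokens)[term]

def idf(term, corpus):
    # log(|D| / n_t): total number of documents over the number of
    # documents containing the term.
    n_t = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / n_t)

def tf_idf(term, doc_tokens, corpus):
    # Term frequency multiplied by inverse document frequency.
    return tf(term, doc_tokens) * idf(term, corpus)
```

A term occurring in every document receives idf = log(1) = 0, so ubiquitous tokens are weighted down, while rare but locally frequent terms gain weight.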

3.7. Classification Methods

In classification, we compared seven classifiers, beginning from simple kNN and Naïve Bayes, through SVMs and Tree-based classifiers, up to Neural Networks. After these thorough experiments, we also propose a Convolutional Neural Network-based approach with the best-matching data-preprocessing method.
The Naïve Bayes classifier is a supervised learning algorithm applying Bayes’ theorem with the assumption of a strong (naive) independence between pairs of features, traditionally used as a baseline in text classification tasks.
kNN, or the k-Nearest Neighbors classifier, takes as input the k closest training samples with assigned classes and classifies input samples by a majority vote among them. It is often applied as a baseline, next to Naïve Bayes. Here, we used settings from k = 1, in which the input sample is assigned to the class of its single nearest neighbor, up to k = 5.
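A toy sketch of the majority-vote procedure follows; Euclidean distance over dense feature vectors is an assumption made for illustration (the experiments used Bag-of-Words feature spaces):

```python
from collections import Counter

def knn_classify(query, train, k=1):
    """train: list of (feature_vector, label) pairs.
    Classify `query` by majority vote among its k nearest
    neighbors under squared Euclidean distance."""
    by_dist = sorted(
        train,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)),
    )
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]
```

With k = 1 the query simply inherits the label of its single nearest neighbor, matching the smallest setting used in the experiments.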
JRip, also known as Repeated Incremental Pruning to Produce Error Reduction (RIPPER) [22], learns rules incrementally to further optimize them. It has been especially efficient in the classification of noisy text [23].
J48 is an implementation of the C4.5 decision tree algorithm [24], which builds decision trees from a labeled dataset, selecting at each tree node the splitting criterion that best divides the data.
Random Forest creates multiple decision trees in the training phase and outputs the optimal class (the mode of the classes) in the classification phase [25]. An improvement of RF over standard decision trees is its ability to correct the over-fitting to the training set that is common in single decision trees [26].
SVMs, or support-vector machines [27], are a set of classifiers well-established in AI and NLP. SVMs represent data belonging to specified categories as points in space and find an optimal hyperplane to separate the examples from each category. We used four types of SVM functions, namely: linear, the original function, which finds the maximum-margin hyperplane dividing the samples; polynomial kernel, in which training samples are represented in a feature space over polynomials of the original variables; radial basis function (RBF) kernel, which approximates multivariate functions with a single univariate function, further radialised to be used in higher dimensions; and sigmoid, that is, the hyperbolic tangent function [28].
CNNs, or Convolutional Neural Networks, are an improved type of feed-forward artificial neural network model (i.e., multilayer perceptron). Although CNNs were originally designed for image recognition, their performance has been confirmed in many tasks, including NLP [29] and sentence classification [30].
We applied a Convolutional Neural Network implementation with Rectified Linear Units (ReLU) as the neuron activation function [31], and max pooling [32], which applies a max filter to non-overlapping sub-parts of the input to reduce dimensionality and, in effect, corrects over-fitting by down-sampling the input representation. We also applied dropout regularization on the penultimate layer, which prevents co-adaptation of hidden units by randomly omitting (dropping out) some of the hidden units during training [33].
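For illustration, the ReLU activation and 2 × 2 max pooling reduce to the following operations (a sketch of these two components only, not of the full CNN):

```python
def relu(x):
    # Rectified Linear Unit: pass positive values, zero out the rest.
    return max(0.0, x)

def max_pool_2x2(matrix):
    """Apply a max filter to non-overlapping 2x2 sub-parts of the
    input, halving each dimension (dimensions assumed even)."""
    return [
        [max(matrix[i][j], matrix[i][j + 1],
             matrix[i + 1][j], matrix[i + 1][j + 1])
         for j in range(0, len(matrix[0]), 2)]
        for i in range(0, len(matrix), 2)
    ]
```

For a 4 × 4 feature map, pooling yields a 2 × 2 map keeping only the strongest activation of each region, which is the down-sampling effect described above.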
We applied two versions of CNN. First, one hidden convolutional layer containing 100 units was applied as a proposed baseline. Second, the final proposed method consisted of two hidden convolutional layers, containing 20 and 100 feature maps, respectively, both layers with a 5 × 5 patch size and 2 × 2 max-pooling, and Stochastic Gradient Descent [34].

4. Evaluation Experiments

4.1. Datasets

In preparation of the datasets for the experiment, we reused the data collected for previous manual analysis, described in Section 3. We analyzed two types of situations, namely, the time of occurrence of a disaster and the period of elections, and on this basis, specified the criteria for classifying tweets as containing each type of information (primary, sesquiary, and secondary). Therefore, in the automatic classification experiment, we included samples representing each of the three types of information. Moreover, each type of situation (disasters and elections) could have, in reality, a different ratio of tweets representing the specified criteria. Therefore, to make the experiment reveal how an automatic classifier deals with the data in an objective and unbiased way, we randomly extracted 100 samples of each kind of information for each analyzed situation. We decided to normalize the number of samples to eliminate any bias in the data. When there was an insufficient number of samples for any kind of information, those were additionally collected to reach 100. This provided 600 samples. Moreover, we prepared additional samples from another disaster situation that took place more recently, namely, the eruption of a volcano on Mt. Ontake, on 27 September 2014.
This additional dataset is presented and analyzed in detail later, in Section 5. For the automatic classification experiment, we prepared two versions of the Ontake volcano eruption dataset: the first containing 300 samples, similarly to the previous datasets, and the second containing all 874 tweets we collected during one week from the eruption, with the actual ratio of each type of information.
All the above datasets, including those applied in manual analysis, are summarized in Table 6.
Since most of the tweets included in the above-mentioned datasets were more than six, and in some cases up to 10, years old, we additionally collected and annotated tweets about heavy rains and typhoons that occurred in 2020. An overview of the periods in which the datasets were collected is included for reference in Table 7. These additional datasets were curated to contain 350 samples per information type (see Table 8).

4.2. Experiment Setup

To develop the optimal model for automatic tweet analysis according to the information it represents, we divided the experiment into several phases.
Firstly, we aimed at selecting the best-performing classifier (see Section 3.7) with its optimal parameters, and the most adequate data preprocessing method (see Section 3.5). In this phase, we applied a 10-fold cross-validation on all of the balanced datasets combined, divided into the three types of represented information. We chose the top three performing classifiers, since there could always be differences when a classifier is applied to completely new data. We also checked whether the differences between the top three classifiers were statistically significant.
Next, we used the classifier that performed best, to train on both Earthquake and Election tweets, and then tested the whole system on the whole Ontake Eruption dataset, to see how the optimally trained classifier would perform on the data with a real-world ratio of information types.
Subsequently, we analyzed the performance of the best classifier on the Ontake Eruption dataset on a day-by-day basis. This final experiment was done to check if the optimally trained classifier would sustain the quality of classification throughout a longer period of time, which would be required in a practical application, such as searching for survivors of a disaster.
Finally, we verified the best-performing model on two datasets containing tweets from events that occurred more recently, in 2020, to check whether the model is well-generalized and can grasp the important information despite the passage of time.
As for the environment of all experiments, the preprocessed original dataset provided 11 separate datasets for the experiment (see Section 3.5 for details). Thus, the first experiment was performed 11 times, once for each kind of preprocessing. Each of the classifiers (Section 3.7) was tested on each version of the dataset in a 10-fold cross-validation procedure (which gives an overall number of 1210 experiment runs). The results in all experiment phases were calculated using standard Precision (P), Recall (R), balanced F-score (F1), and Accuracy (A). As the winning condition, we looked at which classifier achieved the highest balanced F-score, with higher Accuracy as the deciding condition in the case of two equally performing classifiers.
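For reference, the evaluation metrics can be computed from true-positive (tp), false-positive (fp), and false-negative (fn) counts as follows (a standard sketch, not the evaluation code used in the experiments):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard per-class metrics: Precision, Recall, and the
    balanced F-score (harmonic mean of the two)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

In a multi-class setting such as this one, the per-class scores are then averaged over the three information types.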

4.3. Results and Discussion

4.3.1. Best Classifier Selection

As for the general results, the classifiers can be divided into three groups. The first, represented by simple classifiers such as kNN or Naïve Bayes, obtained the lowest results. SVMs using the polynomial, radial, and sigmoid functions also fit in this group, with polynomial SVMs scoring the lowest of all used classifiers.
The second group of classifiers contains linear SVM, JRip, and Random Forest, as well as the simple CNN with one hidden layer. Interestingly, from this mediocre-scoring group, the simple CNN usually scored highest, with Random Forest second best.
Random Forest also scored highest of all classifiers for the dataset that was the most problematic for all of them, namely, the one using only part-of-speech preprocessed features. Unfortunately, although Random Forest scored highest for this dataset, the score was still very low and did not exceed 50%.
Finally, the highest scoring classifier of all was the one based on Deep Convolutional Neural Networks with two hidden layers, which scored as the highest for all dataset preprocessing methods (except POS). For most datasets, the two-hidden-layer CNN scored over 90%, outperforming all other classifiers.
When it comes to the best-performing dataset preprocessing method, the simple tokenized dataset, tokenization with parts of speech, and tokenization with named entities achieved the highest scores for most classifiers. The lemmatized dataset also scored highest twice, for kNN and J48, with F-scores equal to 0.617 and 0.744, respectively.
The best combination of dataset preprocessing and classifier parameters belonged to the proposed two-layer CNN with a dataset preprocessed with a shallow parsing method, using chunks as features. This version of the classifier obtained a remarkable 99% for all used metrics, including the balanced F-score and accuracy.
The second and third best were, respectively, also the two-layer CNN, but with feature sets based on dependency relations with named entities (F1 = 0.987), and lemmas with parts-of-speech (F1 = 0.939).
As for the statistical properties of the three best classifiers, we first calculated Cohen’s kappa statistic values for all three classifiers, based on their agreement with expected values, represented in contingency tables (see Table 9).
Beginning from the worst, the kappa values were κ = 0.9083, κ = 0.98, and κ = 0.985. For all three classifiers, the strength of agreement was considered “very good”.
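Cohen’s kappa can be computed from such a contingency table as follows (an illustrative sketch; the table values in the example are hypothetical, not those from Table 9):

```python
def cohens_kappa(table):
    """table[i][j]: count of samples placed in class i by the gold
    standard and in class j by the classifier.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(k)) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Values above 0.8 are conventionally interpreted as “very good” agreement, which is the scale applied to the three classifiers above.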
As the final step of the analysis, we performed an analysis of the most common types of errors which the proposed classifier made. From Table 9 one can see that, when the classifier made a mistake, it most often annotated a tweet as “primary”. This could suggest that the “primary” class has a tendency to be stronger in general, and even when a tweet expresses an opinion, it is mistakenly considered as primary information. Although six mistakes of this kind for 300 cases is not much (2%), we will focus on optimizing the criteria for primary information in the future.
Interestingly, the system did not make a mistake when distinguishing between secondary and sesquiary information, despite the fact that both of those types of information tend to contain some sort of opinionated expression.
All results have been summarized in Table 10.

4.3.2. Validation on New Data

After choosing the best-performing model, we additionally validated it on newer data. All results are presented in Table 11 and Figure 10, Figure 11 and Figure 12.
Firstly, we tested the model performance on data collected after the eruption of the Ontake volcano in 2014. The data were collected for a week after the eruption and not normalized for the number of classes, therefore representing a close-to-real-life ratio of all types of information. The results reached an averaged F1-score of 0.855. This is lower than the best, near-perfect score reached in the initial experiment. The decrease in performance can be attributed to the non-normalized number of classes, with the system mostly confusing secondary information with sesquiary and vice versa. However, primary information was classified at a high F1-score of 0.913, which is a promising and reassuring result, since when the system is used after a natural disaster, the main focus will be put on finding tweets expressing primary information.
However, the most satisfying result was obtained when classifying the newest data, collected during heavy rains and typhoons in 2020. We suspected that after over six years, and for part of the data even 10 years, since data collection, the performance of the model would greatly decrease due to multiple factors, such as changes in topics, changes in the frequently appearing features, or the general evolution of language used on Twitter. To our surprise, the model lost almost no performance, and was able to classify the new data correctly with an F1-score of 0.976 for the Heavy rains dataset and 0.979 for the Typhoons dataset. This strongly suggests the following. The data collected for this research were prepared and curated with high quality, and correctly represent the concepts of the three types of information. Moreover, the model itself is well-generalized on the provided data and can also be applied in practice in the future on other types of data.

5. Analysis of Information Change Over Time

The preliminary study (see Section 3) classified the tweet data related to the Great East Japan Earthquake and proposed the following hypothesis regarding the changes in information appearing in tweets over time.
1.
Sesquiary information grows right after the event occurs: not directly related to the event, or related to some extent, but not posted by users who directly experienced the event.
2.
Secondary information grows over time: some time after the event occurs, various kinds of information get mixed and the secondary information is used in the general spread of information.
In the present research, we verified this hypothesis by analyzing a new set of tweets from the emergency situation of an eruption of a volcano in Japan in September 2014.
In this section, we describe the results of analysis of tweet logs with the proposed system. The overall dataset contained 3706 tweets collected from 27 September 2014 (the day of eruption of Mt. Ontake) to 6 October 2014. From this dataset, we randomly extracted up to 100 tweets from each day (a total of 874 tweets). On October 5 and 6, the overall number of tweets did not exceed 100.
As for the result of the classification, primary information was represented by 129 tweets (14.76%), sesquiary information was represented by 513 tweets (58.70%), and secondary information was represented by 232 tweets (26.54%) (Figure 13). Among the primary information tweets, many contained content such as:
“Volcano fumes are approaching the Kuzo Pass. Volcanic ash has fallen here as well. My throat is itching. I am evacuating further.”
The sesquiary information contained tweets such as:
“I hope the climbers descend safely. I hope the injured will be rescued quickly.”
Among secondary information, there were tweets such as:
“I heard there were some injured people.”
In the result, similarly to the preliminary study, sesquiary information covered the largest amount of data, reaching 58.7% of all analyzed tweets. Moreover, similarly to the analysis of the tweets from the time of the Great East Japan Earthquake, sesquiary information contained numerous tweets by people located at a further distance from the event, but interested in what was happening, such as “I wonder what happened?”, or “I hope everyone is alright”. From this, we can infer that people sending sesquiary information tweets during an occurrence of an emergency are usually people who are away from the center of the event.

6. System for Information Triage on Twitter

In this section, we describe the structure of the proposed system for information triaging on Twitter.
To create the system, we firstly developed classification criteria for the types of information appearing on Twitter, explained in the previous sections. The classification criteria were based on the definitions of primary and secondary information, as well as the additional type, namely, sesquiary information (Table 4), which includes the user’s opinions, sentiments, feelings, and so forth.
These criteria were then used to collect the initial datasets (see Section 3), containing tweets representing each type of information. The datasets were used to train and test a classifier for the optimal performance in distinguishing between the three types of information.
In the experimental phase, we compared seven different classifiers, with additional parameter modifications, to choose the best-performing one. Moreover, we tested 11 methods of data preprocessing to further optimize the classifier performance. The final best-performing model was implemented in the proposed method.
Next, we investigated the transition of information each day (Figure 14).
Previous research [19] has hypothesized about the possible transition of information with time. We verified that in practice on the available data.
Immediately after an event occurs (here: September 27), the primary information tweets start appearing and the number grows each time there is a movement or change in the event. On the 28th, after most people have safely evacuated, the primary information decreases and the secondary information tweets gain in number.
Later, around September 30th, primary information again gains in number. This is due to the fact that the official search for survivors had started, during which difficulties occurred in getting through the debris, and volcanic fumes had fallen to the ground. In addition, a social and political fuss arose regarding the possible prediction of the volcanic eruption reported in the news, which caused a growth in sesquiary information, mostly consisting of opinions.
The sesquiary information grows largely immediately after the event occurs. This was most probably due to the fact that, except for the people who actually saw the event, many people could not understand what had happened and got confused.
When it comes to secondary information, it differs by event; however, it usually occurs in large quantities after some time has passed from the beginning of the event. Immediately after the event, there are not enough information sources for secondary information to occur. Moreover, the general interest in the event itself fades with time; thus, the number of tweets mentioning the event gradually decreases, and converges to zero after October 6th. Thus, despite the differences in the number of cases for each type of information mentioning the event, we confirmed the characteristic patterns of occurrence of each type of information.
From the perspective of information triage, at the beginning of an emergency, the first priority is rescue. Therefore, it is most important to find messages containing primary information, such as the two following.
Applsci 11 06340 i007
The first rescue team dispatched on the day of the eruption was the Mobile Police Squad of 12 people, who left at 13:55 from the Nagano Prefectural Police Department base. A message such as the above would have been invaluable for this team, as it could support the prediction of users’ behavior and, in practice, save lives. This shows how important it is to quickly provide the appropriate information.
After some time, during the search for the missing, it is necessary to support the efficiency of the rescue team’s decision-making. The 12th Brigade of the Self-Defense Forces established a post in the Otaki Village Office at around 17:30 on September 27th. However, there were tweets containing sesquiary information, such as the following.
Applsci 11 06340 i008
Therefore, the information on missing people was available for the rescue team even before and during the action. By efficiently utilizing such information, it could be possible to rescue more people.
We considered the following improvement of the classification criteria. In the extracted data, we found 39 tweets (16.81% of secondary information) that contained a hashtag with the name of a Japanese television program, such as “#Sukkiri” and “#Tokudane!” In previous research, only the names of news programs were taken into consideration, such as “NHK news” or “Asahi Shinbun Digital”. Thus, in carrying out the automation of tweet extraction, previous research only assumed pattern-matching of keywords such as “news” or “newspaper”. The present study shows there is also a need to apply a wider span of keywords to grasp the variety of the content provided by the media. This could be achieved by providing a method for an automatically updated named entity extraction system, such as NExT [35], which we consider in our future work.
Moreover, in order to apply such methods in practice, in the event of a disaster or emergency, it would be necessary to collect data not only by hashtags, but also by real-time keyword spotting and by burst detection.

7. Study Limitations

There are several possible limitations of this study.
Firstly, the datasets collected for this study were of moderate size and were collected in a specific time-frame. Therefore, it is possible that the models created on those datasets will have limited applicability to data collected and analyzed in the future. There are several ways to mitigate this problem. We plan to re-evaluate the method periodically and analyze classification errors. For this, we plan to apply the methodology of [36], who also re-evaluated their system, revealing a major performance drop after two years. Fortunately, we have shown that the method works well on data from different periods of time; thus, this limitation is not expected to be extensive or sudden.
Another limitation is related to the Twitter API used to collect the data, as well as for actual system application. Our reliance on the official Twitter API enforces all limitations the API possesses, such as a limited number of tweets allowed to be collected per hour/day. The stricter the limitations become in the future, the more limited the developed system will be. Such limitations can be an inconvenience in general, but can also potentially mean that a rescue team faces a situation where no more tweets are allowed for extraction on a given day, and thus it is not possible to reach people in distress. Therefore, if the official Twitter API limitations become too severe to keep the system reasonably applicable, we plan to either apply a third-party Twitter API or, in the worst case, move the system to a different SNS platform.

8. Conclusions

In this paper, we firstly presented our study of user linguistic behaviors in tweets of two kinds: disaster tweets and election tweets. In particular, we focused on the time of the Great East Japan Earthquake and the Lower and the Upper House elections.
As a basic idea, we assumed that people use SNS in decision-making or in determining the present status. Since it is important in such situations to automatically extract valid primary information, we firstly analyzed the earthquake tweets and defined rules for our classification criteria. Next, we classified election tweets based on these criteria and found out that even primary information tweets could contain other information that causes cognitive bias in readers, and thus it is necessary to further separate factual information from opinionated information. We named this kind of information, being neither primary nor secondary information, though still preserving a structure of its own, “sesquiary information”. This becomes especially important during the time of elections, when people look for opinions about the candidates. However, it is also useful in the time of a disaster. Right after the disaster occurs, it is most important to quickly obtain only primary information. However, after the first emergency phase passes and people begin to look for appropriate information to support their further planning and decision-making, sesquiary information gains significant importance as well.
Therefore, we redefined the classification rules based on the analysis of the Great East Japan Earthquake tweets and the tweets for the General Elections to the House of Representatives. We used the new classification rules to reanalyze both types of data. Moreover, we gathered new data from the time of a later election, to the House of Councilors. We collected all tweets from 4 to 21 July appearing under the hashtag “#Elections” and analyzed a randomly selected sample of 2000 tweets. We were able to confirm the effectiveness of the re-defined classification rules. In a further investigation into the opinionated contents of primary information tweets, we found out that all those tweets were neutral. However, since some of them contained sesquiary information, which could cause cognitive bias in readers, further post-processing of such cases is necessary.
After estimating the potential of using Twitter as a source of information for decision-making and status determination, we developed a system using Twitter for information triage during major events, such as disasters. To build the system, we compared a number of classifiers, including the proposed one based on Deep Convolutional Neural Networks. We also validated the system by performing an in-depth analysis of information obtained after a number of additional events, including an eruption of a Japanese volcano Ontake on 27 September 2014, as well as heavy rains and typhoons that occurred in 2020. We confirmed that the method worked sufficiently well even when trained on data from nearly 10 years ago, which strongly suggests that the proposed model is well-generalized and sufficiently grasps important aspects of each type of classified information.
Using the proposed method for automatic extraction of information with high accuracy and uniformity, we collected messages related to the eruption of the Mt. Ontake volcano through Twitter hashtags.
We found that sesquiary information appeared in large amounts. Moreover, after examining the change of occurrence of information in time, we confirmed that the ratio of information types does change with time. Thus, the classification criteria proposed in the hypothesis were valid.
In the future, we plan to undertake further study of changes in time and changes according to situation (when users are in need of different kinds of information). We also plan to analyze expressions and patterns which indicate changes in user behavior. Finally, we plan to implement those improvements and test the method in practice.

Author Contributions

Conceptualization, M.P. and F.M.; Data curation, Y.F. and Y.O.; Formal analysis, M.P.; Funding acquisition, M.P.; Investigation, M.P., Y.F. and Y.O.; Methodology, F.M.; Project administration, M.P.; Software, M.P.; Supervision, F.M., H.H., Y.M., K.T. and S.K.; Validation, M.P.; Visualization, M.P.; Writing—original draft, M.P.; Writing—review & editing, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Parts of data that can be released will be available upon request by the end of 2021.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

*0, *1, *2, ...: phrase number in DEP (see below).
1D, 2D, ...: depth of dependency relation in DEP (see below).
ADJ: Adjective
ADV: Adverb
AI: Artificial Intelligence
AUX: Auxiliary verb
BTW: “By the way” (phrase used in informal conversations)
CaboCha: Yet Another Japanese Dependency Structure Analyzer. http://taku910.github.io/cabocha/ (accessed on 24 June 2021)
CHNK: Chunking. Splitting of a sentence by its dependency phrases (chunks), but without showing information on their specific grammatical interconnections.
CNN: Convolutional Neural Networks
COP: Copula
DEP: Dependency structure. Analysis of sentence elements (words, phrases) revealing their grammatical interconnections.
DL: Deep Learning
EXCL: Exclamation mark
JRip/RIPPER: Repeated Incremental Pruning to Produce Error Reduction
kNN: k-Nearest Neighbors classifier
LEM: Lemmatization. Postprocessing of text returning each grammatically modified word to its dictionary form.
MeCab: Yet Another Part-of-Speech and Morphological Analyzer. https://taku910.github.io/mecab/ (accessed on 23 June 2021)
N: Noun
NER: Named Entity Recognition
NExT: Named entity extraction tool
NHK: Nippon Hōsō Kyōkai, or the Japan Broadcasting Corporation
NLP: Natural Language Processing
PP: Postpositional particle
ReLU: Rectified Linear Unit
RT: Re-tweet, a public forwarding of someone else’s tweet
SNS: Social Networking Services
SVM: Support-vector machine classifier
SYM: Symbol
TF-IDF, tf*idf: Term frequency with inverse document frequency
TOK: Tokenization. Splitting a sentence into separate tokens (words, punctuation marks, etc.).
TOP: Topic marker
TV: Television
URL: Uniform Resource Locator, colloquially termed a web address; a reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it (https://en.wikipedia.org/wiki/URL, accessed on 23 June 2021)

References

  1. Fujisaka, T.; Yong, L.; Sumiya, K. User Movement Pattern Analysis System Using a Real Space of Microbloggers for Local Event Detection and Property Verification. In Proceedings of the 72nd Annual Meeting of IPSJ, Tokyo, Japan, 8 March 2010; pp. 845–846.
  2. Iwaki, Y.; Jatowt, A.; Tanaka, K. Support for Discovery of Useful Articles on Microblogs. In Proceedings of the 1st Data Engineering and Information Management Forum (DEIM 2009), Kakegawa, Shizuoka, Japan, 8–10 March 2009.
  3. Jiang, L.; Yu, M.; Zhou, M.; Liu, X.; Zhao, T. Target-dependent Twitter Sentiment Classification. In Proceedings of ACL 2011, Portland, OR, USA, 19–24 June 2011.
  4. Kamishima, T. Problems for Collaborative Filtering: Privacy, Shilling Attack, and Variability of Users' Ratings [in Japanese]. IPSJ Mag. 2007, 48, 966–971.
  5. Kazama, K.; Imada, M.; Kashiwagi, K. Analysis of Information Propagation Network on Twitter [in Japanese]. In Proceedings of the 24th Annual Meeting of the Japanese Society for Artificial Intelligence, Nagasaki, Japan, 9–11 June 2010.
  6. Tanaka, A.; Tajima, T. A Proposal of a Classification Method for Tweets on Twitter [in Japanese]. In Proceedings of the 2nd Data Engineering and Information Management Forum (DEIM 2010), Awaji, Hyogo, Japan, 28 February–2 March 2010.
  7. Umejima, A.; Miyabe, M.; Aramaki, E.; Nadamoto, A. Tendency of Rumor and Correction Re-tweet on the Twitter During Disasters [in Japanese]. IPSJ SIG Notes 2011, DBS-152, 1–6.
  8. Kuwano, T.; Mitamura, T.; Watanabe, I.; Suzuki, Y.; Oobori, T. The Study of Tourism Informatics Using Twitter. Tour. Inf. Soc. J. 2012, 8, 27–38.
  9. Aramaki, E.; Maskawa, S.; Morita, M. Twitter Catches the Flu: Detecting Influenza Epidemics Using Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'11), Edinburgh, UK, 27–31 July 2011; pp. 1568–1576.
  10. Cho, S.E.; Jung, K.; Park, H.W. Social Media Use During Japan's 2011 Earthquake: How Twitter Transforms the Locus of Crisis Communication. Media Int. Aust. 2013, 149, 28–40.
  11. Karami, A.; Shah, V.; Vaezi, R.; Bansal, A. Twitter Speaks: A Case of National Disaster Situational Awareness. J. Inf. Sci. 2020, 46, 313–324.
  12. Miyabe, M.; Miura, A.; Aramaki, E. Use Trend Analysis of Twitter after the Great East Japan Earthquake. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work Companion, Seattle, WA, USA, 11–15 February 2012; pp. 175–178.
  13. Casero-Ripollés, A.; Micó-Sanz, J.L.; Díez-Bosch, M. Digital Public Sphere and Geography: The Influence of Physical Location on Twitter's Political Conversation. Media Commun. 2020, 8, 96–106.
  14. Kobayashi, T.; Ohshima, H.; Oyama, S.; Tanaka, K. Credibility Improvement of Review Information Based on Correction for Biases Which Resulted from Reviewers' Profiles and Their Regionality. In Proceedings of the 19th Data Engineering Workshop (DEWS 2008), B8-4, Miyazaki, Japan, 9–11 March 2008.
  15. Kahneman, D.; Tversky, A. Subjective Probability: A Judgment of Representativeness. Cogn. Psychol. 1972, 3, 430–454.
  16. Macskassy, S.A.; Provost, F. Intelligent Information Triage. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, New Orleans, LA, USA, 9–13 September 2001; pp. 318–326.
  17. Macskassy, S.A.; Hirsh, H.; Provost, F.; Sankaranarayanan, R.; Dhar, V. Information Triage Using Prospective Criteria. In User Modeling 2001 Workshop: Machine Learning, Information Retrieval and User Modeling; Springer: Berlin/Heidelberg, Germany, 2001.
  18. Marshall, C.C.; Shipman, F.M., III. Spatial Hypertext and the Practice of Information Triage. In Proceedings of the Eighth ACM Conference on Hypertext (HYPERTEXT'97); ACM: New York, NY, USA, 1997; pp. 124–133.
  19. Fukushima, Y.; Masui, F.; Ptaszynski, M. Classification of Tweet Logs Based on Directness Derived from Surface Expressions [in Japanese]. In Proceedings of the 28th Annual Meeting of the Japanese Society for Artificial Intelligence, Ehime, Japan, 12–15 May 2014.
  20. Mori, S.; Neubig, G. Language Resource Addition: Dictionary or Corpus? In Proceedings of LREC 2014, Reykjavik, Iceland, 26–31 May 2014; pp. 1631–1636.
  21. Kudo, T.; Matsumoto, Y. Japanese Dependency Analysis Using Cascaded Chunking. In Proceedings of the 6th Conference on Natural Language Learning; Association for Computational Linguistics: Stroudsburg, PA, USA, 2002; Volume 20, pp. 1–7.
  22. Cohen, W.W. Fast Effective Rule Induction. In Machine Learning Proceedings 1995, Tahoe City, CA, USA, 9–12 July 1995; pp. 115–123.
  23. Sasaki, M.; Kita, K. Rule-Based Text Categorization Using Hierarchical Categories. In Proceedings of SMC'98, the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, 14 October 1998; Volume 3, pp. 2827–2830.
  24. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Mateo, CA, USA, 1993.
  25. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  26. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer Series in Statistics; Springer: New York, NY, USA, 2013.
  27. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297.
  28. Lin, H.T.; Lin, C.J. A Study on Sigmoid Kernels for SVM and the Training of Non-PSD Kernels by SMO-Type Methods. Neural Comput. 2003, 3, 1–32.
  29. Collobert, R.; Weston, J. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; ACM: New York, NY, USA, 2008; pp. 160–167.
  30. Kim, Y. Convolutional Neural Networks for Sentence Classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1746–1751.
  31. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
  32. Scherer, D.; Müller, A.; Behnke, S. Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition. In International Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2010; pp. 92–101.
  33. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Improving Neural Networks by Preventing Co-adaptation of Feature Detectors. arXiv 2012, arXiv:1207.0580.
  34. LeCun, Y.; Bottou, L.; Orr, G.B.; Müller, K.R. Efficient BackProp. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 1998; pp. 9–50.
  35. Watanabe, I.; Masui, F.; Fukumoto, J. Refinement and Improvement of Usability of Named Entity Extraction Tool NExT. In Proceedings of the 10th Annual Meeting of the Association for Natural Language Processing, Tokyo, Japan, 15–19 March 2004; pp. 413–415.
  36. Ptaszynski, M.; Masui, F.; Nitta, T.; Hatakeyama, S.; Kimura, Y.; Rzepka, R.; Araki, K. Sustainable Cyberbullying Detection with Category-Maximized Relevance of Harmful Phrases and Double-Filtered Automatic Optimization. Int. J. Child Comput. Interact. 2016, 8, 15–30.
Figure 1. Ratio of primary, secondary and other information in tweets from the time of the Great East Japan Earthquake.
Figure 2. Ratio of primary and secondary information in Lower House election tweets.
Figure 3. Classification of Lower House election tweets into positive, negative, and neutral.
Figure 4. A breakdown of classification of the additional Earthquake tweets according to the original criteria.
Figure 5. A breakdown of classification of the additional Earthquake tweets according to the redefined criteria.
Figure 6. A breakdown of re-classification of the Lower House election tweets according to the redefined criteria.
Figure 7. A breakdown of classification of Upper House election tweets (redefined criteria).
Figure 8. Tweets that appeared each day until the day of the election.
Figure 9. Outline of the system.
Figure 10. Visualization of classification results for Ontake eruption tweets.
Figure 11. Visualization of classification results for Heavy rains tweets.
Figure 12. Visualization of classification results for Typhoon tweets.
Figure 13. Tweet log classification.
Figure 14. Change in the number of tweets each day.
Table 1. Examples of each type of information found in the Earthquake tweets (Example / Romanization / Translation).

Primary Information:
- 冷蔵庫あいちゃって中全部落ちてきたよー / Reizoko aichatte naka zenbu ochitekitayo- / My fridge opened and everything fell on the ground!
- しかしおなかすいたにゃー / Shikashi onaka suita nya- / Oh, but I'm soo hungree
- 停電きたぁ / Teiden kitaa / Here comes the power cut

Secondary Information:
- RT @***:東急戦、世田谷線以外は前線再開。 / RT @***: Tokyuu-sen, Setagaya-sen igai wa zen sen saikai. / RT @***: Tokyuu-war [line], all lines except Setagaya back on track.
- らしい。心配。@RT***:東京来て一番でかい / Rashii. Shinpai. RT @***: Tokyo kite ichiban dekai / Apparently. I'm worried. RT @***: The biggest since I came to Tokyo
- RT @***:RT してください!! 全国避難所一覧 / RT @***: RT shite kudasai!! Zenkoku hinanjo ichiran / RT @***: Please RT!! National list of shelters

Other:
- @*** (゜ロ゜) 笑 / @*** (゜ロ゜) warai / @*** (゜ロ゜) laugh
- くぅぅぅ・・・ / Kuuuu… / Kuuuu… (sound of rumbling tummy)
- 花見だと? / Hanami da to? / You wanna go view cherry blossoms [in a situation like this]!?
Table 2. Definition of primary information with examples from election tweets.

Classification criteria (with Romanization / Translation of each example):
- Tweets containing facts that one could directly confirm, such as things one personally saw, heard, or did. Example: Aomori-ken nai no shūinsen no rikkōho yoteisha dōga wo satsuei shimashita / I took a video of an expected candidate for the House of Representatives elections in Aomori Prefecture.
- Tweets containing predicative expressions, such as -da or -dearu.
- Tweets containing unofficial retweets with one's own other contents, such as one's personal opinions about them. Example: Shiji soshiki no jichirō no fushimatsu wo zeikin tsukkonde rikabari shita dake de jiman dekiru jisseki dewa nai yo ne. RT @ / Just by recovering by putting people's tax money to cover up the misconducts of self-governing body workers of one's supporting organizations is not yet an achievement of which one should be boastful. RT @
- Original tweets posted with the annotation kakusan-kibō ("spread the news"), an annotation used only in Japanese tweets to inform other users that the tweet was written to be widely retweeted (similar in English to #RT). Example: [Kakusan-kibō] Fukuoka 10-ku no zen kōho-sha no seisaku wo dōga de chekku dekimasu. / [Kakusan-kibō] You can check the videos showing the policy of all candidates from Fukuoka District 10.
- Tweets containing the annotation #RT (similar to kakusan-kibō), except those that have a possibility of rumor (containing phrases like ...rashii or ...mitai). Example: Senkyo jōho saito "Erekutopedia" saito no shūchi ni go-kyōryoku onegai itashimasu #RT / Please help in making the election information site "Electopedia" widely known. #RT
- Tweets containing the phrase ...nau ("now"), a phrase indicating that a person is doing something at the moment of writing. Example: Ashita kara no senkyo ni mukete gakushū nau. (To iitsutsu netto nau) / Learning for the upcoming elections now. (Actually just surfing on the Web now)
Table 3. Definition of secondary information with examples from election tweets.

Classification criteria:
- Tweets citing or referring to news or news sites (by using a URL address or phrases such as "News about...").
- Official RT: tweets in the official Twitter citation form containing somebody else's tweet, which allows one to easily forward a particular tweet to one's followers.
- Tweets containing phrases indicating second-hand information, such as "I heard/saw that..." or "Apparently...".
- Unofficial RT containing neither one's personal opinions nor other content.
Table 4. Redefinition of classification rules (tweet types with examples).

Primary:
- Factual information: These are historically first elections after raising the ban for Internet elections
- Description of an action: I went to give my vote
- Decisive expressions: Taro Yamamoto will be elected with no doubt
- Interview contents: Making bad guys out of politicians leads to nothing / Interview with Sugawara Taku
- Policy: Policy 1. of Tamiya Kaichi: Voice of the people is more important than the pressure of large companies

Sesquiary:
- Expression of an intention: I'm going to the elections!!
- Emotional expressions: Congratulations Taro Yamamoto!! I'm so happy for you!
- Opinions: Taro Yamamoto, talk in a more concise way!
- A call to action: Let's vote everyone without abstention!
- Introduction of an URL link: Wanna know how Dietmen're really thinking? Check here!

Secondary:
- Official RT
- Things seen on TV (including facts): Here is again Ikegami the stabilizer with a quick report of votes counted!
- Expressions indicating a rumor: This arse interfering in the city council elections, they say he is a supporter of the opposition
- Written reproduction of original information: Intention to resign in case of defeat in elections in Tokyo - Asahi Shimbun Digital
- Citations: "It's possible to achieve something big only by helping each other". by Nakatomi Nokamako
Table 5. Three examples of preprocessing of a sentence in Japanese; N = noun, PP = postpositional particle, ADV = adverb, ADJ = adjective, AUX = auxiliary verb, SYM = symbol, 1D, 2D, ...= depth of dependency relation, *0, *1, *2, ...= phrase number.
Sentence: 今日はなんて気持ちいい日なんだ!
Transcription in alphabet: Kyōwanantekimochiiihinanda!
Glosses: Today TOP what pleasant day COP EXCL
Translation: What a pleasant day it is today!
Preprocessing examples
–TOK:Kyō | wa | nante | kimochiii | hi | nanda | !
–POS:N | PP | ADV | ADJ | N | AUX | SYM
–TOK+POS:Kyō_N|wa_PP|nante_ADV|kimochi_ii_ADJ|hi_N|nanda_AUX|!_SYM
–CHNK:Kyō_wa | nante | kimochi_ii | hi_nanda!
–DEP:*0_3D_Kyō_wa|*1_2D_nante|*2_3D_kimochi_ii|*3_-1D_hi_nanda!
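The representations in Table 5 are produced with MeCab (tokenization, POS tagging, lemmatization) and CaboCha (chunking, dependency parsing). As a minimal sketch of how the pipe-delimited feature strings are assembled once those analyzers have run, the snippet below hardcodes the analyzers' output for the example sentence (token, POS, and chunk values are given by hand for illustration):

```python
# Hardcoded analyzer output for the example sentence in Table 5;
# in the actual pipeline these come from MeCab and CaboCha.
tokens = ["Kyō", "wa", "nante", "kimochiii", "hi", "nanda", "!"]
pos = ["N", "PP", "ADV", "ADJ", "N", "AUX", "SYM"]
# Chunk boundaries as (start, end) token-index ranges (dependency phrases).
chunks = [(0, 2), (2, 3), (3, 4), (4, 7)]

tok = " | ".join(tokens)                                   # TOK format
tok_pos = "|".join(f"{t}_{p}" for t, p in zip(tokens, pos))  # TOK+POS format
chnk = " | ".join("_".join(tokens[s:e]) for s, e in chunks)  # CHNK format

print(tok)      # Kyō | wa | nante | kimochiii | hi | nanda | !
print(tok_pos)
print(chnk)     # Kyō_wa | nante | kimochiii | hi_nanda_!
```

The DEP format additionally prefixes each chunk with its phrase number and the depth of its dependency relation, as shown in the last row of Table 5.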
Table 6. Outline of all datasets applied in this research.

Manual analysis datasets:

             Great Earthquake         Lower House Elections    Upper House  Ontake Volcano
             Preliminary  Additional  (main)      Additional   Elections    Eruption
primary      1539         183         1317        711          449          129
sesquiary    0            0           0           933          849          513
secondary    2083         148         186         286          194          232
other        2227         269         0           70           0            0
SUM          5849         600         1503        2000         1492         874

Balanced datasets for automatic classification: 100 tweets per class (primary, sesquiary, secondary) were sampled from three of the datasets (300 tweets each). The Ontake Volcano Eruption dataset (primary 129, sesquiary 513, secondary 232; SUM 874) was used for validation.
Table 7. Overview of periods from when the datasets were collected.

Data Type                     Period                         Search Query
Great Earthquake ※ [19]      11 March–17 March 2011         (none)
Mt. Ontake ※ [19]            27 September–6 October 2014    #Ontake-san (Mount Ontake)
Lower House Election ※ [19]  2 December–14 December 2014    #sōsenkyo (general elections)
Heavy_rain                    4 July–8 July 2020             #gōu (heavy rains)
Typhoon                       2 September–6 September 2020   #taifū (typhoon), #taifū9gō (typhoon no. 9), #taifū10gō (typhoon no. 10)

※ data collected previously by Fukushima et al. [19] (see Section 3 for details)
Table 8. Overview of the two newest datasets from 2020.

             Heavy Rains  Typhoons
primary      350          350
sesquiary    350          350
secondary    350          350
Table 9. Contingency tables for the top three highest scoring classifiers (rows = correct class, columns = classified as).

2-layer CNN / shallow parsing (chunks)
correct      primary  secondary  sesquiary
primary      298      1          1
secondary    2        297        1
sesquiary    4        0          296

2-layer CNN / deep parsing with named entities
correct      primary  secondary  sesquiary
primary      299      1          0
secondary    6        292        2
sesquiary    2        1          297

2-layer CNN / lemmas with parts-of-speech
correct      primary  secondary  sesquiary
primary      291      6          3
secondary    10       275        15
sesquiary    10       11         279
Table 10. Results of all applied classifiers (scores averaged for primary, sesquiary, and secondary prediction, calculated separately). Each cell gives BoW/n-gram scores for the feature sets, left to right: LEM+POS, TOK+POS, LEM, TOK, CHUNK+NER, POS, DEP+NER, DEP, CHUNK, LEM+NER, TOK+NER.

SVM (linear)
Prec 0.725/0.751 0.734/0.759 0.731/0.713 0.739/0.730 0.636/0.640 0.389/0.526 0.571/0.585 0.611/0.691 0.643/0.701 0.636/0.686 0.668/0.707
Rec  0.728/0.751 0.737/0.758 0.733/0.713 0.741/0.731 0.564/0.523 0.398/0.532 0.498/0.458 0.510/0.381 0.557/0.490 0.637/0.685 0.670/0.707
F1   0.725/0.751 0.734/0.758 0.732/0.713 0.739/0.731 0.545/0.485 0.370/0.527 0.452/0.398 0.465/0.263 0.534/0.442 0.634/0.685 0.667/0.707
Acc  0.728/0.751 0.737/0.758 0.733/0.713 0.741/0.731 0.564/0.523 0.398/0.532 0.498/0.458 0.510/0.381 0.557/0.490 0.637/0.685 0.670/0.707

SVM (polynomial)
Prec 0.446/0.541 0.446/0.705 0.446/0.535 0.446/0.540 0.111/0.111 0.297/0.111 0.111/0.111 0.111/0.111 0.111/0.111 0.221/0.112 0.111/0.537
Rec  0.346/0.352 0.346/0.350 0.347/0.348 0.344/0.351 0.333/0.333 0.344/0.333 0.333/0.333 0.333/0.333 0.333/0.333 0.333/0.334 0.333/0.351
F1   0.192/0.243 0.192/0.203 0.194/0.241 0.190/0.242 0.167/0.167 0.192/0.167 0.167/0.167 0.167/0.167 0.167/0.167 0.213/0.167 0.167/0.271
Acc  0.346/0.352 0.346/0.350 0.347/0.348 0.344/0.351 0.333/0.333 0.344/0.333 0.333/0.333 0.333/0.333 0.333/0.333 0.333/0.334 0.333/0.351

SVM (radial)
Prec 0.745/0.733 0.744/0.732 0.747/0.745 0.758/0.739 0.599/0.578 0.405/0.485 0.546/0.511 0.524/0.510 0.584/0.567 0.622/0.732 0.714/0.705
Rec  0.608/0.650 0.603/0.649 0.606/0.634 0.611/0.630 0.421/0.412 0.409/0.474 0.408/0.399 0.377/0.376 0.427/0.414 0.475/0.588 0.514/0.578
F1   0.592/0.634 0.587/0.633 0.591/0.620 0.597/0.615 0.334/0.320 0.390/0.438 0.316/0.301 0.258/0.256 0.337/0.322 0.387/0.569 0.475/0.555
Acc  0.608/0.650 0.603/0.649 0.606/0.634 0.611/0.630 0.421/0.412 0.409/0.474 0.408/0.399 0.377/0.376 0.427/0.414 0.475/0.588 0.514/0.578

SVM (sigmoid)
Prec 0.746/0.748 0.749/0.742 0.742/0.759 0.746/0.763 0.737/0.757 0.399/0.527 0.633/0.677 0.671/0.670 0.735/0.750 0.542/0.747 0.728/0.758
Rec  0.572/0.581 0.577/0.582 0.559/0.573 0.566/0.577 0.388/0.380 0.402/0.476 0.417/0.384 0.399/0.363 0.392/0.389 0.352/0.597 0.509/0.589
F1   0.549/0.560 0.555/0.561 0.533/0.551 0.541/0.556 0.274/0.260 0.364/0.421 0.324/0.267 0.294/0.226 0.282/0.276 0.236/0.562 0.466/0.559
Acc  0.572/0.581 0.577/0.582 0.559/0.573 0.566/0.577 0.388/0.380 0.402/0.476 0.417/0.384 0.399/0.363 0.392/0.389 0.352/0.597 0.509/0.589

Naïve Bayes
Prec 0.680/0.747 0.681/0.747 0.666/0.722 0.669/0.706 0.608/0.623 0.412/0.462 0.606/0.608 0.705/0.558 0.623/0.591 0.671/0.716 0.659/0.704
Rec  0.681/0.723 0.681/0.724 0.670/0.707 0.671/0.694 0.567/0.573 0.417/0.473 0.507/0.509 0.502/0.474 0.541/0.523 0.670/0.708 0.660/0.699
F1   0.664/0.708 0.665/0.710 0.652/0.692 0.651/0.678 0.535/0.543 0.405/0.461 0.448/0.451 0.419/0.411 0.502/0.481 0.662/0.701 0.650/0.690
Acc  0.681/0.723 0.681/0.724 0.670/0.707 0.671/0.694 0.567/0.573 0.417/0.473 0.507/0.509 0.502/0.474 0.541/0.523 0.670/0.708 0.660/0.699

JRip
Prec 0.721/0.826 0.734/0.818 0.757/0.758 0.782/0.767 0.737/0.577 0.371/0.481 0.620/0.561 0.662/0.283 0.648/0.693 0.719/0.697 0.721/0.788
Rec  0.707/0.771 0.708/0.778 0.724/0.723 0.732/0.727 0.388/0.402 0.348/0.494 0.423/0.409 0.477/0.420 0.442/0.440 0.687/0.668 0.689/0.724
F1   0.695/0.767 0.697/0.774 0.713/0.708 0.718/0.713 0.274/0.298 0.310/0.477 0.322/0.303 0.382/0.334 0.344/0.344 0.669/0.655 0.670/0.704
Acc  0.707/0.771 0.708/0.778 0.724/0.723 0.732/0.727 0.388/0.402 0.348/0.494 0.423/0.409 0.477/0.420 0.442/0.440 0.687/0.668 0.689/0.724

J48
Prec 0.727/0.783 0.730/0.807 0.747/0.749 0.741/0.757 0.622/0.622 0.415/0.501 0.538/0.531 0.353/0.353 0.626/0.626 0.735/0.717 0.708/0.724
Rec  0.722/0.774 0.728/0.800 0.743/0.747 0.737/0.756 0.404/0.404 0.414/0.508 0.412/0.412 0.481/0.481 0.502/0.502 0.735/0.718 0.710/0.726
F1   0.723/0.777 0.728/0.802 0.744/0.745 0.738/0.756 0.295/0.295 0.413/0.502 0.310/0.311 0.376/0.376 0.430/0.430 0.733/0.717 0.708/0.725
Acc  0.722/0.774 0.728/0.800 0.743/0.747 0.737/0.756 0.404/0.404 0.414/0.508 0.412/0.412 0.481/0.481 0.502/0.502 0.735/0.718 0.710/0.726

kNN (k = 1)
Prec 0.620/0.712 0.623/0.711 0.623/0.671 0.610/0.662 0.610/0.678 0.412/0.485 0.586/0.689 0.726/0.445 0.683/0.742 0.554/0.608 0.540/0.634
Rec  0.610/0.692 0.609/0.690 0.613/0.644 0.594/0.634 0.503/0.490 0.417/0.490 0.473/0.420 0.357/0.336 0.458/0.434 0.539/0.589 0.527/0.611
F1   0.612/0.697 0.611/0.695 0.617/0.648 0.597/0.638 0.450/0.426 0.412/0.479 0.405/0.329 0.218/0.171 0.396/0.355 0.525/0.590 0.520/0.612
Acc  0.610/0.692 0.609/0.690 0.613/0.644 0.594/0.634 0.503/0.490 0.417/0.490 0.473/0.420 0.357/0.336 0.458/0.434 0.539/0.589 0.527/0.611

Random Forest
Prec 0.760/0.821 0.740/0.822 0.764/0.799 0.768/0.800 0.622/0.576 0.421/0.570 0.557/0.562 0.635/0.705 0.652/0.685 0.746/0.783 0.781/0.779
Rec  0.756/0.812 0.739/0.814 0.758/0.791 0.764/0.793 0.560/0.526 0.422/0.579 0.487/0.461 0.500/0.392 0.570/0.503 0.737/0.779 0.774/0.776
F1   0.745/0.813 0.725/0.816 0.749/0.792 0.757/0.793 0.540/0.520 0.420/0.566 0.442/0.420 0.417/0.275 0.537/0.454 0.731/0.778 0.772/0.776
Acc  0.756/0.812 0.739/0.814 0.758/0.791 0.764/0.793 0.560/0.526 0.422/0.579 0.487/0.461 0.500/0.392 0.570/0.503 0.737/0.779 0.774/0.776

CNN (1 hidden layer)
Prec 0.770/0.822 0.787/0.820 0.769/0.801 0.781/0.789 0.563/0.591 0.418/0.558 0.516/0.556 0.550/0.544 0.601/0.589 0.762/0.771 0.764/0.791
Rec  0.769/0.819 0.787/0.818 0.769/0.798 0.782/0.786 0.556/0.569 0.420/0.564 0.507/0.516 0.526/0.509 0.587/0.562 0.761/0.771 0.764/0.790
F1   0.766/0.819 0.785/0.818 0.767/0.798 0.781/0.786 0.558/0.565 0.417/0.558 0.504/0.508 0.517/0.483 0.585/0.553 0.760/0.771 0.763/0.790
Acc  0.769/0.819 0.787/0.818 0.769/0.798 0.782/0.786 0.556/0.569 0.420/0.564 0.507/0.516 0.526/0.509 0.587/0.562 0.761/0.771 0.764/0.790

CNN (2 hidden layers)
Prec 0.939/0.845 0.893/0.845 0.919/0.822 0.862/0.821 0.914/N/A 0.333/0.548 0.987/N/A 0.818/N/A 0.990/N/A 0.913/0.765 0.910/0.835
Rec  0.939/0.843 0.884/0.844 0.919/0.821 0.840/0.821 0.910/N/A 0.333/0.551 0.987/N/A 0.781/N/A 0.990/N/A 0.910/0.739 0.906/0.836
F1   0.939/0.844 0.886/0.844 0.919/0.821 0.842/0.821 0.910/N/A 0.306/0.549 0.987/N/A 0.779/N/A 0.990/N/A 0.910/0.742 0.906/0.835
Acc  0.939/0.843 0.884/0.844 0.919/0.821 0.840/0.821 0.910/N/A 0.333/0.551 0.987/N/A 0.781/N/A 0.990/N/A 0.910/0.739 0.906/0.836
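Table 10 above compares, for each preprocessing variant, two feature representations: bag-of-words and n-grams. The following is a minimal standard-library sketch of the difference between the two (illustrative only; the tokens reuse the example sentence from Table 5, and the actual system builds these features over the full tweet datasets):

```python
from collections import Counter

def bag_of_words(tokens):
    # Unigram counts: word order is discarded entirely.
    return Counter(tokens)

def ngram_counts(tokens, n=2):
    # Counts of contiguous token n-grams: local word order is preserved.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = ["Kyō", "wa", "nante", "kimochiii", "hi", "nanda", "!"]
print(bag_of_words(tokens)["hi"])           # 1
print(ngram_counts(tokens)[("Kyō", "wa")])  # 1
print(len(ngram_counts(tokens)))            # 6 bigrams for 7 tokens
```

In both cases the resulting counts can then be weighted (e.g., with TF-IDF) before being fed to the classifiers.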
Table 11. Confusion matrices and results of validation on new data (rows = correct class, columns = classified as).

Ontake eruption
correct      primary  secondary  sesquiary   Precision  Recall  F1-score
primary      195      5          9           0.894      0.933   0.913
secondary    6        213        52          0.835      0.786   0.810
sesquiary    17       37         340         0.848      0.863   0.855
Weighted average                             0.855      0.856   0.855     Accuracy: 0.856

Heavy rains
correct      primary  secondary  sesquiary   Precision  Recall  F1-score
primary      360      6          5           0.970      0.970   0.970
secondary    1        345        0           0.975      0.997   0.986
sesquiary    10       3          320         0.985      0.961   0.973
Weighted average                             0.976      0.976   0.976     Accuracy: 0.976

Typhoon
correct      primary  secondary  sesquiary   Precision  Recall  F1-score
primary      361      4          6           0.973      0.973   0.973
secondary    4        342        0           0.983      0.988   0.986
sesquiary    6        2          325         0.982      0.976   0.979
Weighted average                             0.979      0.979   0.979     Accuracy: 0.979
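The per-class scores in Table 11 follow directly from the confusion matrices. As a sanity check, the following small standard-library sketch recomputes precision, recall, F1, and accuracy from the Ontake-eruption matrix above (rows are the correct classes, columns the predicted ones, in the order primary, secondary, sesquiary):

```python
# Recompute per-class precision, recall, F1, and overall accuracy
# from a square confusion matrix (rows = correct, columns = predicted).

def prf_from_confusion(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]  # predicted totals
    row_sums = [sum(row) for row in matrix]                       # true totals
    prec = [matrix[i][i] / col_sums[i] for i in range(n)]
    rec = [matrix[i][i] / row_sums[i] for i in range(n)]
    f1 = [2 * p * r / (p + r) for p, r in zip(prec, rec)]
    acc = sum(matrix[i][i] for i in range(n)) / sum(row_sums)
    return prec, rec, f1, acc

# Ontake-eruption validation matrix from Table 11.
ontake = [
    [195, 5, 9],    # primary
    [6, 213, 52],   # secondary
    [17, 37, 340],  # sesquiary
]
prec, rec, f1, acc = prf_from_confusion(ontake)
print([round(p, 3) for p in prec])  # [0.894, 0.835, 0.848]
print([round(r, 3) for r in rec])   # [0.933, 0.786, 0.863]
print(round(acc, 3))                # 0.856
```

The weighted averages reported in Table 11 weight each per-class score by that class's number of true instances (209, 271, and 394 here, i.e., 874 tweets in total).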
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
