Really Vague? Automatically Identify the Potential False Vagueness within the Context of Documents
Abstract
1. Introduction
- We present the first study that considers the vagueness of privacy policies within the context of whole documents, rather than at the level of individual words or sentences.
- We categorize the patterns of sentences that can, to varying degrees, alleviate the vagueness of (potentially) false vague sentences or phrases through exemplification or explanation.
- We present the first study on detecting potential false vagueness and the related supporting evidence, which can help improve the vagueness-detection results of existing work on privacy policies.
- Experiments show that our F·Vague-Detector performs well in detecting both potential false vagueness and the supporting evidence.
2. Related Work and Background
2.1. The Definition of Vagueness
2.2. Preventing Vagueness in Privacy Policy
2.3. Automatic Detection of Vagueness in Privacy Policy
3. The Framework of Our F·Vague-Detector
3.1. Data Source
3.2. Research Questions
- RQ1: What proportion of the sentences manually annotated as vague in Lebanoff's corpus [9] have at least one clarifying sentence that reduces their vagueness?
- RQ2: Do the supporting sentences follow certain patterns? If so, what are the common ones?
- RQ3: To what extent can our approach automatically identify the potentially false vagueness and at least one piece of supporting evidence (sentence)?
4. Defining the Patterns of Supporting Sentences
4.1. Manual Data Labeling and Analysis Process
4.2. The Results of Manual Annotations: The Patterns of Supporting Sentences of Potentially False Vagueness
- Vague sentence: Although Google Analytics plants a persistent Cookie on your web browser to identify you as a unique user the next time you visit the Website, the Cookie can not be used by anyone but Google (from the privacy policy of the Horoscope company).
- Vague words manually labeled: persistent cookie, unique, persistent
- Supplementary supporting pattern:
  - (a) processing your bookings / orders and managing your account with us;
  - (b) marketing our services or related products;
  - (c) for compiling aggregate statistics about you and maintaining contact lists for correspondence, commercial statistics and analysis on Site usage;
  - (d) for identity, verification and records.
- Exemplification supporting pattern:
  - Vague sentence: All the information you post may be accessible to anyone with Internet access, and any Personal Information you include in your posting may be read, collected, and used by others.
  - One exemplification supporting sentence: For example, if you post your email address along with a public restaurant review, you may receive unsolicited messages from other parties.
- Interpretation supporting pattern:
  - Vague sentence: We will retain your Personal Information for as long as your account is active or as needed to provide you services and to maintain a record of your transactions for financial reporting purposes.
  - Vague word: Personal Information
  - One interpretative supporting sentence: Personal Information means the information about you that specifically identifies you or, when combined with other information we have, can be used to identify you.
4.3. Addressing RQ1 and RQ2
4.3.1. Experimental Data
4.3.2. Inter-Annotator Agreement Analysis of the Supporting-Pattern Annotations on the Testing Dataset
4.3.3. Addressing RQ1 and RQ2
- For the training dataset, about 39% of the isolated sentences manually marked as vague in [9] are potentially false vague, i.e., they have at least one supporting sentence. These supporting sentences cover all three patterns that we focus on in this study: supplementary support accounts for 20.75%, exemplification for 5.66%, and interpretative support for 73.58%.
- For the testing dataset, about 29% of the isolated sentences manually marked as vague in [9] are potentially false vague with at least one supporting sentence. These supporting sentences also cover all three patterns: supplementary support accounts for 24.36%, exemplification for 17.95%, and interpretative support for 57.69%.
5. F·Vague-Detector: Our Approach for Automatically Detecting the Potential False Vagueness and the Supporting Evidence
5.1. Identifying Interpretive Supporting Sentences and Their Potentially False Vague Evidence
- Syntax structure-based heuristic rules. Through analysis of the content and the parse trees of interpretative sentences, we find some rules about the occurrence of interpreted words. The interpreted word and its modifiers usually appear before the matching word (if it exists), and the detailed explanation usually comes after it. The modifiers include noun phrases, subordinate clauses, and so on. We also analyzed features for interpreted-word identification from the parse trees of interpretative sentences. We find that the interpreted words are usually noun phrases (NPs) or demonstrative pronouns (DETs), and their occurrence in the parse tree presents the following characteristics. Rule I: the modifiers of interpreted words (usually NPs) may be a prepositional phrase (PP) or verbal phrase (VP), and the NP, VP, and PP are usually at the same level of the parse tree; in addition, the matching words closely follow the modifiers (if they exist). Therefore, if the parse tree of the interpretative sentence contains a subtree satisfying the structure NP + PP∥VP + matching words, that NP is the interpreted word of the sentence. Rule II: the NP closest to the matching words is usually the interpreted word. There may be multiple NPs at the same distance from the matching words, some nested within each other; we select the longest one because it carries more complete semantics. If both Rule I and Rule II are satisfied, we select the result of Rule I. For noun phrases, we find their extension based on the parse tree of the current sentence, while for demonstrative pronouns, we perform coreference resolution over the current sentence and its neighbors with the Stanford NLP tool; specifically, we use the noun phrase composed of the most words in the coreference chain as the antecedent. A code sketch of these two rules follows.
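To make Rules I and II concrete, here is a minimal sketch that applies them to a constituency parse represented as an nltk.Tree (the paper uses the Stanford parser; the cue-word set MATCHING_WORDS and all helper names are our illustrative assumptions, not the authors' implementation):

```python
from nltk.tree import Tree

# Assumed set of "matching words" signaling an interpretation; the paper's
# exact list is not reproduced here.
MATCHING_WORDS = {"means", "mean", "includes", "include", "refers"}

def np_spans(tree):
    """Collect (start, end, text) leaf spans for every NP subtree."""
    spans = []
    def walk(t, offset):
        if isinstance(t, str):          # a leaf token occupies one position
            return offset + 1
        start = offset
        for child in t:
            offset = walk(child, offset)
        if t.label() == "NP":
            spans.append((start, offset, " ".join(t.leaves())))
        return offset
    walk(tree, 0)
    return spans

def rule_one(tree):
    """Rule I: a subtree whose children read NP + (PP|VP) + matching word;
    that NP is taken as the interpreted word."""
    for sub in tree.subtrees():
        kids = [k for k in sub if isinstance(k, Tree)]
        for a, b, c in zip(kids, kids[1:], kids[2:]):
            if (a.label() == "NP" and b.label() in ("PP", "VP")
                    and c.leaves() and c.leaves()[0].lower() in MATCHING_WORDS):
                return " ".join(a.leaves())
    return None

def rule_two(tree):
    """Rule II: among NPs ending before the matching word, prefer the closest
    one; on ties (nested NPs), prefer the longest."""
    words = [w.lower() for w in tree.leaves()]
    match = next((i for i, w in enumerate(words) if w in MATCHING_WORDS), None)
    if match is None:
        return None
    cands = [s for s in np_spans(tree) if s[1] <= match]
    if not cands:
        return None
    # closest to the matching word first, then widest span
    best = max(cands, key=lambda s: (s[1], s[1] - s[0]))
    return best[2]

def interpreted_word(tree):
    """Rule I takes precedence over Rule II when both fire."""
    return rule_one(tree) or rule_two(tree)

t = Tree.fromstring(
    "(S (NP (JJ Personal) (NN Information)) (VP (VBZ means) "
    "(NP (DT the) (NN information) (PP (IN about) (NP (PRP you))))))")
print(interpreted_word(t))  # -> "Personal Information" (via Rule II)
```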
- Semantic dependency-based heuristic rule. Depending purely on the syntax-based rules, without considering semantic information, is not enough for interpreted-word extraction: it misses some relationships and may cause many false positives. We therefore attempt to optimize the extraction results with semantic dependency parsing (SDP). Semantic dependency parsing analyzes the binary semantic associations between the language units (i.e., words or phrases) of a sentence. The SDP result of the Stanford NLP Group's tools is a collection of triples holding the positions of two words and their semantic dependency relation. Rule III: by analyzing the SDP results, we find that when the words include or mean occur as the root verb, the subject of the whole sentence is usually the interpreted word; the corresponding SDP relation is nsubj between the core noun of the interpreted word and the matching word. However, since single words, as components of phrases, often carry only part of the semantics, we need to extend them to their enclosing noun phrases. For example, from the sentence "Personal information includes your email address, last name...", we obtain the relation triple (2, 3, nsubj), where 2 and 3 are the positions of the words information and includes. However, personal information, rather than information alone, should be the interpreted word, so we extend the core word information to its full phrase in this sentence (i.e., personal information). A sketch of this rule follows.
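A rough equivalent of Rule III, sketched with spaCy's dependency parser standing in for the Stanford SDP triples; the cue-lemma set and the noun-chunk expansion are our assumptions about one reasonable realization:

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

CUE_LEMMAS = {"include", "mean"}  # root verbs that signal an interpretation

def rule_three(sentence: str):
    """Return the interpreted word: the nsubj of a cue root verb, extended
    from the core noun to its enclosing noun phrase."""
    doc = nlp(sentence)
    for token in doc:
        if (token.dep_ == "nsubj" and token.head.dep_ == "ROOT"
                and token.head.lemma_ in CUE_LEMMAS):
            # extend the core noun (e.g., "information") to its noun chunk
            # (e.g., "Personal information")
            for chunk in doc.noun_chunks:
                if chunk.root == token:
                    return chunk.text
            return token.text
    return None

print(rule_three("Personal information includes your email address and last name."))
# expected: "Personal information"
```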
- Text pattern-based heuristic rules. Finally, we propose two rules for two special sentence patterns to further optimize the extraction results. Rule IV: the candidate interpretative sentence may start with noun phrase + colon; in this case, the whole remainder of the sentence explains the noun phrase. For example, in the sentence Transaction Information: information you provide when you interact with us and the Site, such as the Groupon vouchers you are interested in, purchase and redeem..., the content after the colon is the explanation of the transaction information. Rule V: when the candidate interpretative sentence contains the pattern following + noun phrase + colon, the part after the colon is the explanation of the noun phrase. For example, in the sentence The information collected may include but is not limited to the following personal information: email address, last name..., the items after the colon are enumerations of personal information. A regex sketch of both rules follows.
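Rules IV and V reduce to surface patterns. The sketch below uses a crude bounded word-run as a stand-in for the NP (a real implementation would take NP boundaries from the parser), so the regexes are illustrative only:

```python
import re

# Crude stand-in for an NP: a run of one to five words. A real implementation
# would take the NP boundaries from the parse tree instead.
NP = r"(?:[A-Za-z][\w-]*\s+){0,4}[A-Za-z][\w-]*"

RULE_IV = re.compile(rf"^\s*({NP})\s*:\s*(.+)$")                 # NP + colon at sentence start
RULE_V = re.compile(rf"\bfollowing\s+({NP})\s*:\s*(.+)$", re.I)  # following + NP + colon

def text_pattern_rules(sentence: str):
    """Return (interpreted word, explanation) if Rule IV or Rule V applies."""
    m = RULE_V.search(sentence) or RULE_IV.match(sentence)
    return (m.group(1).strip(), m.group(2).strip()) if m else None

print(text_pattern_rules(
    "Transaction Information: information you provide when you interact with us and the Site."))
# -> ('Transaction Information', 'information you provide when you interact with us and the Site.')
print(text_pattern_rules(
    "The information collected may include but is not limited to the "
    "following personal information: email address, last name."))
# -> ('personal information', 'email address, last name.')
```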
5.2. Identifying the Supplementary Supporting Sentences and Their Potentially False Vague Evidence
5.3. Identifying the Exemplification Supporting Sentences and Their Potentially False Vague Sentence
6. Addressing RQ3: The Effectiveness Evaluation of F·Vague-Detector
6.1. Metrics
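As a reading aid for Tables 4 and 5, the reported values are consistent with the standard definitions below. The reading of the four pair counts is our inference from the numbers rather than a quotation: UPairs as the manually labeled (ground-truth) pairs, APairs as the automatically identified pairs, CPairs as the correctly recovered ground-truth pairs, and RPairs as the automatically identified pairs that are right.

$$
\mathrm{Recall} = \frac{\mathit{CPairs}}{\mathit{UPairs}}, \qquad
\mathrm{Precision} = \frac{\mathit{RPairs}}{\mathit{APairs}}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$

For instance, the interpretative row of Table 4 gives 46/78 = 58.97% recall and 150/159 = 94.34% precision, matching the reported values.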
6.2. Results and Analysis
- From the overall evaluation on the training dataset (i.e., the last row of Table 4), the overall recall is 66.98%, the precision is 94.85%, and the F1 value is 78.51%. On the testing dataset (i.e., the last row of Table 5), the overall recall is 67.95%, the precision is 70.59%, and the F1 value is 69.24%. The results on both datasets look good, although there is a certain gap between the training and testing results.
- For the identification of pairs of 〈potential false vagueness, starting supplementary sentence〉:
- Versatility analysis: The gap between the recall of our F·Vague-Detector on the training and testing sets is 3.41%, the precision gap is 0.00%, and the F1 gap is 1.91%. The performance of our algorithm on the testing set is very close to that on the training set, which shows that our rules for identifying the starting supplementary supporting sentence and the corresponding potentially false vague sentences generalize very well.
- For the identification of the pairs of 〈potential false vagueness, enumeration supplementary sentence〉:
- Versatility analysis: There is no gap between the recall of our F·Vague-Detector on the training and testing datasets. For precision and F1, the gaps are 22.22% and 12.14%, respectively. Although the recall values show that our approach generalizes well in identifying the enumerated items, there is still room for improvement. By manually analyzing the pairs of 〈potential false vagueness, enumerated items〉, we found that the primary difficulty is judging the final item of an enumeration: the enumeration and the paragraph that follows it sometimes have a similar text structure and context, which makes them hard to distinguish. We will improve the identification rules in future work.
- For the identification of the pairs of 〈potential false vagueness, exemplification supporting sentence〉:
- Identification effectiveness: On the training set, the recall is 83.33%, the precision reaches 100%, and the F1 is 90.91% (see Table 4), while on the testing set, the recall is 71.43%, the precision is 100%, and the F1 is 83.33%.
- Versatility analysis: The gap between the recall of our approach on the training and testing sets is 11.90%, the precision gap is 0%, and the F1 gap is 7.58%. The performance of our approach in identifying the pairs of 〈potential false vagueness, exemplification supporting sentence〉 is thus very close on the two datasets, showing good generality. However, there is still room to improve: by manually analyzing the misidentifications, we found that the primary reason is that our rules are too restrictive, being limited to sentences containing specific keywords, so we miss some "implicit" exemplifications. We plan to improve the identification algorithm for this kind of sentence in future work.
- For the identification of the pairs of 〈potential false vagueness, interpretative supporting sentence〉:
- Identification effectiveness: The recall of our approach on the training set is 58.97%, the precision reaches 94.34%, and the F1 value is 72.58%; on the testing set, the recall is 57.78%, the precision is 61.54%, and the F1 is 59.60%.
- Versatility analysis: The gap between the recall of our approach on the training and testing sets is 1.19%, the precision gap is 32.80%, and the F1 gap is 12.98%. Although the performance of our approach on the testing data is acceptable, this gap is the biggest among all the supporting patterns. Through manual analysis, we find that the primary problem lies in matching the vague terms with the interpreted items: both synonym identification and anaphora resolution remain hard, imperfectly solved problems in NLP, even with large datasets. We plan to improve the results by exploring other features, such as the semantic similarity between the interpretative and the vague sentences.
7. Threats to Validity
8. Conclusions
- Evaluate our F·Vague-Detector on many more privacy documents. In this pilot study, we only use 30 randomly selected privacy policies from the corpus of Lebanoff et al. [9].
- Evaluate the vagueness of whole documents rather than single words or sentences. We hope a document-level vagueness metric will help improve the quality of privacy policies.
- Recommend the crucial but unspecified items in a privacy policy. Such items usually appear frequently, and without a clear definition they hinder people's understanding of the privacy policy.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Torre, D.; Abualhaija, S.; Sabetzadeh, M.; Briand, L.C.; Baetens, K.; Goes, P.; Forastier, S. An AI-assisted Approach for Checking the Completeness of Privacy Policies Against GDPR. In Proceedings of the 28th IEEE International Requirements Engineering Conference, RE 2020, Zurich, Switzerland, 31 August–4 September 2020; Breaux, T.D., Zisman, A., Fricker, S., Glinz, M., Eds.; IEEE: Piscataway, NJ, USA, 2020; pp. 136–146.
- Hosseini, M.B.; Breaux, T.D.; Slavin, R.; Niu, J.; Wang, X. Analyzing privacy policies through syntax-driven semantic analysis of information types. Inf. Softw. Technol. 2021, 138, 106608.
- Caramujo, J.; da Silva, A.R.; Monfared, S.; Ribeiro, A.; Calado, P.; Breaux, T.D. RSL-IL4Privacy: A domain-specific language for the rigorous specification of privacy policies. Requir. Eng. 2019, 24, 1–26.
- Breaux, T.D.; Hibshi, H.; Rao, A. Eddy, a formal language for specifying and analyzing data flow specifications for conflicting privacy requirements. Requir. Eng. 2014, 19, 281–307.
- Bhatia, J.; Evans, M.C.; Breaux, T.D. Identifying incompleteness in privacy policy goals using semantic frames. Requir. Eng. 2019, 24, 291–313.
- Massey, A.K.; Eisenstein, J.; Anton, A.I.; Swire, P.P. Automated text mining for requirements analysis of policy documents. In Proceedings of the 2013 21st IEEE International Requirements Engineering Conference (RE), Rio de Janeiro, Brazil, 15–19 July 2013; pp. 4–13.
- Bhatia, J.; Breaux, T.D.; Reidenberg, J.R.; Norton, T.B. A theory of vagueness and privacy risk perception. In Proceedings of the 2016 IEEE 24th International Requirements Engineering Conference (RE), Beijing, China, 12–16 September 2016; pp. 26–35.
- Liu, F.; Fella, N.L.; Liao, K. Modeling language vagueness in privacy policies using deep neural networks. In Proceedings of the 2016 AAAI Fall Symposium Series, Arlington, VA, USA, 17–19 November 2016.
- Lebanoff, L.; Liu, F. Automatic detection of vague words and sentences in privacy policies. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 3508–3517.
- Liu, F.; Ramanath, R.; Sadeh, N.; Smith, N.A. A step towards usable privacy policy: Automatic alignment of privacy statements. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland, 23–29 August 2014; pp. 884–894.
- Martin, P.Y.; Turner, B.A. Grounded Theory and Organizational Research. J. Appl. Behav. Sci. 1986, 22, 141–157.
- Van Deemter, K. Not Exactly: In Praise of Vagueness; Oxford University Press: Oxford, UK, 2012.
- Keefe, R. Theories of Vagueness; Cambridge University Press: Cambridge, UK, 2000.
- Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
- Kempson, R.M. Semantic Theory; Cambridge University Press: Cambridge, UK, 1977.
- Cranor, L. Web Privacy with P3P; O'Reilly Media, Inc.: Sebastopol, CA, USA, 2002.
- Cranor, L.F.; Guduru, P.; Arjula, M. User interfaces for privacy agents. ACM Trans. Comput.-Hum. Interact. (TOCHI) 2006, 13, 135–178.
- P3P Implementations. Available online: http://www.w3.org/P3P/implementations (accessed on 28 October 2019).
- Galle, M.; Christofi, A.; Elsahar, H. The Case for a GDPR-specific Annotated Dataset of Privacy Policies. In Proceedings of the AAAI Workshop, Honolulu, HI, USA, 27–28 January 2019.
- Sadeh, N.; Acquisti, A.; Breaux, T.D.; Cranor, L.F.; McDonald, A.M.; Reidenberg, J.R.; Smith, N.A.; Liu, F.; Russell, N.C.; Schaub, F.; et al. The Usable Privacy Policy Project; Technical Report CMU-ISR-13-119; Institute for Software Research, School of Computer Science, Carnegie Mellon University: Pittsburgh, PA, USA, 2013.
- Ammar, W.; Wilson, S.; Sadeh-Koniecpol, N.; Smith, N.A. Automatic Categorization of Privacy Policies: A Pilot Study; Technical Report CMU-LTI-12-019; School of Computer Science, Language Technologies Institute: Pittsburgh, PA, USA, 2012.
- Wilson, S.; Schaub, F.; Dara, A.A.; Liu, F.; Cherivirala, S.; Leon, P.G.; Andersen, M.S.; Zimmeck, S.; Sathyendra, K.M.; Russell, N.C.; et al. The creation and analysis of a website privacy policy corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, 7–12 August 2016; pp. 1330–1340.
- Wilson, S.; Schaub, F.; Ramanath, R.; Sadeh, N.; Liu, F.; Smith, N.A.; Liu, F. Crowdsourcing Annotations for Websites' Privacy Policies: Can It Really Work? In Proceedings of the 25th International Conference on World Wide Web, International World Wide Web Conferences Steering Committee, Montreal, QC, Canada, 11–15 April 2016; pp. 133–143.
- Sathyendra, K.M.; Wilson, S.; Schaub, F.; Zimmeck, S.; Sadeh, N. Identifying the provision of choices in privacy policy text. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 9–11 September 2017; pp. 2774–2779.
- Boyd, S.; Zowghi, D.; Farroukh, A. Measuring the expressiveness of a constrained natural language: An empirical study. In Proceedings of the 13th IEEE International Conference on Requirements Engineering (RE'05), Paris, France, 29 August–2 September 2005; pp. 339–352.
- Yang, H.; de Roeck, A.; Gervasi, V.; Willis, A.; Nuseibeh, B. Analysing anaphoric ambiguity in natural language requirements. Requir. Eng. 2011, 16, 163–189.
- Cruz, B.D.; Jayaraman, B.; Dwarakanath, A.; McMillan, C. Detecting Vague Words & Phrases in Requirements Documents in a Multilingual Environment. In Proceedings of the 2017 IEEE 25th International Requirements Engineering Conference (RE), Lisbon, Portugal, 4–8 September 2017; pp. 233–242.
- Asadabadi, M.R.; Chang, E.; Sharpe, K. Requirement ambiguity and fuzziness in large-scale projects: The problem and potential solutions. Appl. Soft Comput. 2020, 90, 106148.
- Tjong, S.F.; Berry, D.M. The Design of SREE—A Prototype Potential Ambiguity Finder for Requirements Specifications and Lessons Learned. In Requirements Engineering: Foundation for Software Quality; Doerr, J., Opdahl, A.L., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 80–95.
- Yang, H.; De Roeck, A.; Gervasi, V.; Willis, A.; Nuseibeh, B. Speculative requirements: Automatic detection of uncertainty in natural language requirements. In Proceedings of the 2012 20th IEEE International Requirements Engineering Conference (RE), Chicago, IL, USA, 24–28 September 2012; pp. 11–20.
- Guélorget, P.; Icard, B.; Gadek, G.; Gahbiche, S.; Gatepaille, S.; Atemezing, G.; Égré, P. Combining vagueness detection with deep learning to identify fake news. In Proceedings of the 2021 IEEE 24th International Conference on Information Fusion (FUSION), Sun City, South Africa, 1–4 November 2021; pp. 1–8.
- Onwuegbuzie, A.J.; Frels, R.K.; Hwang, E. Mapping Saldana's Coding Methods onto the Literature Review Process. J. Educ. Issues 2016, 2, 130–150.
- Saldana, J. The Coding Manual for Qualitative Researchers; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2009.
- Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46.
- Viera, A.J.; Garrett, J.M. Understanding interobserver agreement: The kappa statistic. Fam. Med. 2005, 37, 360–363.
- Hazem, A.; Daille, B. Semi-compositional method for synonym extraction of multi-word terms. In Proceedings of the 9th Edition of the Language Resources and Evaluation Conference (LREC 2014), Reykjavik, Iceland, 26–31 May 2014.
- Frantzi, K.; Ananiadou, S.; Mima, H. Automatic recognition of multi-word terms: The C-value/NC-value method. Int. J. Digit. Libr. 2000, 3, 115–130.
- Hazem, A.; Daille, B. Word Embedding Approach for Synonym Extraction of Multi-Word Terms. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, 7–12 May 2018; European Language Resources Association (ELRA): Miyazaki, Japan, 2018; pp. 297–303.
- Piao, S.; Forth, J.; Gacitua, R.; Whittle, J.; Wiggins, G. Evaluating tools for automatic concept extraction: A case study from the musicology domain. In Proceedings of the Digital Economy All Hands Meeting-Digital Futures 2010, Nottingham, UK, 11–12 October 2010. Available online: https://core.ac.uk/download/pdf/1557928.pdf (accessed on 4 April 2023).
- Frantzi, K.; Ananiadou, S. The C-value/NC-value domain independent method for multi-word term extraction. J. Nat. Lang. Process. 1999, 6, 20–27.
- Lossio-Ventura, J.A.; Jonquet, C.; Roche, M.; Teisseire, M. Combining C-value and Keyword Extraction Methods for Biomedical Terms Extraction. In Proceedings of the International Symposium on Languages in Biology and Medicine (LBM'2013), Tokyo, Japan, 12–13 December 2013; pp. 45–49.
| annotator_c | Supplementary (Authors: Yes) | Supplementary (Authors: No) | Exemplification (Authors: Yes) | Exemplification (Authors: No) | Interpretation (Authors: Yes) | Interpretation (Authors: No) | Overall (Authors: Yes) | Overall (Authors: No) |
|---|---|---|---|---|---|---|---|---|
| yes | 19 | 0 | 12 | 2 | 51 | 9 | 82 | 11 |
| no | 0 | 519 | 0 | 255 | 10 | 199 | 10 | 704 |
| kappa | 1.0 | | 0.92 | | 0.80 | | 0.87 | |
| Category | Features of the Interpretative Sentence | Interpreted Word | Priority |
|---|---|---|---|
| syntax structure-based | there is at least one NP before the matching words | the NP closest to the matching words in the parse tree | 1 |
| syntax structure-based | there is a subtree satisfying the structure NP + (PP∥VP) + matching words | the first NP (possibly with modifiers) | 2 |
| semantic dependency-based | there is an nsubj relation between some NN/DET and the matching words | the NP composed of the NN, or the NP referred to by the DET | 3 |
| text pattern-based | the sentence contains the linguistic form NP + colon | the NP before the colon | 4 |
| text pattern-based | the sentence contains the linguistic form following + NP + colon | the NP between the word "following" and the colon | 5 |
| Types of Incomplete Statements | Features |
|---|---|
| starting statement | ends with ";" |
| enumerated items | the last item ends with ".", while the others end with ";" |
| enumerated items | usually, items are ordered alphabetically or numerically and start with ordering marks, such as (a) (b) (c) or (1) (2) (3) |
| enumerated items | sometimes, items are arranged as individual paragraphs and start with short umbrella terms |
| enumerated items | if third parties are illustrated, URLs are probably the main body of the items |
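One plausible reading of the Table 3 features as code, for the supplementary-pattern detection of Section 5.2; the ordering-mark regex, the stopping logic, and the example starting sentence are our assumptions (the (a)–(c) items echo the Section 4.2 example):

```python
import re

# Assumed ordering-mark pattern: (a), (1), a., 1., i. and similar.
ORDER_MARK = re.compile(r"^\s*\(?([a-z]|\d+|[ivx]+)[\).]", re.IGNORECASE)

def is_starting_statement(sentence: str) -> bool:
    """A starting statement is an incomplete sentence ending with ';'."""
    return sentence.rstrip().endswith(";")

def collect_enumeration(lines, start_idx):
    """Collect the enumerated items following a starting statement,
    stopping at the item that ends with '.' (the last item)."""
    items = []
    for line in lines[start_idx + 1:]:
        line = line.strip()
        if not (ORDER_MARK.match(line) or line.endswith((";", "."))):
            break                      # not shaped like an item: enumeration over
        items.append(line)
        if line.endswith("."):         # last item of the enumeration
            break
    return items

policy = [
    "We use your personal data for the following purposes;",  # invented example
    "(a) processing your bookings / orders and managing your account with us;",
    "(b) marketing our services or related products;",
    "(c) for identity, verification and records.",
    "We may also share data with partners.",
]
if is_starting_statement(policy[0]):
    print(collect_enumeration(policy, 0))
# -> the three (a)-(c) items, stopping at the '.'-terminated last one
```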
Table 4. Evaluation results on the training dataset.

| Supporting Pattern | | UPairs | APairs | CPairs | RPairs | Recall | Precision | F1 |
|---|---|---|---|---|---|---|---|---|
| Supplementary | Starting sentences | 11 | 12 | 10 | 12 | 90.91% | 100% | 95.24% |
| Supplementary | Enum items | 11 | 18 | 10 | 17 | 90.91% | 94.44% | 92.64% |
| Exemplification | | 6 | 5 | 5 | 5 | 83.33% | 100% | 90.91% |
| Interpretative | | 78 | 159 | 46 | 150 | 58.97% | 94.34% | 72.58% |
| Overall | | 106 | 194 | 71 | 184 | 66.98% | 94.85% | 78.51% |
Table 5. Evaluation results on the testing dataset.

| Supporting Pattern | | UPairs | APairs | CPairs | RPairs | Recall | Precision | F1 |
|---|---|---|---|---|---|---|---|---|
| Supplementary | Starting sentences | 8 | 7 | 7 | 7 | 87.50% | 100% | 93.33% |
| Supplementary | Enum items | 11 | 18 | 10 | 13 | 90.91% | 72.22% | 80.50% |
| Exemplification | | 14 | 12 | 10 | 12 | 71.43% | 100% | 83.33% |
| Interpretative | | 45 | 65 | 20 | 40 | 57.78% | 61.54% | 59.60% |
| Overall | | 78 | 102 | 53 | 72 | 67.95% | 70.59% | 69.24% |