“She’ll Never Be a Man” A Corpus-Based Forensic Linguistic Analysis of Misgendering Discrimination on X
Abstract
1. Introduction
- Create a dataset of tweets targeting TGCN individuals from platform X, applying a set of criteria to ensure relevance and accuracy.
- Implement an annotation scheme to classify each tweet’s polarity and evaluate the consistency of the manual annotation between two annotators.
- Examine the context and frequency of tweets that include intentional misgendering, analysing wordlists drawn from these instances.
- Evaluate the effectiveness of an automatic sentiment detection system by comparing its performance to manual annotations.
- Provide recommendations for improving automatic detection systems and addressing intentional misgendering in online discourse effectively.
2. Theoretical Background
2.1. The Concept of Harassment
2.2. The Concept of Microaggressions
2.3. The Concept of Misgendering
3. State of the Art
3.1. Descriptive Linguistics Approach to Gender Microaggressions
3.2. Corpus Linguistics Approach to Microaggressions Annotation
4. Research Questions
- RQ1: Does intentional misgendering as a form of microaggression perpetuate discrimination towards the TGNC community?
- RQ2: Does intentional misgendering typically co-occur with other forms of aggression or discriminatory language?
- RQ3: Is there a significant relationship between the presence of misgendering in tweets and their sentiment polarity?
- RQ4: Can automatic sentiment detection systems effectively identify tweets containing misgendering and expressing hatred towards transgender individuals, or is there a gap in their ability to detect this type of message?
5. Methodology
5.1. Corpus Compilation
5.1.1. Data Selection Criteria
- “Individual 1” is a transgender man famous for his work as an actor before his transition. His pronoun set is he/they.
- “Individual 2” is a transgender woman known for documenting her transition process on social media. Her pronoun set is she/they.
5.1.2. Data Extraction
5.1.3. Data Pre-Processing
- Keyword presence: Retaining only tweets that explicitly mention the keyword “Individual 1” or “Individual 2”.
- Duplicate tweets: Excluding duplicate tweets.
- URL/image tweets: Filtering out tweets consisting solely of URLs or images.
- Language criterion: Eliminating tweets composed in languages other than English, or containing a mix of English and non-English content.
- Minimum length: Disregarding tweets with a character count below five.
- User mentions: Removing tweets primarily composed of user mentions.
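The criteria above can be sketched as a single filtering pass. This is an illustrative reconstruction, not the study's actual pipeline; the helper `keep_tweet`, the regular expressions, and the keyword placeholders are assumptions.

```python
import re

# Placeholders as used throughout the paper; the real queries used
# the individuals' names.
KEYWORDS = ("Individual 1", "Individual 2")

URL_ONLY = re.compile(r"^\s*(https?://\S+\s*)+$")
MENTION = re.compile(r"@\w+")

def keep_tweet(text, seen):
    """Apply the selection criteria; `seen` tracks previously kept tweets."""
    if not any(k in text for k in KEYWORDS):        # keyword presence
        return False
    if text in seen:                                 # duplicate tweets
        return False
    if URL_ONLY.match(text):                         # URL-only tweets
        return False
    if len(text) < 5:                                # minimum length
        return False
    if len(MENTION.sub("", text).strip()) < 5:       # mostly user mentions
        return False
    # The English-language criterion would require a language-identification
    # model and is omitted from this sketch.
    seen.add(text)
    return True
```

A tweet is kept only if it passes every check; running the function over the raw extraction in order yields the deduplicated dataset.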
5.1.4. Dataset Statistics
5.2. Corpus Annotation
5.2.1. Annotation Scheme
5.2.2. Annotation Process
5.2.3. Annotation Guidelines
5.2.4. Inter-Annotator Agreement
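Agreement between the two annotators on a nominal scheme like this one is standardly measured with Cohen's kappa (Cohen 1960; Artstein and Poesio 2008), which corrects raw agreement for agreement expected by chance. A minimal sketch with made-up labels, not the study's actual annotations:

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Cohen's kappa for two annotators labelling the same items."""
    n = len(ann1)
    # observed agreement: proportion of items with identical labels
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    # chance agreement: probability both annotators independently
    # choose the same label, from their marginal label distributions
    expected = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["NEG", "NEG", "POS", "NEU", "NEG", "POS"]
b = ["NEG", "NEG", "POS", "NEG", "NEG", "POS"]
print(round(cohens_kappa(a, b), 3))  # → 0.7
```

Values above roughly 0.6 are conventionally read as substantial agreement, though the interpretation thresholds are a matter of convention rather than theory.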
6. Results and Discussion
- The tweets that exhibit “mislabelling”, which involves using incorrect gendered terms or categories that do not align with the individual’s gender identity, are annotated as MISLABEL.
- The tweets that exhibit “mispronouning”, which entails using incorrect pronouns when addressing or referring to the individual, disregarding their gender identity, are annotated as MISPRONOUN.
- The tweets that use the correct pronouns or gendered language when referring to the individuals, aligning with their stated gender identity, are annotated as CORRECT GENDER.
- The tweets that do not directly address the individual’s gender identity and do not specify any gender-specific treatment are annotated as NO.
6.1. Analysis of the Results
6.2. Analysis of the Automatic Annotation
6.2.1. Automatic Annotation Issues
6.2.2. Causes of Automatic Annotation Issues
- Negations and double negatives: Firstly, automated sentiment detection systems such as flairNLP can struggle to interpret negations accurately, leading to misclassified sentiment. This can be observed in the tweet “@user1 @user2 Trans men have always been men, Individual 1 has never been a woman and is a man”, which the manual annotators labelled positive and the automatic system negative. Here, flairNLP may have focused on the phrase “never been a woman”, reading the negation as a denial and therefore annotating the tweet as negative. Such misinterpretations occur because automated systems often key on negations without fully understanding the surrounding context or the deeper message conveyed by the text. Manual annotators, on the other hand, can recognise that the tweet reinforces the identity of trans men and supports the proper use of pronouns; this contextual understanding allows them to identify the intended sentiment as positive despite the negations.
- Confusion between Subject and Object: Another notable challenge observed in this study is the system’s inability to distinguish between the subject and the addressee of a tweet. In the tweet “Given your insistence on being a horrible person, it’s clear that understanding the basic concept that he’s a man is challenging for you. If that’s the case, then it’s best not to discuss Individual 1 at all”, the system may have interpreted “horrible person” as directed at Individual 1, producing a negative annotation. Although the sentiment detection system can classify the overall sentiment from the semantics of the words, it cannot discern the direction of the comment or the intended target of the criticism. Without the broader context, the system may read the phrase literally as condemning Individual 1, rather than as addressing someone who is misrepresenting or disrespecting him. This underscores the need for advanced linguistic models that can comprehend context and recognise the relationships between entities in discourse. Manual annotators, by contrast, can evaluate the context and correctly interpret that “horrible person” is directed at someone disrespecting Individual 1; this understanding allows them to see that the tweet’s sentiment is in fact positive, as it defends Individual 1’s identity and advocates for respect.
- Difficulty recognising Sarcasm and Irony: Additionally, automated systems frequently struggle to detect irony and sarcasm, often leading to misinterpretations in sentiment analysis. For example, in the tweet “Individual 1 transitioned after enduring years of trauma from sexual abuse in Hollywood during her teenage years, following a psychotic breakdown in which she self-harmed, and after experiencing an inner voice urging her to transition [...]”, the automated system read the statement as a literal explanation of someone’s transition. Human annotators, however, detected the sarcasm inherent in the comment, understanding that it questions or mocks the notion of an “inner voice” leading someone to become trans. As a result, manual annotators classified the tweet as negative because of its sarcastic undertones, while flairNLP labelled it positive.
- Keyword-based analysis: The last and most prominent cause of discrepancy when employing automated sentiment analysis systems to annotate a corpus is their reliance on keyword-based analysis to classify sentiment. This approach examines specific words and phrases to determine whether the sentiment is positive, negative or neutral. While it can be effective for simple cases, it often fails to capture the broader context, emotional subtleties or implicit meanings that human annotators can discern, and discrepancies between human annotators and these systems arise as a result. For example, the tweet “@user3 (Individual 1’s deadname) was a talented, inspiring and beautiful young woman. Individual 1 is now a disturbing, depressed ghost of their former self.” was labelled positive by flairNLP and negative by the manual annotators. The automatic system likely identified words such as “beautiful”, “talented” and “inspiring” as indicators of positive sentiment and, focusing on these, discounted the derogatory use of terms such as “depressed ghost”, which ultimately led to a positive annotation rather than a negative one.
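The keyword-based failure mode described above can be reproduced with a toy lexicon scorer. This is a deliberately naive illustration, not flairNLP's actual model (flairNLP uses a trained neural classifier); the word lists and the function name are assumptions.

```python
# Toy sentiment lexicons (assumed for illustration only).
POSITIVE = {"beautiful", "talented", "inspiring"}
NEGATIVE = {"disturbing", "depressed"}

def keyword_sentiment(text):
    """Classify by counting lexicon hits; ties fall back to neutral."""
    words = {w.strip(".,!?:;\"'()").lower() for w in text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Paraphrase of the example tweet, with the target anonymised as X.
tweet = ("X was a talented, inspiring and beautiful young woman. "
         "X is now a disturbing, depressed ghost of their former self.")
print(keyword_sentiment(tweet))  # → positive: 3 positive hits outweigh 2 negative
```

The scorer returns "positive" for a tweet that human readers judge negative, because the three complimentary terms applied to the deadnamed past self outnumber the derogatory terms applied to the present one; nothing in the lexicon approach registers which referent each term attaches to.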
6.2.3. Wordlist Frequencies
7. Conclusions and Future Research
7.1. Research Questions
- RQ1: Does intentional misgendering as a form of microaggression perpetuate discrimination towards the TGNC community?
The findings of this study confirm that intentional misgendering perpetuates discrimination against the TGNC community. The data indicate a significant correlation between intentional misgendering and negative sentiment, and the prevalence of negative sentiment associated with misgendering underscores its role in perpetuating discrimination within online discourse.
- RQ2: Does intentional misgendering typically co-occur with other forms of aggression or discriminatory language?
The study substantiates that intentional misgendering frequently co-occurs with other forms of aggressive or discriminatory language, such as derogatory terms and negative stereotypes. This co-occurrence suggests a broader pattern of online discrimination and hostility towards TGNC individuals.
- RQ3: Is there a significant relationship between the presence of misgendering in tweets and their sentiment polarity?
The present study’s findings strongly indicate a significant relationship between the presence of misgendering in tweets and negative polarity. Both mispronouning (using incorrect pronouns) and mislabelling (using incorrect gendered terms) consistently skew towards negative sentiment, and this pattern holds for both Individual 1 and Individual 2, indicating a consistent correlation between misgendering and negative sentiment across contexts.
Specifically, for Individual 1, tweets containing mispronouning predominantly exhibit negative sentiment, with 78 of the 79 such tweets annotated as negative; the same applies to tweets with mislabelling, 27 of 28 of which were annotated as negative. Similarly, for Individual 2, tweets with mispronouning and mislabelling lean strongly towards negative sentiment: all 70 tweets with mispronouning and 33 of the 34 tweets with mislabelling were annotated as negative, out of 153 negative messages in total for this individual.
Overall, the study’s data support the conclusion that misgendering in tweets is significantly associated with negative sentiment: of the 279 tweets annotated as negative, 208 contain misgendering. This underscores the importance of further exploration of the underlying reasons behind this correlation and its implications for TGNC individuals online.
- RQ4: Can automatic sentiment detection systems effectively identify tweets containing misgendering and expressing hatred towards transgender individuals, or is there a gap in their ability to detect this type of message?
Automatic sentiment detection systems such as flairNLP face inherent limitations that result in the miscategorisation of tweets with respect to their overall polarity. While these systems can sometimes identify positive or negative sentiment correctly, the broader issue lies in contextual misunderstanding and a lack of nuance in sentiment analysis. This miscategorisation affects the systems’ ability to flag harmful language, including misgendering, because they struggle to interpret the context in which certain words or phrases are used.
One of the main limitations is the reliance on keyword analysis to measure a message’s sentiment. This approach often ignores the context in which positive or negative terms are used: the presence of positive words in a tweet does not necessarily indicate an overall positive sentiment, as those terms may refer to different persons, which complicates accurate detection. In addition, these systems cannot identify the addressee or subject of the sentiments expressed, leading to erroneous annotations; they operate without context concerning the individuals or groups mentioned and fail to recognise instances where seemingly harmless messages may harm others.
Furthermore, as evidenced in this study, automated systems are inadequate at capturing subtleties of language such as sarcasm and irony. These nuances are often crucial in determining the true sentiment and intent of a message, but automated systems have difficulty interpreting them accurately; without the ability to grasp these subtleties, they may misclassify the sentiment of a message, leading to inaccuracies in their analysis.
In summary, automatic sentiment detection systems face significant complications in effectively identifying tweets containing misgendering and expressions of hatred towards transgender individuals. Their reliance on keyword analysis and limited understanding of contextual nuances and linguistic subtleties underscore the need for further development and refinement to enhance their accuracy in detecting and addressing this form of harmful language in online discourse.
7.2. Future Lines of Research
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Notes
1. https://x.com/ (accessed on 1 April 2024).
2. Throughout this study, and to ensure the anonymity of individuals targeted by the tweets, any reference to a user on X will be replaced with the placeholder @user and a number.
3. https://www.sketchengine.eu/ (accessed on 25 April 2024).
4. The English Web Corpus (enTenTen) is an English corpus of texts collected from the Internet. The most recent version, enTenTen21, consists of 52 billion words.
5. SemEval is a series of international natural language processing (NLP) research workshops that aim to advance the state of the art in semantic analysis by assisting in the creation of high-quality annotated datasets for an increasingly difficult set of natural language semantics problems. For further information: https://semeval.github.io/ (accessed on 14 April 2024).
References
- Akbik, Alan, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. Paper presented at the NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), Minneapolis, MN, USA, June 2–7; pp. 54–59. [Google Scholar] [CrossRef]
- American Psychological Association. 2015. Guidelines for psychological practice with transgender and gender nonconforming people. American Psychologist 70: 832–64. [Google Scholar] [CrossRef]
- Argyriou, Konstantinos. 2021. Misgendering as epistemic injustice: A queer STS approach. Las Torres de Lucca: Revista Internacional de Filosofía Política 10: 71–82. [Google Scholar] [CrossRef]
- Artstein, Ron, and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics 34: 555–96. [Google Scholar] [CrossRef]
- Assimakopoulos, Stavros, Rachel Vella Muskat, Lonneke van der Plas, and Albert Gatt. 2020. Annotating for hate speech: The maneco corpus and some input from critical discourse analysis. Paper presented at the Twelfth Language Resources and Evaluation Conference, Marseille, France, May 11–16; Paris: European Language Resources Association, pp. 5088–97. [Google Scholar]
- Biber, Douglas. 1993. Representativeness in corpus design. Literary and Linguistic Computing 8: 243–57. [Google Scholar] [CrossRef]
- Birjali, Marouane, Mohammed Kasri, and Abderrahim Beni-Hssane. 2021. A comprehensive survey on sentiment analysis: Approaches, challenges and trends. Knowledge-Based Systems 226: 107134. [Google Scholar] [CrossRef]
- Burghardt, Manuel. 2015. Introduction to tools and methods for the analysis of twitter data. 10plus1: Living Linguistics 1: 74–91. [Google Scholar]
- Chang, Tiffany K., and Y. Barry Chung. 2015. Transgender microaggressions: Complexity of the heterogeneity of transgender identities. Journal of LGBT Issues in Counseling 9: 217–34. [Google Scholar] [CrossRef]
- Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20: 37–46. [Google Scholar] [CrossRef]
- Edmonds, David, and Marco Pino. 2023. Designedly intentional misgendering in social interaction: A conversation analytic account. Feminism and Psychology 33: 668–91. [Google Scholar] [CrossRef]
- Fassinger, Ruth E., and Jean R. Arseneau. 2007. “I’d rather get wet than be under that umbrella”: Differentiating the experiences and identities of lesbian, gay, bisexual, and transgender people. In Handbook of Counseling and Psychotherapy with Lesbian, Gay, Bisexual, and Transgender Clients, 2nd ed. Edited by Kathleen J. Bieschke, Ruperto M. Perez and Kurt A. DeBord. Washington, DC: American Psychological Association, pp. 19–49. [Google Scholar] [CrossRef]
- Guillén-Nieto, Victoria. 2022. Language as evidence in workplace harassment. Corela HS-36: 1–21. [Google Scholar] [CrossRef]
- Guillén-Nieto, Victoria. 2023. Hate Speech: Linguistic Perspectives. Berlin and Boston: De Gruyter Mouton. [Google Scholar] [CrossRef]
- Havens, Laura, Melissa Terras, Benjamin Bach, and Belinda Alex. 2022. Uncertainty and inclusivity in gender bias annotation: An annotation taxonomy and annotated datasets of British English text. Paper presented at the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), Seattle, WA, USA, July 15; Stroudsburg: Association for Computational Linguistics, pp. 30–57. [Google Scholar] [CrossRef]
- Hochdorn, Alexander, Vicente Paulo Faleiros, Bruno Camargo, and Paul F. Cottone. 2016. Talking gender: How (con)text shapes gender—The discursive positioning of transgender people in prison, work and private settings. International Journal of Transgenderism 17: 212–29. [Google Scholar] [CrossRef]
- Kozareva, Zornitsa, Borja Navarro, Salvador Vázquez, and Andrés Montoyo. 2007. UA-ZBSA: A headline emotion classification through web information. Paper presented at the Fourth International Workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic, June 23–24; Stroudsburg: Association for Computational Linguistics, pp. 334–37. [Google Scholar]
- Leymann, Heinz. 1990. Mobbing and psychological terror at workplace. Violence and Victims 5: 119–26. [Google Scholar] [CrossRef] [PubMed]
- McCarthy, Linda. 2003. What about the “T”? Is multicultural education ready to address transgender issues? Multicultural Perspectives 5: 46–48. [Google Scholar] [CrossRef]
- McLemore, Kevin A. 2016. A minority stress perspective on transgender individuals’ experiences with misgendering. Stigma and Health 2: 1–46. [Google Scholar] [CrossRef]
- McNamarah, Chan Tov. 2021. Misgendering. California Law Review 109: 2227–322. [Google Scholar] [CrossRef]
- Moreno-Ortiz, Antonio, and Miguel García-Gámez. 2022. Corpus annotation and analysis of sarcasm on twitter: #catsmovie vs. #theriseofskywalker. ATLANTIS: Journal of the Spanish Association of Anglo-American Studies 44: 186–207. [Google Scholar] [CrossRef]
- Nadal, Kevin L., Anneliese Skolnik, and Yinglee Wong. 2012. Interpersonal and systemic microaggressions toward transgender people: Implications for counseling. Journal of LGBTQ Issues in Counseling 6: 55–82. [Google Scholar] [CrossRef]
- Nadal, Kevin L., Casey N. Whitman, Lindsey S. Davis, Tania Erazo, and Katherine C. Davidoff. 2016. Microaggressions toward lesbian, gay, bisexual, transgender, queer, and genderqueer people: A review of the literature. The Journal of Sex Research 53: 488–508. [Google Scholar] [CrossRef] [PubMed]
- Nadal, Kevin L., David P. Rivera, and Melissa J. Corpus. 2010. Sexual orientation and transgender microaggressions in everyday life: Experiences of lesbians, gays, bisexuals, and transgender individuals. In Microaggressions and Marginality: Manifestation, Dynamics, and Impact. Edited by Derald Wing Sue. New York: Wiley, pp. 217–40. [Google Scholar]
- Paludi, Michele A. 2012. Managing Diversity in Today’s Workplace: Strategies for Employees and Employers. Santa Barbara: Praeger. [Google Scholar]
- Pierce, Chester M. 1970. Offensive mechanisms. In The Black Seventies. Boston: Porter Sargent, pp. 265–82. [Google Scholar]
- Rosenthal, Sara, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. SemEval-2015 Task 10: Sentiment Analysis in Twitter. Paper presented at the 9th International Workshop on Semantic Evaluation (SemEval 2015), Denver, CO, USA, June 4–5; Stroudsburg: Association for Computational Linguistics, pp. 451–63. [Google Scholar] [CrossRef]
- Solórzano, Daniel, Miguel Ceja, and Tara Yosso. 2000. Critical race theory, racial microaggressions, and campus racial climate: The experiences of African American college students. Journal of Negro Education 69: 60–73. [Google Scholar]
- Suchomel, Vít. 2020. Better Web Corpora for Corpus Linguistics and NLP. Doctoral thesis, Masarykova Univerzita, Brno, Czech Republic. [Google Scholar]
- Sue, Derald Wing. 2010. Microaggressions in Everyday Life: Race, Gender, and Sexual Orientation. Hoboken: Wiley. [Google Scholar]
- Sue, Derald Wing, and Christina M. Capodilupo. 2008. Racial, gender, and sexual orientation microaggressions: Implications for counseling and psychotherapy. In Counseling the Culturally Diverse: Theory and Practice, 5th ed. Hoboken: Wiley, pp. 105–30. [Google Scholar]
- Sue, Derald Wing, Christina M. Capodilupo, and Aisha M. B. Holder. 2008. Racial microaggressions in the life experience of black americans. Professional Psychology: Research and Practice 39: 329–36. [Google Scholar] [CrossRef]
- Thál, Jakub, and Iris Elmerot. 2022. Unseen gender: Misgendering of transgender individuals in Czech. In The Grammar of Hate: Morphosyntactic Features of Hateful, Aggressive, and Dehumanizing Discourse. Edited by Natalia Knoblock. Cambridge: Cambridge University Press, pp. 97–117. [Google Scholar] [CrossRef]
- Wankhade, Mayur, Annavarapu Chandra Sekhara Rao, and Chaitanya Kulkarni. 2022. A survey on sentiment analysis methods, applications, and challenges. Artificial Intelligence Review 55: 5731–80. [Google Scholar] [CrossRef]
- X. 2024. Abusive Behavior. X Help Centre. Available online: https://help.x.com/en/rules-and-policies/abusive-behavior (accessed on 1 April 2024).
No. | Lemma | Frequency | No. | Lemma | Frequency |
---|---|---|---|---|---|
1 | Trans | 26 | 11 | Delusion | 5 |
2 | Lesbian | 10 | 12 | Tittie | 5 |
3 | Tit | 9 | 13 | Cisgender | 3 |
4 | Transitioned | 8 | 14 | Mutilate | 3 |
5 | Transphobe | 7 | 15 | Psychotic | 3 |
6 | Mastectomy | 6 | 16 | Deadnaming | 2 |
7 | Topless | 5 | 17 | Self-hatred | 2 |
8 | Cis | 5 | 18 | Weirdo | 2 |
9 | Slur | 5 | 19 | Transman | 2 |
10 | Trans | 5 | 20 | Objectify | 2 |
No. | Lemma | Frequency | No. | Lemma | Frequency |
---|---|---|---|---|---|
1 | Trans | 18 | 11 | Fiasco | 3 |
2 | Pretend | 10 | 12 | Womanhood | 3 |
3 | Gay | 8 | 13 | Manly | 3 |
4 | Dude | 6 | 14 | Backlash | 3 |
5 | Girlhood | 4 | 15 | Clown | 3 |
6 | Mock | 4 | 16 | Transgender | 3 |
7 | Pronoun | 4 | 17 | Influencer | 3 |
8 | Marginalise | 4 | 18 | Vomit-inducing | 2 |
9 | Leftist | 4 | 19 | Transvestite | 2 |
10 | Mockery | 3 | 20 | Transphobic | 2 |
Polarity | Confidence | Tweets |
---|---|---|
Positive | 1 | I had an unusual dream about Individual 1 and now I think I might have feelings for him, lol. |
Negative | 1 | She’ll never be a man, no matter what she does. Even though Individual 1 has invested a lot in trying to transition, people will not perceive her as part of our group. |
Neutral | 1 | Film 1-directed by director 1, starring actor 1 and executive-produced by Individual 1-centers around a teenage girl navigating a competition. |
Polarity | Confidence | Tweets |
---|---|---|
Positive | 1 | if Individual 2 did single-handedly disrupt the entire product 1 industry, she must be one of the most influential women in the world. |
Negative | 1 | Individual 2: “I do not think God made an error with me.” He seems to be trying to sway people but inadvertently speaks the truth. Indeed, God didn’t make an error with him—he was made a man. |
Neutral | 1 | Individual 2 was the winner of the first Woman of the Year award by brand 1. |
 | Positive | Negative | Neutral | Total |
---|---|---|---|---|
Individual 1 | 57 | 126 | 17 | 200 |
Individual 2 | 26 | 153 | 21 | 200 |
Total | 83 | 279 | 38 | 400 |
Misgendering | Positive | Negative | Neutral | Total |
---|---|---|---|---|
NO | 18 | 20 | 9 | 47 |
CORRECT_GENDER | 40 | 1 | 6 | 47 |
MISLABEL | 0 | 27 | 1 | 28 |
MISPRONOUN | 0 | 78 | 1 | 79 |
Total | 57 | 126 | 17 | 200 |
Misgendering | Positive | Negative | Neutral | Total |
---|---|---|---|---|
NO | 17 | 49 | 18 | 84 |
CORRECT_GENDER | 9 | 1 | 2 | 12 |
MISLABEL | 0 | 33 | 1 | 34 |
MISPRONOUN | 0 | 70 | 0 | 70 |
Total | 26 | 153 | 21 | 200 |
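The association between misgendering and negative polarity can be read directly off the two contingency tables above. The sketch below pools the counts for the two individuals and computes the negative-sentiment rate per annotation category; the pooling and the rate computation are this edit's illustration, not an analysis reported in the paper.

```python
# Pooled counts (Individual 1 + Individual 2) from the two tables above:
# category -> (positive, negative, neutral)
pooled = {
    "NO":             (18 + 17, 20 + 49, 9 + 18),
    "CORRECT_GENDER": (40 + 9,  1 + 1,   6 + 2),
    "MISLABEL":       (0 + 0,   27 + 33, 1 + 1),
    "MISPRONOUN":     (0 + 0,   78 + 70, 1 + 0),
}

for category, (pos, neg, neu) in pooled.items():
    total = pos + neg + neu
    print(f"{category}: {neg}/{total} negative ({neg / total:.1%})")
```

The two misgendering categories come out at well over 95% negative, the CORRECT_GENDER category at under 4%, and the NO category near 50%, which is the pattern the discussion of RQ3 summarises.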
Female-Related Terms | Positive | Negative | Neutral | Total |
---|---|---|---|---|
Lesbian | 2 | 7 | 1 | 10 |
Tit | 1 | 8 | 0 | 9 |
Mastectomy | 0 | 6 | 0 | 6 |
Topless | 1 | 3 | 0 | 4 |
Tittie | 2 | 3 | 0 | 5 |
Total | 6 | 27 | 1 | 34 |
Male-Related Terms | Positive | Negative | Neutral | Total |
---|---|---|---|---|
Pretend | 2 | 8 | 0 | 10 |
Gay | 0 | 7 | 1 | 8 |
Dude | 0 | 6 | 0 | 6 |
Manly | 0 | 3 | 0 | 3 |
Total | 2 | 24 | 1 | 27 |
Sevilla Requena, L. “She’ll Never Be a Man” A Corpus-Based Forensic Linguistic Analysis of Misgendering Discrimination on X. Languages 2024, 9, 291. https://doi.org/10.3390/languages9090291