Chatbots and Empowerment in Gender-Based Violence: Mixed Methods Analysis of Psychological and Legal Assistance
Abstract
1. Introduction
Contextualising Artificial Intelligence, Chatbots and Gender-Based Violence
2. Literature Review
2.1. Chatbots and Their Application to Gender-Based Violence
2.2. Technological Limitations and Social Barriers
2.3. Ethical Dilemmas and Lack of International Regulation
2.4. Proposals for Improvement in System Design and Integration
2.5. Degree of Exploration of the Topic and Positioning of the Study
2.6. Research Hypotheses
3. Materials and Methods
3.1. Strategy and Design
3.2. Participants
3.3. Instruments
3.4. Ethics and Procedure
3.5. Analysis
3.6. Integration
4. Results
4.1. Qualitative Analysis
4.2. Quantitative Analysis
5. Discussion
Future Research and Recommendations
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References






| Category | Subcategory | Qualitative Indicator | Interview Question/Guide |
|---|---|---|---|
| Access and connectivity | Connectivity | Interruptions, latency, crashes, loss of thread | What technical issues (signal, data, Wi-Fi) affected your conversation with the chatbot? |
| Access and connectivity | Session recovery | Resumption after interruption, context retention, latency tolerance | Were you able to resume the conversation after a crash? How did you do it? |
| Interaction flow | Conversational architecture | Clear paths, visibility of buttons/actions, continuity of flow | Was it clear to you how to continue or resume the conversation? What would you change about the flow? |
| Interaction flow | Goal-oriented start | Startup steps, brief instructions, “next” prompts | What elements would have helped you get started more easily, such as a “Next” button or a brief guide? |
| Privacy and security | Device risk | Fear of being discovered, usage windows, control by partner | What concerns you about using the chat on your phone? Did you need to hide or delete anything? |
| Privacy and security | Perceived protection | Hidden mode, quick deletion, PIN lock | What features would make you feel more secure (hidden mode, quick deletion, blocking)? |
| Personalization and empathy | Case memory | Remembering the context, following up on the case | Did you notice that the chatbot remembered your case or picked up where you left off? |
| Personalization and empathy | Tone and language | Warmth, respect, avoiding revictimization | How would you describe the chatbot’s tone (friendly, neutral, cold)? |
| Guidance and effectiveness | Informative usefulness | Clear paths, concrete steps, actionable information | Did you feel supported? |
| Guidance and effectiveness | Perceived effectiveness | Problem resolution, satisfaction with the help provided | How effective was the chatbot’s help in your situation? Why? |
| Legal referral and follow-up | Human bridge | Request for a call or referral, professional validation | At what point did you prefer to speak to a person? What did you need to confirm? |
| Legal referral and follow-up | Case follow-up | Follow-up contact, status of the process, closure | Did you receive a follow-up after the first contact? How did that make you feel? |
| Contextual barriers | Safe time/space | Times and places of safe use | When and where can you use the chatbot without risk? |
| Contextual barriers | Digital literacy | Prior skills and resources | What knowledge or resources did you lack, such as saving or attaching evidence? |
| Category | Coverage, n (%) | Subcodes (n) | Microquotes (≤15 Words Each, Different) | Integrated Reading |
|---|---|---|---|---|
| Access and connectivity | 412 (41.2%) | Latency (168); Drops (140); Unstable public Wi-Fi (104) | “The signal drops; I give up.” (E07); “It loads slowly; I lose track.” (E11); “It cuts out on public Wi-Fi.” (E28) | Fragile connectivity moderates usefulness and explains abandonment. |
| Interaction flow | 295 (29.5%) | Confusing navigation (170); Poorly oriented start (125) | “I get lost in the menus.” (E15); “I don’t know where to go next.” (E06); “Buttons are hard to see.” (E22) | Clear routes and visible signs reduce unintentional dropout. |
| Privacy and security | 372 (37.2%) | Fear of discovery (220); No hidden mode or deletion option (152) | “I’m afraid he’ll read the chat.” (E12); “I want it deleted quickly.” (E26); “I need a hidden mode.” (E05) | Perceived risk on the device conditions continuity and future use. |
| Legal referral and follow-up | 270 (27.0%) | Request a call (180); Follow-up guidance (90) | “I started and asked them to call me.” (E03); “The lawyer guided me.” (E19); “I confirmed with a person.” (E17) | The human bridge makes reporting and follow-up of the case feasible. |
| Personalization and empathy | 338 (33.8%) | Case recall (190); Empathetic tone (148) | “He remembered my case; I trusted him.” (E14); “He used my name.” (E24); “He picked up where we left off.” (E30) | Personalization reinforces trust and adherence. |
| Guidance and effectiveness | 321 (32.1%) | Clear routes; Defined requirements; Verified resources | “Where do I go?” (E08); “What do I do now?” (E21); “I don’t understand the requirements.” (E33) | Clear routes and requirements facilitate timely decisions. |
| Contextual barriers | 248 (24.8%) | Limited time; Non-private space; Difficulty attaching files | “I can only use it at certain times.” (E09); “I don’t know how to attach files.” (E25); “I don’t have a private place.” (E31) | These barriers condition the pace and depth of use and call for targeted support. |
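
The coverage column reports both the number of coded segments per category and their share of all coded segments; every count/percentage pair implies a base of 1,000 segments (e.g., 412/1,000 = 41.2%), and the categories sum to more than 100% because a segment can be coded into more than one category. A minimal sketch reproducing the percentages under that inferred base:

```python
# Minimal check of the coverage column, assuming (as every count/percentage
# pair in the table implies) a base of 1,000 coded segments. Categories are
# not mutually exclusive, so their shares sum to more than 100%.
counts = {
    "Access and connectivity": 412,
    "Interaction flow": 295,
    "Privacy and security": 372,
    "Legal referral and follow-up": 270,
    "Personalization and empathy": 338,
    "Guidance and effectiveness": 321,
    "Contextual barriers": 248,
}
BASE = 1_000  # inferred from the table, not stated explicitly in it

for category, n in counts.items():
    print(f"{category}: {n} ({n / BASE:.1%})")
```
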
| Category | Participants’ Accounts | Frequent Textual Cues (Paraphrases) | Psychological Support | Legal Support |
|---|---|---|---|---|
| Access and connectivity | Difficulty sustaining conversation due to interruptions or slowness; they return when they can. | “It cut out and I came back later”; “it was slow to load”; “I ran out of data”; “I lost my place.” | Brief support in the face of frustration; psychoeducation to resume in calm moments; normalize breaks. | Asynchronous guidance (written steps); option to schedule a call when there is better signal. |
| Interaction flow | Doubts about how to continue or resume; they get lost in the sequence of steps. | “I didn’t know what to do next”; “I got lost in the steps”; “I couldn’t find the button”; “I had to start over.” | Support for decision making in small steps; clarify immediate goals. | Step-by-step guides; simple checklist of requirements and service locations. |
| Privacy and security | Fear of being discovered; use in brief windows and in secret. | “I use it when he’s not around”; “I delete it quickly”; “I’m afraid he’ll see the chat”; “I can’t use it at home.” | Personal safety plan (safe times/places); validation of fear; brief regulation techniques. | Information on confidential channels; guidelines for preserving evidence without risk; guidance on protective measures. |
| Personalization and empathy | More confidence when they feel continuity in the case and respectful treatment. | “They remembered my case”; “I felt listened to”; “they used my name”; “they didn’t judge me.” | Validating listening and psychological first aid; brief emotional follow-up. | Consistent and personalized explanations of procedures; maintain a contact person for questions. |
| Guidance and effectiveness | They need clear and actionable instructions to decide and act. | “Where do I go?”; “What do I do now?”; “I don’t understand the requirements”; “I need an exact address.” | Psychoeducation on routes to help and realistic expectations; reduction in uncertainty. | Informed referral (competent institutions); times and necessary documents; verification of availability. |
| Legal referral and follow-up | They prefer to confirm with a person and ask for a call before starting the complaint. | “I want them to call me”; “I need to confirm with someone”; “I don’t know how to start the complaint”; “I don’t know if it qualifies.” | Support during and after reporting; emotional preparation; monitoring of well-being. | Accompaniment in reporting; guidance on protective measures; advice/legal representation and follow-up on the case. |
| Contextual barriers | Limitations due to time/space and digital skills; fragmented progress. | “I can only use it at certain times”; “I don’t know how to attach files”; “I can’t talk at work”; “I don’t have a private place.” | Support during and after reporting; emotional preparation; monitoring of well-being. | Alternatives for delivering evidence (photo, subsequent visit); deferred reporting in a safe time/space. |
| Dimension | Eigenvalue | Variance Explained (%) | Cumulative Variance (%) |
|---|---|---|---|
| 1 | 0.3531 | 17.65 | 17.65 |
| 2 | 0.3338 | 16.69 | 34.35 |
| 3 | 0.2975 | 14.88 | 49.22 |
| 4 | 0.2548 | 12.74 | 61.96 |
| 5 | 0.2195 | 10.98 | 72.94 |
| 6 | 0.2059 | 10.29 | 83.23 |
| 7 | 0.2002 | 10.01 | 93.24 |
| 8 | 0.1352 | 6.76 | 100.00 |
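
The percentage and cumulative columns follow directly from the eigenvalues: each dimension’s share of the total inertia (the eight eigenvalues sum to 2.0 here), accumulated across dimensions. A minimal sketch reproducing both columns from the values reported above:

```python
import numpy as np

# Eigenvalues of the eight dimensions reported in the table above.
eigenvalues = np.array([
    0.3530627, 0.3338407, 0.2975089, 0.2548165,
    0.2195016, 0.2058736, 0.2002047, 0.1351913,
])

# Share of total inertia per dimension and the running total,
# which reaches 100% at dimension 8.
variance_pct = 100 * eigenvalues / eigenvalues.sum()
cumulative_pct = np.cumsum(variance_pct)

for dim, (var, cum) in enumerate(zip(variance_pct, cumulative_pct), start=1):
    print(f"Dimension {dim}: {var:.2f}%  (cumulative {cum:.2f}%)")
```
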
| Hypothesis | Key Variable(s) | Evidence Type | Confirmed | Summary of Findings |
|---|---|---|---|---|
| H1: The greater the digital accessibility and trust in AI, the more likely women are to use chatbots. | Access to internet, trust in AI | Quantitative (MCA) + Qualitative (interviews) | Yes | Users with higher access and trust showed higher usage levels. |
| H2: Women in rural areas have less access and trust than women in urban areas. | Place of residence, digital trust | Quantitative (Cos2) | Yes | Rural users were concentrated in low-usage clusters. |
| H3: Chatbots with more personalization are perceived as more effective. | Perceived personalization, effectiveness | Quantitative (cross-tab) | Yes | 68% of users perceiving personalization rated chatbots as highly effective. |
| H4: Frequent chatbot users show higher levels of digital empowerment. | Frequency of use, autonomy, confidence | Quantitative (clustering) | Yes | Cluster 3 users showed high autonomy and frequent chatbot use. |
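
H3 rests on a cross-tabulation of perceived personalization against perceived effectiveness. The sketch below shows the shape of that analysis with pandas on hypothetical responses; the variable names and codings are illustrative, not the study’s actual data, and the 68% figure comes from the authors’ survey rather than from this example.

```python
import pandas as pd

# Hypothetical responses; variable names and codings are illustrative only.
responses = pd.DataFrame({
    "perceived_personalization": ["high", "high", "low", "high", "low", "high"],
    "perceived_effectiveness":   ["high", "high", "low", "low",  "low", "high"],
})

# Row-normalised cross-tab: each row shows what share of users at that
# personalization level rated the chatbot's effectiveness high vs. low.
crosstab = pd.crosstab(
    responses["perceived_personalization"],
    responses["perceived_effectiveness"],
    normalize="index",
)
print(crosstab.round(2))
```
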
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rodriguez Saavedra, M.O.; Donayre Prado, E.A.; Donayre Sarolli, A.E.; Lujan Tito, P.G.; Escobedo Pajuelo, J.A.; Grundy Lopez, R.E.; Aroquipa Apaza, O.; Alegre Chalco, M.E.; Quispe Nina, W.; Pozo González, R.A.; et al. Chatbots and Empowerment in Gender-Based Violence: Mixed Methods Analysis of Psychological and Legal Assistance. Soc. Sci. 2025, 14, 623. https://doi.org/10.3390/socsci14100623

