Human-Centered AI for Migrant Integration Through LLM and RAG Optimization
Abstract
1. Introduction
- A proof of concept of an HCAI system based on LLM and RAG models is presented, with the aim of enhancing the integration of the most vulnerable individuals into society.
- We delve into the application of technical, bias, and environmental sustainability metrics to identify and mitigate biases in algorithmic models, with the aim of ensuring accurate, fair, and non-discriminatory responses and recommendations.
- A set of best practices and recommendations is defined for the development of HCAI systems supported by LLMs and RAG, so that these technologies do not perpetuate existing biases and actively work to reduce disparities.
- Multi-criteria decision-making (MCDM) methods are integrated into the proof of concept, enabling the simultaneous evaluation of multiple criteria spanning technical, human, and social aspects; a minimal illustrative sketch of such a ranking follows this list.
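As an illustration of how MCDM can rank candidate LLM/RAG configurations across heterogeneous criteria, the following minimal Python sketch applies TOPSIS to a toy decision matrix. The configuration names, criterion choices, weights, and values are hypothetical placeholders, not the study's data or implementation.

```python
# Minimal TOPSIS sketch: rank hyperparameter configurations (alternatives)
# against several evaluation criteria. The configurations, weights, and the
# toy decision matrix below are illustrative placeholders, not study values.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (m, n) array, one row per alternative, one column per criterion
    weights : (n,) array of criterion weights
    benefit : (n,) boolean array, True if higher is better for that criterion
    Returns the closeness coefficient per alternative (higher = better).
    """
    m = np.asarray(matrix, dtype=float)
    # Vector (L2) normalization per criterion, then apply the weights
    norm = m / np.linalg.norm(m, axis=0)
    v = norm * np.asarray(weights, dtype=float)
    # Ideal / anti-ideal points depend on whether a criterion is benefit or cost
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to the ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to the anti-ideal point
    return d_neg / (d_pos + d_neg)

# Toy example: three hypothetical (K, temperature) configurations scored on
# semantic similarity (benefit), stereotype score (benefit),
# perplexity (cost), and CO2e (cost).
configs = ["K=1, T=0.1", "K=5, T=0.5", "K=10, T=0.7"]
scores = np.array([
    [0.84, 0.90, 1.39, 1580.0],
    [0.96, 0.87, 1.41, 5939.0],
    [0.99, 0.98, 1.45, 4963.0],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, True, False, False])

closeness = topsis(scores, weights, benefit)
for rank, idx in enumerate(np.argsort(-closeness), start=1):
    print(f"{rank}. {configs[idx]}  closeness={closeness[idx]:.3f}")
```

Ranking by descending closeness captures the idea behind this kind of multi-criteria optimization: configurations that are simultaneously strong on benefit criteria and cheap on cost criteria (e.g., emissions) rise to the top.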
2. State of the Art
3. Holistic Optimization of LLM- and RAG-Based HCAI Systems
3.1. Hyperparameter Optimization Problem
3.2. General MCDM-Based Hyperparameter Optimization
3.3. Environmental Impact of Computational Processes
3.4. Bilingual Evaluation Understudy
3.5. Recall-Oriented Understudy for Gisting Evaluation
3.6. Perplexity
3.7. Semantic Similarity
3.8. Social Metrics
4. RIM, TOPSIS, and VIKOR Methods in Multicriteria Decision Making
5. Materials and Methods
5.1. Methodology
5.2. Dataset
5.3. Proof of Concept
5.4. Experiment
5.5. Integration Processes
6. Results and Discussion
6.1. Results
6.2. Discussion
6.3. Ethical Implications
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
AI | Artificial intelligence
BLEU | Bilingual evaluation understudy
CO2e | Carbon dioxide equivalent
CPU | Central processing unit
EU | European Union
GPU | Graphics processing unit
HCAI | Human-centered artificial intelligence
LLM | Large language model
MCDM | Multi-criteria decision making
NLP | Natural language processing
RAG | Retrieval-augmented generation
RIM | Reference ideal method
ROUGE | Recall-oriented understudy for gisting evaluation
TOPSIS | Technique for Order Preference by Similarity to Ideal Solution
TPU | Tensor processing unit
VIKOR | Visekriterijumska Optimizacija I Kompromisno Resenje
References
K | Temp | Stereo | Anti-Stereo | Neutral | Non_Hate | Hate | Perplexity | CO2e
---|---|---|---|---|---|---|---|---
1 | 0.1 | 0.897 | 0.769 | 0.836 | 0.952 | 0.507 | 1.392 | 1579.591 |
1 | 0.3 | 0.913 | 0.966 | 0.775 | 0.949 | 0.513 | 1.342 | 5913.446 |
1 | 0.5 | 0.937 | 0.930 | 0.848 | 0.935 | 0.000 | 1.375 | 5536.071 |
1 | 0.7 | 0.942 | 0.830 | 0.867 | 0.949 | 0.000 | 1.517 | 5197.749 |
1 | 0.9 | 0.939 | 0.919 | 0.693 | 0.951 | 0.000 | 1.594 | 5246.433 |
2 | 0.1 | 0.914 | 0.930 | 0.788 | 0.945 | 0.615 | 1.299 | 5944.226 |
2 | 0.3 | 0.914 | 0.949 | 0.598 | 0.953 | 0.521 | 1.326 | 5677.812 |
2 | 0.5 | 0.942 | 0.963 | 0.632 | 0.935 | 0.557 | 1.353 | 5725.188 |
2 | 0.7 | 0.920 | 0.873 | 0.911 | 0.944 | 0.513 | 1.471 | 5293.202 |
2 | 0.9 | 0.922 | 0.913 | 0.656 | 0.942 | 0.000 | 1.598 | 5371.470 |
5 | 0.1 | 0.869 | 0.923 | 0.470 | 0.923 | 0.000 | 1.355 | 5900.542 |
5 | 0.3 | 0.858 | 0.881 | 0.766 | 0.919 | 0.523 | 1.451 | 5634.740 |
5 | 0.5 | 0.873 | 0.817 | 0.965 | 0.936 | 0.627 | 1.411 | 5939.417 |
5 | 0.7 | 0.930 | 0.917 | 0.928 | 0.938 | 0.000 | 1.474 | 5505.673 |
5 | 0.9 | 0.917 | 0.859 | 0.926 | 0.938 | 0.000 | 1.629 | 5361.281 |
8 | 0.1 | 0.928 | 0.800 | 0.805 | 0.921 | 0.000 | 1.444 | 5666.004 |
8 | 0.3 | 0.922 | 0.934 | 0.771 | 0.908 | 0.519 | 1.415 | 5449.258 |
8 | 0.5 | 0.900 | 0.880 | 0.901 | 0.934 | 0.000 | 1.377 | 5605.898 |
8 | 0.7 | 0.930 | 0.893 | 0.664 | 0.923 | 0.000 | 1.458 | 4965.446 |
8 | 0.9 | 0.932 | 0.928 | 0.757 | 0.932 | 0.590 | 1.562 | 5311.135 |
10 | 0.1 | 0.947 | 0.894 | 0.957 | 0.927 | 0.000 | 1.332 | 5499.258 |
10 | 0.3 | 0.952 | 0.912 | 0.945 | 0.891 | 0.510 | 1.322 | 4892.902 |
10 | 0.5 | 0.957 | 0.960 | 0.864 | 0.903 | 0.000 | 1.395 | 4590.424 |
10 | 0.7 | 0.980 | 0.842 | 0.996 | 0.916 | 0.506 | 1.452 | 4963.383 |
10 | 0.9 | 0.980 | 0.936 | 0.781 | 0.901 | 0.000 | 1.562 | 4880.585 |
K | Temp | Stereo | Anti-Stereo | Neutral | Non_Hate | Hate
---|---|---|---|---|---|---
1 | 0.1 | ±0.130 | ±0.188 | ±0.211 | ±0.058 | ±0.247 |
1 | 0.3 | ±0.128 | ±0.161 | ±0.140 | ±0.090 | ±0.230 |
1 | 0.5 | ±0.091 | ±0.141 | ±0.161 | ±0.099 | ±0.000 |
1 | 0.7 | ±0.150 | ±0.186 | ±0.224 | ±0.100 | ±0.000 |
1 | 0.9 | ±0.106 | ±0.150 | ±0.237 | ±0.100 | ±0.000 |
2 | 0.1 | ±0.135 | ±0.186 | ±0.165 | ±0.090 | ±0.193 |
2 | 0.3 | ±0.119 | ±0.176 | ±0.190 | ±0.086 | ±0.226 |
2 | 0.5 | ±0.148 | ±0.156 | ±0.188 | ±0.100 | ±0.222 |
2 | 0.7 | ±0.109 | ±0.182 | ±0.044 | ±0.108 | ±0.243 |
2 | 0.9 | ±0.138 | ±0.213 | ±0.174 | ±0.111 | ±0.000 |
5 | 0.1 | ±0.142 | ±0.192 | ±0.232 | ±0.095 | ±0.000 |
5 | 0.3 | ±0.146 | ±0.213 | ±0.150 | ±0.092 | ±0.238 |
5 | 0.5 | ±0.162 | ±0.201 | ±0.032 | ±0.092 | ±0.187 |
5 | 0.7 | ±0.150 | ±0.161 | ±0.036 | ±0.109 | ±0.000 |
5 | 0.9 | ±0.133 | ±0.160 | ±0.059 | ±0.118 | ±0.000 |
8 | 0.1 | ±0.135 | ±0.194 | ±0.232 | ±0.102 | ±0.000 |
8 | 0.3 | ±0.155 | ±0.176 | ±0.195 | ±0.106 | ±0.240 |
8 | 0.5 | ±0.131 | ±0.148 | ±0.164 | ±0.107 | ±0.000 |
8 | 0.7 | ±0.165 | ±0.190 | ±0.219 | ±0.118 | ±0.000 |
8 | 0.9 | ±0.127 | ±0.102 | ±0.243 | ±0.102 | ±0.205 |
10 | 0.1 | ±0.138 | ±0.164 | ±0.124 | ±0.126 | ±0.000 |
10 | 0.3 | ±0.123 | ±0.145 | ±0.040 | ±0.135 | ±0.231 |
10 | 0.5 | ±0.136 | ±0.195 | ±0.090 | ±0.131 | ±0.000 |
10 | 0.7 | ±0.106 | ±0.192 | ±0.002 | ±0.130 | ±0.247 |
10 | 0.9 | ±0.158 | ±0.131 | ±0.150 | ±0.134 | ±0.000 |
Rank | VIKOR Alt. | VIKOR Chunk | VIKOR K | VIKOR Temp. | RIM Alt. | RIM Chunk | RIM K | RIM Temp. | TOPSIS Alt. | TOPSIS Chunk | TOPSIS K | TOPSIS Temp.
---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 1 | 50 | 1 | 0.1 | 76 | 500 | 1 | 0.1 | 40 | 100 | 5 | 0.9 |
2 | 18 | 50 | 8 | 0.5 | 60 | 300 | 2 | 0.9 | 110 | 1000 | 2 | 0.9 |
3 | 40 | 100 | 5 | 0.9 | 24 | 50 | 10 | 0.7 | 61 | 300 | 5 | 0.1 |
4 | 16 | 50 | 8 | 0.1 | 84 | 500 | 2 | 0.7 | 88 | 500 | 5 | 0.5 |
5 | 14 | 50 | 5 | 0.7 | 65 | 300 | 5 | 0.9 | 89 | 500 | 5 | 0.7 |
6 | 11 | 50 | 5 | 0.1 | 56 | 300 | 2 | 0.1 | 14 | 50 | 5 | 0.7 |
7 | 81 | 500 | 2 | 0.1 | 83 | 500 | 2 | 0.5 | 81 | 500 | 2 | 0.1 |
8 | 61 | 300 | 5 | 0.1 | 20 | 50 | 8 | 0.9 | 107 | 1000 | 2 | 0.3 |
9 | 15 | 50 | 5 | 0.9 | 36 | 100 | 5 | 0.1 | 109 | 1000 | 2 | 0.7 |
10 | 28 | 100 | 1 | 0.5 | 34 | 100 | 2 | 0.7 | 16 | 50 | 8 | 0.1 |
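For context, a compromise ranking like the VIKOR column above can be derived from a criteria matrix with a short computation. The sketch below is a minimal VIKOR illustration; the alternatives, criteria, weights, and compromise parameter v are hypothetical placeholders, not the matrix or settings used in the experiments.

```python
# Minimal VIKOR sketch: compute S (group utility), R (individual regret) and
# Q (compromise index) for a few alternatives. All values, weights, and the
# parameter v below are illustrative placeholders, not the study's settings.
import numpy as np

def vikor(matrix, weights, benefit, v=0.5):
    """Return S, R, Q per alternative; a lower Q indicates a better compromise."""
    x = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Best and worst value per criterion (direction depends on benefit/cost)
    f_best = np.where(benefit, x.max(axis=0), x.min(axis=0))
    f_worst = np.where(benefit, x.min(axis=0), x.max(axis=0))
    # Weighted, normalized distance to the best value on each criterion
    d = w * (f_best - x) / (f_best - f_worst)
    S = d.sum(axis=1)          # group utility
    R = d.max(axis=1)          # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q

# Toy alternatives: rows = candidate (chunk, K, temperature) configurations,
# columns = semantic similarity (benefit), non-hate rate (benefit), CO2e (cost).
alternatives = ["config A", "config B", "config C", "config D"]
matrix = np.array([
    [0.90, 0.95, 1600.0],
    [0.88, 0.93, 5400.0],
    [0.93, 0.94, 5360.0],
    [0.91, 0.96, 5900.0],
])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([True, True, False])

S, R, Q = vikor(matrix, weights, benefit)
for rank, idx in enumerate(np.argsort(Q), start=1):
    print(f"{rank}. {alternatives[idx]}  Q={Q[idx]:.3f}  S={S[idx]:.3f}  R={R[idx]:.3f}")
```

Sorting by ascending Q yields the compromise order; varying v trades off group utility (S) against the worst single-criterion regret (R), which is why VIKOR, RIM, and TOPSIS can produce different top-ranked alternatives for the same data.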
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Castellanos-Nieves, D.; García-Forte, L. Human-Centered AI for Migrant Integration Through LLM and RAG Optimization. Appl. Sci. 2025, 15, 325. https://doi.org/10.3390/app15010325