Behind the Algorithm: International Insights into Data-Driven AI Model Development
Abstract
1. Introduction
2. Research Question and Objectives
- RO1. To identify the key data-related challenges encountered during AI model training, whether developing new models or adapting existing ones.
- RO2. To examine how these challenges influence AI development and deployment processes.
- RO3. To explore the strategies employed by professionals to mitigate data-related risks and constraints.
3. Background and Related Work
3.1. Evolving Capabilities of AI in a Data-Driven World
3.2. The Expanding Footprint of AI Adoption in Organizational Contexts
3.3. Critical Risks at the Intersection of AI and Data
3.4. Theoretical Lenses on the Role of Data in AI Systems
3.5. Synthesis and Positioning of the Present Study
4. Materials and Methods
4.1. Research Approach and Design
4.2. Sample Characteristics and Composition
4.3. Data Collection and Analysis
- What are the current challenges and risks involved in managing the data lifecycle—from acquisition to deployment and monitoring of AI models?
- Which of these challenges do you consider most urgent?
- How does data quality influence the accuracy and performance of AI models?
- What are the organizational or project-level consequences of unresolved data quality issues?
- How is your organization addressing evolving regulatory requirements around data privacy and AI compliance?
5. Results
“Right now, we’re not rushing to adopt this innovative technology. In fact, we’re even quite hesitant to implement older AI-based systems that are already in use by other banks for underwriting and loans. The only AI-based application we’ve agreed to implement is a personal virtual assistant—a kind of chatbot—that doesn’t require broad, unrestricted access to our customer data” (P9).
“Companies that rush to adopt AI applications don’t understand the implications of the risks. We’re going to start seeing more and more lawsuits against companies for non-compliance with regulations or for discrimination. Only then will executives start to wake up” (P50).
5.1. Data Preparation Challenges
“As data increases in volume and variety, maintaining an efficient and cost-effective infrastructure that can handle both large-scale storage and processing becomes a major challenge—particularly when real-time access is required” (P31).
“Many times, after completing the data cleaning process, we discover that the data is not relevant at all, and we have to start over—sometimes even purchase entirely different datasets. This costs our organization a great deal of money! Millions of dollars are sometimes wasted due to inaccurate data” (P43).
- Integration of heterogeneous and variable data sources adds complexity and inconsistency.
- Large-scale data processing and infrastructure requirements create operational burdens.
- Data cleaning, relevance assessment, and labeling are time- and labor-intensive.
- Skilled, data-literate teams and internal validation practices are crucial for managing these challenges.
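
To make this kind of internal validation concrete, the sketch below shows one way a team might screen a newly acquired dataset before committing to a full cleaning effort, echoing P43's account of discovering only after cleaning that the data was not relevant. It is an illustrative example in Python/pandas, not a description of any participant's pipeline; the required columns, minimum row count, and missingness threshold are assumptions.

```python
import pandas as pd

# Hypothetical expectations for a newly acquired dataset; in practice these
# would come from the project's data requirements, not from this example.
REQUIRED_COLUMNS = {"customer_id", "signup_date", "churned"}
MIN_ROWS = 10_000          # assumed minimum volume for the use case
MAX_MISSING_RATIO = 0.20   # assumed tolerance for missing values per column


def screen_dataset(path: str) -> list[str]:
    """Return human-readable issues found before any full cleaning effort starts."""
    issues = []
    df = pd.read_csv(path)

    # 1. Schema check: are the columns the downstream model actually needs present?
    missing_cols = REQUIRED_COLUMNS - set(df.columns)
    if missing_cols:
        issues.append(f"missing required columns: {sorted(missing_cols)}")

    # 2. Volume check: is there enough data to be worth cleaning at all?
    if len(df) < MIN_ROWS:
        issues.append(f"only {len(df)} rows, below the assumed minimum of {MIN_ROWS}")

    # 3. Completeness check: flag required columns that are mostly empty.
    for col in REQUIRED_COLUMNS & set(df.columns):
        ratio = df[col].isna().mean()
        if ratio > MAX_MISSING_RATIO:
            issues.append(f"column '{col}' is {ratio:.0%} missing")

    return issues
```

Running such a screen at acquisition time moves the go/no-go decision ahead of the expensive cleaning and labeling work that participants describe.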
5.2. Data Quality Risks and Mitigation Strategies
“We start by ensuring that the data is well-defined, diverse, and representative. Then we validate the expected inputs for each model and scan them thoroughly before training begins. … We’ve built Power BI dashboards that alert us in real time to poor data quality. … We implement automated data cleaning—or at the very least, automated alerts that flag errors” (P73).
“The ability to track data from acquisition through every stage of its lifecycle—preprocessing, modeling, and deployment—is critical. … I use automated tools to detect missing values, inconsistencies, and anomalies, followed by enrichment processes to ensure completeness and accuracy” (P65).
- Inaccurate, incomplete, or inconsistent data jeopardizes AI reliability.
- Poor-quality data increases operational costs and delays in deployment.
- Governance frameworks and validation tools help monitor and maintain quality.
- Existing solutions are still insufficient to fully address complex, persistent challenges.
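
The automated checks and real-time alerts described by P73 and P65 can be approximated with a small rule-based layer such as the sketch below. It is a minimal illustration in Python/pandas; the specific rules, thresholds, and the dashboard or messaging channel that would consume the alerts are assumptions rather than the participants' actual tooling.

```python
import pandas as pd


def quality_report(df: pd.DataFrame, numeric_bounds: dict[str, tuple[float, float]]) -> dict:
    """Compute simple quality metrics: missingness, duplicates, out-of-range values."""
    report = {
        "n_rows": len(df),
        "missing_ratio": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "out_of_range": {},
    }
    for col, (lo, hi) in numeric_bounds.items():
        if col in df.columns:
            report["out_of_range"][col] = int(((df[col] < lo) | (df[col] > hi)).sum())
    return report


def alert_on_issues(report: dict, max_missing: float = 0.05) -> list[str]:
    """Turn the report into alert messages; wiring these into a dashboard or
    messaging channel is omitted, since that part is organization-specific."""
    alerts = []
    for col, ratio in report["missing_ratio"].items():
        if ratio > max_missing:
            alerts.append(f"{col}: {ratio:.1%} missing (threshold {max_missing:.0%})")
    if report["duplicate_rows"]:
        alerts.append(f"{report['duplicate_rows']} duplicate rows detected")
    for col, count in report["out_of_range"].items():
        if count:
            alerts.append(f"{col}: {count} values outside the expected range")
    return alerts
```

For example, `alert_on_issues(quality_report(df, {"age": (0, 120)}))` would surface excess missingness, duplicates, and implausible ages before training begins.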
5.3. Privacy, Security, and Data Leakage Concerns
“A company purchasing LLMs must be absolutely certain that its data is securely handled by the supplier. But managers often struggle to control what employees input into AI tools, so it’s crucial for every organization to have a clear policy on this matter” (P15).
“It’s unclear what these models were trained on or what cybersecurity risks they might contain—like backdoors, exploits, or critical vulnerabilities. … Most only conduct basic security checks. … We haven’t seen a major AI-driven breach yet, but I’m certain it’s only a matter of time” (P20).
- Employees may inadvertently expose sensitive information through AI tools.
- Vendors risk misusing organizational data for training and resale.
- AI models create new cybersecurity threats, including adversarial attacks.
- Open-source models carry hidden vulnerabilities that are difficult to assess.
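
One possible technical control behind the "clear policy" P15 calls for is to screen text for obviously sensitive patterns before it reaches an external AI tool. The sketch below is illustrative only: the regular expressions cover a few common categories and are assumptions; a production deployment would rely on a maintained data-loss-prevention or PII-detection service rather than hand-written patterns.

```python
import re

# Illustrative patterns only; real deployments would use a governed PII/DLP
# detection service instead of a handful of regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}


def redact_prompt(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which categories were found."""
    found = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found


if __name__ == "__main__":
    prompt = "Summarize the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
    clean, categories = redact_prompt(prompt)
    print(categories)  # ['email', 'credit_card']
    print(clean)
```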
5.4. Ethical and Technical Challenges of Bias and Opacity
“Even well-trained models can produce biased results, which may have unintended social or ethical consequences that are often difficult to detect in real time. There’s also the issue of model drift—over time, AI models may become less effective due to changes in underlying data or external factors, leading to inaccurate predictions” (P65).
“We expect real-time alerts when a leak occurs. Our main pain point is when the algorithmic model simply gets it wrong—sometimes it flags a problem where there isn’t one, and other times it misses actual issues. … For instance, during the Super Bowl in the U.S., people’s water usage patterns change dramatically. The AI-based sensors misinterpret this as a leak, introducing bias into the model” (P55).
“Our organization conducts rigorous quality assurance processes based on multiple logic layers throughout the training phase, because we don’t fully trust the model outputs. We flag errors as they arise and perform unique model validation for each dataset—we don’t just feed data into the model blindly. … Any company that deals with large volumes of data and wants to integrate AI applications must have someone on staff with strong formal training in data science” (P17).
“In many companies, the people working with AI systems are data scientists who lack a strong foundation in advanced statistical methods. This exposes them to significant risks without even realizing it. The key is to hire expert statisticians who can handle the data before it enters the models. Only then can we better understand the algorithms, correct for bias, and remain alert to emerging issues” (P6).
- Algorithmic bias can arise from data inconsistencies and contextual misinterpretations.
- Human bias and limited statistical expertise compromise model reliability.
- Model drift and opacity reduce trust and long-term validity.
- Organizations respond with layered validation protocols and expert oversight.
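
Model drift and context-dependent false positives of the kind P65 and P55 describe are commonly caught by comparing recent feature distributions against a training-time reference. The sketch below uses a two-sample Kolmogorov-Smirnov test as one common choice; the feature, window sizes, and significance level are assumptions for illustration, not the monitoring setup of any participant's organization.

```python
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> dict:
    """Compare a recent window of a feature against its training-time reference
    distribution using a two-sample Kolmogorov-Smirnov test."""
    stat, p_value = ks_2samp(reference, recent)
    return {"ks_statistic": float(stat), "p_value": float(p_value), "drift": p_value < alpha}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(loc=100.0, scale=10.0, size=5_000)  # e.g., typical hourly usage
    event_day = rng.normal(loc=130.0, scale=15.0, size=500)    # a hypothetical event-day shift
    print(detect_drift(reference, event_day))
    # A detected shift is a prompt to review context (such as a known event)
    # before treating the model's alerts from that period as genuine anomalies.
```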
5.5. Organizational Responses to AI Regulation
“We maintain continuous monitoring. A dedicated team tracks regulatory changes and updates our processes accordingly. Our data governance framework was designed to be flexible from the outset, allowing us to quickly implement necessary changes in response to new regulations. Beyond that, we ensure our teams receive ongoing training on the latest regulatory requirements and emphasize the importance of compliance throughout the entire project lifecycle. I believe that built-in tools for automated compliance will significantly enhance productivity and reduce the risk of non-compliance” (P66).
- Most organizations are in early stages of regulatory readiness.
- Internal policies and monitoring teams are used to track evolving requirements.
- Current efforts primarily focus on privacy compliance (e.g., GDPR).
- Enforcement mechanisms are unclear, often shifting responsibility to vendors.
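
The "built-in tools for automated compliance" that P66 anticipates could start from something as simple as checking a training dataset against governed field lists before it enters a pipeline. The sketch below is a minimal illustration; the approved and personal-data field lists are assumptions, and in practice they would live in a data catalog maintained by the governance team rather than in code.

```python
import pandas as pd

# Assumed, organization-specific policy lists; a real compliance program
# (e.g., for GDPR) would maintain these in a governed data catalog.
APPROVED_FIELDS = {"account_age_days", "avg_monthly_usage", "region_code"}
PERSONAL_DATA_FIELDS = {"full_name", "email", "national_id", "birth_date"}


def compliance_check(df: pd.DataFrame) -> list[str]:
    """Flag columns that are not approved for training or that are classified as personal data."""
    findings = []
    for col in df.columns:
        if col in PERSONAL_DATA_FIELDS:
            findings.append(f"'{col}' is classified as personal data and must not enter training")
        elif col not in APPROVED_FIELDS:
            findings.append(f"'{col}' is not on the approved field list; review before use")
    return findings
```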
6. Data-Centric Framework
6.1. Data Challenges in the Eyes of Strategic Experts (RO1)
6.2. The Strategic Impact of Data on AI Development (RO2)
6.3. Strategies and Limitations in Addressing Data Risks (RO3)
7. Conclusions
7.1. Theoretical Contributions
7.2. Practical Implications
7.3. Limitations and Future Research
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| AI | Artificial intelligence |
| DCAI | Data-centric AI |
| ML | Machine learning |
| GenAI | Generative artificial intelligence |
| LLMs | Large language models |
| CAIO | Chief AI Officer |
References
- Gill, K.S. The end of AI innocence: Genie is out of the bottle. AI Soc. 2025, 40, 257–261. [Google Scholar] [CrossRef]
- Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 2021, 54, 1–35. [Google Scholar] [CrossRef]
- Nakash, M.; Bolisani, E. Knowledge management meets artificial intelligence: A systematic review and future research agenda. In European Conference on Knowledge Management; Academic Conferences International Limited: Reading, UK, 2024; pp. 544–552. [Google Scholar] [CrossRef]
- Nakash, M.; Bolisani, E. The transformative impact of AI on knowledge management processes. Bus. Process Manag. J. 2025, 31, 124–147. [Google Scholar] [CrossRef]
- Sambasivan, N.; Kapania, S.; Highfill, H.; Akrong, D.; Paritosh, P.; Aroyo, L.M. “Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–15. [Google Scholar] [CrossRef]
- Goyal, M.; Mahmoud, Q.H. A systematic review of synthetic data generation techniques using generative AI. Electronics 2024, 13, 3509. [Google Scholar] [CrossRef]
- Patel, K. Ethical Reflections on Data-Centric AI: Balancing Benefits and Risks. SSRN Preprint 4993089, 2024. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4993089 (accessed on 4 February 2025).
- Zha, D.; Bhat, Z.P.; Lai, K.H.; Yang, F.; Hu, X. Data-centric AI: Perspectives and challenges. In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), Minneapolis, MN, USA, 27–29 April 2023; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2023; pp. 945–948. [Google Scholar] [CrossRef]
- Kumar, S.; Datta, S.; Singh, V.; Singh, S.K.; Sharma, R. Opportunities and challenges in data-centric AI. IEEE Access 2024, 12, 33173–33189. [Google Scholar] [CrossRef]
- Stonebraker, M.; Rezig, E.K. Machine learning and big data: What is important? IEEE Data Eng. Bull. 2019, 42, 3–7. [Google Scholar]
- Mazumder, M.; Banbury, C.; Yao, X.; Karlaš, B.; Gaviria Rojas, W.; Diamos, S.; Janapa Reddi, V. DataPerf: Benchmarks for data-centric AI development. Adv. Neural Inf. Process. Syst. 2023, 36, 5320–5347. [Google Scholar] [CrossRef]
- Nagle, T.; Redman, T.C.; Sammon, D. Only 3% of companies’ data meets basic quality standards. Harv. Bus. Rev. 2017, 95, 2–5. [Google Scholar]
- Narayanan, A.; Kapoor, S. Why an overreliance on AI-driven modelling is bad for science. Nature 2025, 640, 312–314. [Google Scholar] [CrossRef]
- Slota, S.C.; Fleischmann, K.R.; Greenberg, S.; Verma, N.; Cummings, B.; Li, L.; Shenefiel, C. Good systems, bad data?: Interpretations of AI hype and failures. Proc. Assoc. Inf. Sci. Technol. 2020, 57, e275. [Google Scholar] [CrossRef]
- Wagstaff, K. Machine learning that matters. arXiv 2012, arXiv:1206.4656. [Google Scholar] [CrossRef]
- Batty, M. Planning data. Environ. Plan. B Urban Anal. City Sci. 2022, 49, 1588–1592. [Google Scholar] [CrossRef]
- Jarrahi, M.H.; Memariani, A.; Guha, S. The principles of data-centric AI (DCAI). arXiv 2022, arXiv:2211.14611. [Google Scholar] [CrossRef]
- Whang, S.E.; Roh, Y.; Song, H.; Lee, J.G. Data collection and quality challenges in deep learning: A data-centric AI perspective. VLDB J. 2023, 32, 791–813. [Google Scholar] [CrossRef]
- Camilleri, M.A. Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Syst. 2024, 41, e13406. [Google Scholar] [CrossRef]
- Radanliev, P. AI ethics: Integrating transparency, fairness, and privacy in AI development. Appl. Artif. Intell. 2025, 39, 2463722. [Google Scholar] [CrossRef]
- Sartori, L.; Theodorou, A. A sociotechnical perspective for the future of AI: Narratives, inequalities, and human control. Ethics Inf. Technol. 2022, 24, 4. [Google Scholar] [CrossRef]
- Paullada, A.; Raji, I.D.; Bender, E.M.; Denton, E.; Hanna, A. Data and its (dis) contents: A survey of dataset development and use in machine learning research. Patterns 2021, 2, 100336. [Google Scholar] [CrossRef]
- Zha, D.; Bhat, Z.P.; Lai, K.H.; Yang, F.; Jiang, Z.; Zhong, S.; Hu, X. Data-centric artificial intelligence: A survey. ACM Comput. Surv. 2025, 57, 1–42. [Google Scholar] [CrossRef]
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson Education: London, UK, 2020; Available online: http://lib.ysu.am/disciplines_bk/efdd4d1d4c2087fe1cbe03d9ced67f34.pdf (accessed on 15 December 2024).
- Domingos, P. The Master Algorithm: How the Quest for the Ultimate Learning Machine will Remake Our World; Basic Books: New York, NY, USA, 2015; Available online: https://www.redalyc.org/pdf/6380/638067264018.pdf (accessed on 15 December 2024).
- Domingos, P. Machine learning for data management: Problems and solutions. In Proceedings of the 2018 International Conference on Management of Data, Houston, TX, USA, 10–15 June 2018; p. 629. Available online: https://doi.org/10.1145/3183713.3199515 (accessed on 27 January 2025).
- Holzinger, A. Introduction to machine learning & knowledge extraction (MAKE). Mach. Learn. Knowl. Extr. 2019, 1, 1–20. [Google Scholar] [CrossRef]
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Liang, P. On the opportunities and risks of foundation models. arXiv 2021, arXiv:2108.07258. [Google Scholar] [CrossRef]
- Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277–304. [Google Scholar] [CrossRef]
- Hagos, D.H.; Battle, R.; Rawat, D.B. Recent advances in generative AI and large language models: Current status, challenges, and perspectives. IEEE Trans. Artif. Intell. 2024, 5, 5873–5893. [Google Scholar] [CrossRef]
- Schwartz, D.; Te’eni, D. AI for knowledge creation, curation, and consumption in context. J. Assoc. Inf. Syst. 2024, 25, 37–47. [Google Scholar] [CrossRef]
- Sinha, S.; Lee, Y.M. Challenges with developing and deploying AI models and applications in industrial systems. Discov. Artif. Intell. 2024, 4, 55. [Google Scholar] [CrossRef]
- Kirchner, K.; Bolisani, E.; Kassaneh, T.C.; Scarso, E.; Taraghi, N. Generative AI Meets Knowledge Management: Insights From Software Development Practices. Knowl. Process Manag. 2025, 1–13. [Google Scholar] [CrossRef]
- Peretz, O.; Nakash, M. From Junior to Senior: Skill Requirements for AI Professionals Across Career Stages. In Proceedings of the International Conference on Research in Business, Management and Finance, Rome, Italy, 5–7 December 2025; Volume 2, pp. 9–10. [Google Scholar] [CrossRef]
- McKinsey. The State of AI in 2023: Generative AI’s Breakout Year. 2023. Available online: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year (accessed on 22 May 2025).
- Deloitte. 2024 Year-End Generative AI Report. 2024. Available online: https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-generative-ai-in-enterprise.html (accessed on 22 May 2025).
- Kurup, S.; Gupta, V. Factors influencing the AI adoption in organizations. Metamorphosis 2022, 21, 129–139. [Google Scholar] [CrossRef]
- McElheran, K.; Li, J.F.; Brynjolfsson, E.; Kroff, Z.; Dinlersoz, E.; Foster, L.; Zolas, N. AI adoption in America: Who, what, and where. J. Econ. Manag. Strategy 2024, 33, 375–415. [Google Scholar] [CrossRef]
- Sadiq, R.B.; Safie, N.; Abd Rahman, A.H.; Goudarzi, S. Artificial intelligence maturity model: A systematic literature review. PeerJ Comput. Sci. 2021, 7, e661. [Google Scholar] [CrossRef]
- Romeo, E.; Lacko, J. Adoption and integration of AI in organizations: A systematic review of challenges and drivers towards future directions of research. Kybernetes 2025, 1–22. [Google Scholar] [CrossRef]
- Wu, T.J.; Liang, Y.; Wang, Y. The buffering role of workplace mindfulness: How job insecurity of human-artificial intelligence collaboration impacts employees’ work–life-related outcomes. J. Bus. Psychol. 2024, 39, 1395–1411. [Google Scholar] [CrossRef]
- Araujo, T.; Helberger, N.; Kruikemeier, S.; De Vreese, C.H. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 2020, 35, 611–623. [Google Scholar] [CrossRef]
- Pflanzer, M.; Dubljević, V.; Bauer, W.A.; Orcutt, D.; List, G.; Singh, M.P. Embedding AI in society: Ethics, policy, governance, and impacts. AI Soc. 2023, 38, 1267–1271. [Google Scholar] [CrossRef]
- Cabrera, B.M.; Luiz, L.E.; Teixeira, J.P. The Artificial Intelligence Act: Insights regarding its application and implications. Procedia Comput. Sci. 2025, 256, 230–237. [Google Scholar] [CrossRef]
- Finocchiaro, G. The regulation of artificial intelligence. AI Soc. 2024, 39, 1961–1968. [Google Scholar] [CrossRef]
- Redman, T.C. If your data is bad, your machine learning tools are useless. Harv. Bus. Rev. 2018, 2. Available online: https://hbr.org/2018/04/if-your-data-is-bad-your-machine-learning-tools-are-useless (accessed on 18 November 2024).
- Dodgson, J.E. About research: Qualitative methodologies. J. Hum. Lact. 2017, 33, 355–358. [Google Scholar] [CrossRef]
- Douglas, H. Sampling techniques for qualitative research. In Principles of Social Research Methodology; Islam, M.R., Khan, N.A., Baikady, R., Eds.; Springer: Singapore, 2022; pp. 415–426. [Google Scholar] [CrossRef]
- Mohajan, H.K. Qualitative research methodology in social sciences and related subjects. J. Econ. Dev. Environ. People 2018, 7, 23–48. [Google Scholar] [CrossRef]
- Gummesson, E. Qualitative Methods in Management Research; Sage: London, UK, 2000; Available online: https://www.researchgate.net/publication/215915855_Qualitative_Research_Methods_in_Management_Research (accessed on 19 November 2024).
- Creswell, J.W.; Poth, C.N. Qualitative Inquiry and Research Design: Choosing Among Five Approaches; Sage Publications: London, UK, 2016; ISBN 978-1-5063-3020-4. [Google Scholar]
- Pathak, V.; Jena, B.; Kalra, S. Qualitative research. Perspect. Clin. Res. 2013, 4, 192. [Google Scholar] [CrossRef] [PubMed]
- Scanlan, C.L. Preparing for the Unanticipated: Challenges in Conducting Semi-Structured, In-Depth Interviews; Sage Publications Limited: London, UK, 2020; pp. 67–80. [Google Scholar] [CrossRef]
- Ahmad, M.; Wilkins, S. Purposive sampling in qualitative research: A framework for the entire journey. Qual. Quant. 2024, 59, 1–19. [Google Scholar] [CrossRef]
- Guest, G.; Bunce, A.; Johnson, L. How many interviews are enough? An experiment with data saturation and variability. Field Methods 2006, 18, 59–82. [Google Scholar] [CrossRef]
- Hagaman, A.K.; Wutich, A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods 2017, 29, 23–41. [Google Scholar] [CrossRef]
- Marshall, B.; Cardon, P.; Poddar, A.; Fontenot, R. Does sample size matter in qualitative research?: A review of qualitative interviews in IS research. J. Comput. Inf. Syst. 2013, 54, 11–22. [Google Scholar] [CrossRef]
- Boddy, C.R. Sample size for qualitative research. Qual. Mark. Res. Int. J. 2016, 19, 426–432. [Google Scholar] [CrossRef]
- Bouncken, R.B.; Czakon, W.; Schmitt, F. Purposeful sampling and saturation in qualitative research methodologies: Recommendations and review. Rev. Manag. Sci. 2025, 1–37. [Google Scholar] [CrossRef]
- Dworkin, S.L. Sample size policy for qualitative studies using in-depth interviews. Arch. Sex. Behav. 2012, 41, 1319–1320. [Google Scholar] [CrossRef] [PubMed]
- Braun, V.; Clarke, V. Thematic analysis. In Encyclopedia of Quality of Life and Well-Being Research; Springer International Publishing: Cham, Switzerland, 2024; pp. 7187–7193. [Google Scholar] [CrossRef]
- Guest, G.; MacQueen, K.M.; Namey, E.E. Applied Thematic Analysis; Sage Publications: London, UK, 2011; Available online: https://antle.iat.sfu.ca/wp-content/uploads/Guest_2012_AppliedThematicAnlaysis_Ch1.pdf (accessed on 25 July 2024).
- O’Connor, C.; Joffe, H. Intercoder reliability in qualitative research: Debates and practical guidelines. Int. J. Qual. Methods 2020, 19, 1–13. [Google Scholar] [CrossRef]
- D’Ignazio, C.; Klein, L.F. Data Feminism; MIT Press: Cambridge, MA, USA, 2023; ISBN 9780262547185. [Google Scholar]
- Brandao, P.R. The Impact of Artificial Intelligence on Modern Society. AI 2025, 6, 190. [Google Scholar] [CrossRef]


| Domain (Section) | Key Findings in Prior Work | Unique Contributions | Remaining Gaps |
|---|---|---|---|
| Evolving Capabilities of AI in a Data-Driven World (3.1) | Shift from symbolic to data-driven and generative models; LLMs and GenAI expand AI’s scope and complexity [3,4,22,23,24,25,26,27,28,29,30,31,32] | Mapping paradigm shifts; highlighting technical advances and new application domains | Limited empirical evidence on how these advances reshape data work and challenges in practice |
| AI Adoption in Organizational Contexts (3.2) | Widespread but uneven AI adoption; barriers include data infrastructure, skills, and cultural resistance [3,4,32,33,34,35,36,37,38,39,40,41] | Large-scale surveys; maturity models; identification of organizational barriers | Lack of in-depth, cross-sectoral analysis of how data work is managed and aligned with business goals |
| Critical Risks at the Intersection of AI and Data (3.3) | Risks include bias, lack of transparency, privacy, and governance; regulatory responses emerging [2,6,7,9,13,18,19,20,30,32,42,43,44,45] | Identification of ethical, technical, and societal risks; mapping regulatory frameworks | Few studies examine real-world strategies for mitigating data-related risks in organizational settings |
| Theoretical Lenses on the Role of Data in AI Systems (3.4) | Shift from model-centric AI to DCAI; data as socio-technical infrastructure [2,5,7,8,9,14,18,23,46] | Theoretical reframing of data’s role; emphasis on annotation, context, and social factors | Scarcity of empirical research on how data-centric approaches are implemented and experienced by practitioners |
| Stage | Role(s) Involved | Data-Centric Focus and Description |
|---|---|---|
| Data Collection | Data Engineer | Initiating the lifecycle, this stage involves sourcing, aggregating, and validating raw data. The quality, representativeness, and accessibility of data at this point fundamentally shape all downstream AI processes. |
| Data Preparation | Data Scientist | Data is cleaned, transformed, and structured to ensure usability. Feature selection—identifying the most relevant variables—is a critical data-driven task that directly impacts model performance. |
| Model Development | ML Engineer | While focused on algorithmic design, this stage remains data-dependent, as model training, tuning, and validation rely entirely on the quality and structure of the input data. |
| Deployment | DevOps Engineer | Although technical in nature, deployment requires careful handling of data pipelines to ensure that real-time or batch data flows into the model as intended. |
| Monitoring & Maintenance | MLOps Engineer | Ongoing evaluation of model performance is driven by continuous data input. Monitoring for data drift, anomalies, or shifts in distribution is essential to maintain reliability. |
| Explainability & Interpretation | Data Scientist/ML Engineer | Interpreting model outputs requires understanding how data influenced decisions. Explainability tools often rely on data-centric techniques to trace and justify predictions. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).