Pharmacometrics in the Age of Large Language Models: A Vision of the Future
Abstract
1. Introduction
2. Background on LLMs
2.1. Families of LLMs
- Encoder-only models process input sequences bidirectionally, capturing contextual information from both left and right of each token. They are well-suited for classification, sentence similarity, and named entity recognition, but not for text generation, as they do not generate tokens autoregressively.
- Decoder-only models are autoregressive: they are trained to predict the next token in a sequence given the previous tokens. This unidirectional approach makes them ideal for generative tasks such as text generation, dialog modeling, code completion, and open-ended question answering. These models underlie most of the current generative LLMs.
- Encoder–Decoder models integrate both encoder and decoder blocks. This architecture allows the encoder to process the input text and pass contextualized representations to the decoder, which then generates the output sequence. Such models are particularly effective for machine translation, summarization, and structured question answering. (A minimal usage sketch contrasting the encoder-only and decoder-only cases is given after Table 1.)
- The most widely known encoder-only LLM family is BERT (Bidirectional Encoder Representations from Transformers), originally developed by Google in 2018 [30] to advance natural language understanding tasks. BERT is particularly effective for tasks that aim to analyze and interpret text, such as information retrieval and text classification. Since the release of the original BERT model, its architecture has inspired the development of several derivative models by both Google and other research groups and companies (see Table 1).
- Among the decoder-only model series, one of the most influential is GPT (Generative Pre-trained Transformer), introduced by OpenAI in 2018 [31]. The GPT model, primarily designed for text generation, has subsequently evolved through several iterations (GPT-2 [32], GPT-3 [33], GPT-3.5 and GPT-4 [34]) that have substantially increased both model complexity and performance. In parallel, OpenAI also introduced InstructGPT [35], a fine-tuned version of GPT-3 optimized using reinforcement learning from human feedback (RLHF) [36]. InstructGPT was trained to follow user instructions more accurately and safely, forming the basis of ChatGPT [37], the popular conversational interface built on GPT models.
Family | Model | Developer | Year of Release | Number of Parameters | Pre-Training Corpora | Architecture
---|---|---|---|---|---|---
BERT | BERT (Base/Large) [30] | Google | 2018 | 110M/340M | BookCorpus, Wikipedia | Encoder-only
BERT | DistilBERT [44] | Hugging Face | 2019 | 66M | BookCorpus, Wikipedia | Encoder-only
BERT | RoBERTa (Base/Large) [45] | FAIR | 2019 | 125M/355M | BookCorpus, CC-News, OpenWebText, Stories | Encoder-only
BERT | ALBERT (Base/Large) [46] | Google | 2019 | 12M/18M | BookCorpus, Wikipedia | Encoder-only
BERT | ModernBERT (Base/Large) [47] | Hugging Face | 2024 | 149M/395M | Undisclosed (2 trillion tokens from web documents, code, scientific articles, etc.) | Encoder-only
BERT | NeoBERT [48] | ByteDance AI Lab | 2025 | 250M | RefinedWeb | Encoder-only
GPT | GPT-1 [31] | OpenAI | 2018 | 117M | BookCorpus | Decoder-only
GPT | GPT-2 [32] | OpenAI | 2019 | 1.5B | BookCorpus, WebText | Decoder-only
GPT | GPT-3 [33] | OpenAI | 2020 | 175B | CommonCrawl, WebText, Wikipedia, Books1, Books2 | Decoder-only
GPT | GPT-3.5 | OpenAI | 2022 | 175B | Undisclosed | Decoder-only
GPT | GPT-4 [34] | OpenAI | 2023 | Undisclosed | Undisclosed | Decoder-only
GPT | GPT-4.5 | OpenAI | 2025 | Undisclosed | Undisclosed | Decoder-only
BART | BART (Base/Large) [42] | FAIR | 2019 | 140M/400M | BookCorpus, CC-News, OpenWebText, Stories | Encoder–Decoder
BART | mBART [49] | FAIR | 2020 | 610M | Common Crawl 25-language subset (CC25) | Encoder–Decoder
T5 | T5 [43] | Google | 2020 | 60M–11B | Colossal Clean Crawled Corpus (C4) | Encoder–Decoder
T5 | mT5 [50] | Google | 2021 | 13B | Multilingual Colossal Clean Crawled Corpus (101 languages) (mC4) | Encoder–Decoder
T5 | UL2 [51] | Google | 2022 | 20B | Colossal Clean Crawled Corpus (C4), other datasets | Encoder–Decoder
LLaMA | LLaMA-1 [38] | Meta AI | 2023 | 6.7B/13B/32.5B/65.2B | Common Crawl, C4, GitHub, Gutenberg, Books3, Wikipedia, ArXiv, Stack Exchange | Decoder-only
LLaMA | LLaMA-2 [41] | Meta AI | 2023 | 7B/13B/34B/70B | 2T tokens of curated data | Decoder-only
LLaMA | LLaMA-3 | Meta AI | 2024 | 8B/70B | 15T tokens; curated high-quality web, academic, code and multilingual corpora | Decoder-only
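To make the architectural distinction above concrete, the following minimal sketch contrasts an encoder-only model (BERT), used here to produce a sentence representation, with a decoder-only model (GPT-2), used for autoregressive generation. It relies on the open-source Hugging Face transformers library and the public "bert-base-uncased" and "gpt2" checkpoints; the example sentence and prompt are invented for illustration and are not taken from the cited works.

```python
# Illustrative sketch only: encoder-only vs. decoder-only usage with Hugging Face
# transformers. The example sentence and prompt below are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

text = "Clearance decreased with increasing patient age."

# Encoder-only (BERT): bidirectional contextual representations, no text generation.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    hidden = bert(**bert_tok(text, return_tensors="pt")).last_hidden_state
sentence_embedding = hidden.mean(dim=1)  # usable for classification or similarity search
print(sentence_embedding.shape)          # torch.Size([1, 768])

# Decoder-only (GPT-2): next-token prediction, suitable for open-ended generation.
gpt_tok = AutoTokenizer.from_pretrained("gpt2")
gpt = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = gpt_tok("Population pharmacokinetic models describe", return_tensors="pt")
with torch.no_grad():
    generated = gpt.generate(**inputs, max_new_tokens=20, do_sample=False)
print(gpt_tok.decode(generated[0], skip_special_tokens=True))
```

Encoder–decoder models such as T5 or BART combine both steps; the same library exposes them through the AutoModelForSeq2SeqLM class, with the encoder consuming the input text and the decoder generating the output sequence.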
2.2. Emergent Abilities of LLMs
- In-context learning: the ability of an LLM to perform a new task by conditioning on information provided in the prompt at inference time, without updating model parameters or any additional re-training [33].
- Few-shot or zero-shot learning: the capacity of the model to generalize to unseen tasks either without any examples (zero-shot) or with only a few illustrative examples (few-shot) provided within the prompt [33].
- Chain-of-thought reasoning: the ability to generate intermediate reasoning steps that lead to a final answer, improving performance on complex tasks that require multi-step logical inference, mathematical reasoning, or structured problem-solving [52] (illustrative prompt templates for these three abilities are sketched below).
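A minimal illustration of how these abilities are exploited in practice is given below: all three patterns are realized purely through prompt construction at inference time, with no parameter updates. The classification task, example sentences, and pharmacokinetic values are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: zero-shot, few-shot and chain-of-thought prompts.
# The task, sentences and PK values are hypothetical.

task = "Classify whether the sentence reports a pharmacokinetic parameter."

# Zero-shot: the instruction alone, no examples.
zero_shot = f"{task}\nSentence: 'The terminal half-life was 12 h.'\nAnswer:"

# Few-shot (in-context learning): a handful of labelled examples inside the prompt.
few_shot = (
    f"{task}\n"
    "Sentence: 'Clearance was estimated at 5.2 L/h.' Answer: yes\n"
    "Sentence: 'Patients were enrolled at 12 sites.' Answer: no\n"
    "Sentence: 'The terminal half-life was 12 h.' Answer:"
)

# Chain-of-thought: explicitly request intermediate reasoning steps.
chain_of_thought = (
    "A drug follows one-compartment kinetics with CL = 4 L/h and V = 40 L.\n"
    "What is the elimination half-life? Let's reason step by step:\n"
    "1) ke = CL / V, 2) t1/2 = ln(2) / ke, then give the numeric answer."
)

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} prompt ---\n{prompt}\n")
```

The same prompt strings can be sent unchanged to any chat-style LLM; only the wording of the prompt, not the model weights, differs across the three patterns.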
2.3. Classification of LLMs: General-Purpose, Purpose-Built, and Specialized Models
- General-Purpose LLMs: These models are pre-trained on broad, diverse corpora including internet-scale text, code, news, encyclopedias, and books. Their goal is to acquire general linguistic and reasoning skills applicable across domains. They are not optimized for any specific task or field but exhibit strong performance across a wide range of NLP applications. For example, previously introduced GPT model series (OpenAI), Claude (Anthropic), and Gemini (Google DeepMind) are prominent general-purpose LLMs.
- Purpose-Built LLMs: These LLMs are trained from scratch exclusively or predominantly on domain-specific data (e.g., biomedical literature or clinical text). They are optimized from the start to understand the language, terminology, and context of a specific field. For example, BioGPT [53] and BioMedLM [54] are LLMs based on the GPT-2 architecture that were trained from scratch on a corpus of biomedical literature from PubMed, enabling them to generate content and answer questions with higher relevance to biomedical research.
- Specialized or Custom LLMs: These are general-purpose LLMs that are subsequently fine-tuned with domain-specific data to improve performance in a targeted application area. Fine-tuning involves retraining the base model using curated datasets relevant to a specific task or domain (a minimal fine-tuning sketch follows this list). A well-known example is Codex, a derivative of GPT-3 fine-tuned on a vast corpus of programming code, enabling state-of-the-art performance in code generation, debugging, and language-to-code translation tasks [55]. Similarly, Med-PaLM [56], built on the PaLM architecture, was fine-tuned on medical question-answer datasets to improve performance on medical reasoning and diagnosis tasks. Examples from other domains include LegalBERT [57], a fine-tuned variant of BERT adapted for legal documents, and FinGPT [58], a model fine-tuned for financial analysis and reporting.
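The sketch below outlines one common way to obtain such a specialized model: continued training (fine-tuning) of a general-purpose causal language model on a domain corpus using the Hugging Face Trainer API. The small "gpt2" checkpoint is used only as a stand-in base model, and pm_corpus.txt is a hypothetical plain-text file of pharmacometrics abstracts; neither the data file nor the hyperparameters come from the cited works.

```python
# Illustrative fine-tuning sketch: specialize a general-purpose base model on a
# (hypothetical) domain corpus. "gpt2" is a stand-in base model; pm_corpus.txt
# is a hypothetical one-document-per-line text file of pharmacometrics abstracts.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load and tokenize the domain corpus.
raw = load_dataset("text", data_files={"train": "pm_corpus.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-pmx", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt2-pmx")   # the resulting checkpoint is the "specialized" model
```

Purpose-built models such as BioGPT follow essentially the same recipe but start training from randomly initialized weights on the domain corpus rather than from a general-purpose checkpoint.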
3. Current Applications of LLMs in Pharmacometrics
4. What Can LLMs Do for Pharmacometricians?
4.1. Information Retrieval and Knowledge Synthesis
4.2. Data Collection and Formatting
4.3. Code Generation and Debugging
4.4. PK/PD Model Building and Covariate Selection
4.5. Reshaping PBPK and QSP Modeling
4.6. Report Writing and Documentation
4.7. Knowledge Dissemination and Pharmacometrics Education
5. LLMs as Predictive Tools: Toward Pharmacometrics Model Replacement?
6. LLMs from Assistant to Collaborative Reasoning Partners: A Potential Revolution
7. Discussions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
ADMET | Absorption, Distribution, Metabolism, Elimination and Toxicity |
AI | Artificial Intelligence |
BART | Bidirectional Auto-Regressive Transformers |
BERT | Bidirectional Encoder Representations from Transformers |
EHR | Electronic Health Record |
GPT | Generative pre-trained transformer |
LLaMA | Large Language Model Meta AI |
LLM | Large Language Model |
MBMA | Model-based meta-analysis |
MIDD | Model-informed Drug Development |
M&S | Modeling and Simulation |
NLME | Non-linear Mixed Effect |
NLP | Natural Language Processing |
PBPK | Physiologically based pharmacokinetic |
PK | Pharmacokinetics |
PD | Pharmacodynamics |
QSP | Quantitative Systems Pharmacology |
RL | Reinforcement Learning |
RLHF | Reinforcement Learning from Human Feedback |
RWD | Real World Data |
T5 | Text-to-Text Transfer Transformer |
TTE | Time-to-Event |
References
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems, Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30, Available online: https://proceedings.neurips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html (accessed on 1 April 2025).
- Thirunavukarasu, A.J.; Ting, D.S.J.; Elangovan, K.; Gutierrez, L.; Tan, T.F.; Ting, D.S.W. Large language models in medicine. Nat. Med. 2023, 29, 1930–1940. [Google Scholar] [CrossRef]
- De Paoli, F.; Berardelli, S.; Limongelli, I.; Rizzo, E.; Zucca, S. VarChat: The generative AI assistant for the interpretation of human genomic variations. Bioinformatics 2024, 40, btae183. [Google Scholar] [CrossRef]
- Zheng, Y.; Koh, H.Y.; Yang, M.; Li, L.; May, L.T.; Webb, G.I.; Pan, S.; Church, G. Large Language Models in Drug Discovery and Development: From Disease Mechanisms to Clinical Trials. arXiv 2024, arXiv:2409.04481. [Google Scholar] [CrossRef]
- Othman, Z.K.; Ahmed, M.M.; Okesanya, O.J.; Ibrahim, A.M.; Musa, S.S.; Hassan, B.A.; Saeed, L.I.; Lucero-Prisno, D.E. Advancing drug discovery and development through GPT models: A review on challenges, innovations and future prospects. Intell.-Based Med. 2025, 11, 100233. [Google Scholar] [CrossRef]
- Liu, X.; Lu, Z.; Wang, T.; Liu, F. Large language models facilitating modern molecular biology and novel drug development. Front. Pharmacol. 2024, 15, 1458739. Available online: https://www.frontiersin.org/journals/pharmacology/articles/10.3389/fphar.2024.1458739/full (accessed on 2 April 2025). [CrossRef] [PubMed]
- Lu, J.; Choi, K.; Eremeev, M.; Gobburu, J.; Goswami, S.; Liu, Q.; Mo, G.; Musante, C.J.; Shahin, M.H. Large Language Models and Their Applications in Drug Discovery and Development: A Primer. Clin. Transl. Sci. 2025, 18, e70205. [Google Scholar] [CrossRef] [PubMed]
- Liu, Z.; Roberts, R.A.; Lal-Nag, M.; Chen, X.; Huang, R.; Tong, W. AI-based language models powering drug discovery and development. Drug Discov. Today 2021, 26, 2593–2607. [Google Scholar] [CrossRef]
- Anderson, W.; Braun, I.; Bhatnagar, R.; Romero, K.; Walls, R.; Schito, M.; Podichetty, J.T. Unlocking the Capabilities of Large Language Models for Accelerating Drug Development. Clin. Pharmacol. Ther. 2024, 116, 38–41. [Google Scholar] [CrossRef]
- Cloesmeijer, M.E.; Janssen, A.; Koopman, S.F.; Cnossen, M.H.; Mathôt, R.A.A.; SYMPHONY consortium. ChatGPT in pharmacometrics? Potential opportunities and limitations. Br. J. Clin. Pharmacol. 2024, 90, 360–365. [Google Scholar] [CrossRef]
- Shin, E.; Ramanathan, M. Evaluation of prompt engineering strategies for pharmacokinetic data analysis with the ChatGPT large language model. J. Pharmacokinet. Pharmacodyn. 2024, 51, 101–108. [Google Scholar] [CrossRef]
- Shin, E.; Yu, Y.; Bies, R.R.; Ramanathan, M. Evaluation of ChatGPT and Gemini large language models for pharmacometrics with NONMEM. J. Pharmacokinet. Pharmacodyn. 2024, 51, 187–197. [Google Scholar] [CrossRef]
- Herrero, S.S.; Calvet, L. Generative Artificial Intelligence Models in Pharmacokinetics: A Study on a Two-Compartment Population Model. 2024. Available online: https://www.researchsquare.com/article/rs-4693613/v1 (accessed on 1 April 2025).
- Holt, S.; Qian, Z.; Liu, T.; Weatherall, J.; van der Schaar, M. Data-Driven Discovery of Dynamical Systems in Pharmacology using Large Language Models. In Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 10–15 December 2024; Available online: https://openreview.net/forum?id=KIrZmlTA92 (accessed on 10 April 2025).
- Marshall, S.; Madabushi, R.; Manolis, E.; Krudys, K.; Staab, A.; Dykstra, K.; Visser, S.A.G. Model-Informed Drug Discovery and Development: Current Industry Good Practice and Regulatory Expectations and Future Perspectives. CPT Pharmacomet. Syst. Pharmacol. 2019, 8, 87–96. [Google Scholar] [CrossRef]
- Madabushi, R.; Seo, P.; Zhao, L.; Tegenge, M.; Zhu, H. Review: Role of Model-Informed Drug Development Approaches in the Lifecycle of Drug Development and Regulatory Decision-Making. Pharm. Res. 2022, 39, 1669–1680. [Google Scholar] [CrossRef]
- EFPIA MID3 Workgroup; Marshall, S.F.; Burghaus, R.; Cosson, V.; Cheung, S.Y.A.; Chenel, M.; DellaPasqua, O.; Frey, N.; Hamrén, B.; Harnisch, L.; et al. Good Practices in Model-Informed Drug Discovery and Development: Practice, Application, and Documentation. CPT Pharmacomet. Syst. Pharmacol. 2016, 5, 93–122. [Google Scholar]
- Tosca, E.M.; Terranova, N.; Stuyckens, K.; Dosne, A.G.; Perera, T.; Vialard, J.; King, P.; Verhulst, T.; Perez-Ruixo, J.J.; Magni, P.; et al. A translational model-based approach to inform the choice of the dose in phase 1 oncology trials: The case study of erdafitinib. Cancer Chemother. Pharmacol. 2021, 89, 117–128. [Google Scholar] [CrossRef] [PubMed]
- Tosca, E.M.; Borrella, E.; Piana, C.; Bouchene, S.; Merlino, G.; Fiascarelli, A.; Mazzei, P.; Magni, P. Model-based prediction of effective target exposure for MEN1611 in combination with trastuzumab in HER2-positive advanced or metastatic breast cancer patients. CPT Pharmacometrics Syst. Pharmacol. 2023, 12, 1626–1639. [Google Scholar] [CrossRef]
- Tosca, E.M.; Bartolucci, R.; Magni, P.; Poggesi, I. Modeling approaches for reducing safety-related attrition in drug discovery and development: A review on myelotoxicity, immunotoxicity, cardiovascular toxicity, and liver toxicity. Expert. Opin. Drug Discov. 2021, 16, 1365–1390. [Google Scholar] [CrossRef]
- Tosca, E.M.; Carlo, A.D.; Bartolucci, R.; Fiorentini, F.; Tollo, S.D.; Caserini, M.; Rocchetti, M.; Bettica, P.; Magni, P. In silico trial for the assessment of givinostat dose adjustment rules based on the management of key hematological parameters in polycythemia vera patients. CPT Pharmacomet. Syst. Pharmacol. 2024, 13, 359–373. [Google Scholar] [CrossRef]
- Karlsen, M.; Khier, S.; Fabre, D.; Marchionni, D.; Azé, J.; Bringay, S.; Poncelet, P.; Calvier, E. Covariate Model Selection Approaches for Population Pharmacokinetics: A Systematic Review of Existing Methods, From SCM to AI. CPT Pharmacomet. Syst. Pharmacol. 2025, 14, 621–639. [Google Scholar] [CrossRef]
- Ronchi, D.; Tosca, E.M.; Bartolucci, R.; Magni, P. Go beyond the limits of genetic algorithm in daily covariate selection practice. J. Pharmacokinet. Pharmacodyn. 2023, 51, 109–121. [Google Scholar] [CrossRef]
- McComb, M.; Bies, R.; Ramanathan, M. Machine learning in pharmacometrics: Opportunities and challenges. Br. J. Clin. Pharmacol. 2022, 88, 1482–1499. [Google Scholar] [CrossRef]
- Janssen, A.; Bennis, F.C.; Mathôt, R.A.A. Adoption of Machine Learning in Pharmacometrics: An Overview of Recent Implementations and Their Considerations. Pharmaceutics 2022, 14, 1814. [Google Scholar] [CrossRef] [PubMed]
- Tosca, E.M.; De Carlo, A.; Ronchi, D.; Magni, P. Model-Informed Reinforcement Learning for Enabling Precision Dosing Via Adaptive Dosing. Clin. Pharmacol. Ther. 2024, 116, 619–636. [Google Scholar] [CrossRef]
- Sahoo, P.; Singh, A.K.; Saha, S.; Jain, V.; Mondal, S.; Chadha, A. A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications. arXiv 2025, arXiv:2402.07927. [Google Scholar]
- Marvin, G.; Hellen, N.; Jjingo, D.; Nakatumba-Nabende, J. Prompt Engineering in Large Language Models. In Proceedings of the Data Intelligence and Cognitive Informatics, Tirunelveli, India, 27–28 June 2023; Jacob, I.J., Piramuthu, S., Falkowski-Gilski, P., Eds.; Springer Nature: Singapore, 2024; pp. 387–402. [Google Scholar]
- Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. Emergent Abilities of Large Language Models. arXiv 2022, arXiv:2206.07682. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; Burstein, J., Doran, C., Solorio, T., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; Volume 1, (Long and Short Papers). pp. 4171–4186. Available online: https://aclanthology.org/N19-1423/ (accessed on 7 April 2025).
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. 2018. Available online: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf (accessed on 10 April 2025).
- Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models are Unsupervised Multitask Learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
- Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. arXiv 2020, arXiv:2005.14165. [Google Scholar] [CrossRef]
- OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2024, arXiv:2303.08774. [Google Scholar]
- Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.L.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training language models to follow instructions with human feedback. arXiv 2022, arXiv:2203.02155. [Google Scholar] [CrossRef]
- Christiano, P.; Leike, J.; Brown, T.B.; Martic, M.; Legg, S.; Amodei, D. Deep reinforcement learning from human preferences. arXiv 2023, arXiv:1706.03741. [Google Scholar] [CrossRef]
- Introducing ChatGPT. Available online: https://openai.com/index/chatgpt/ (accessed on 7 April 2025).
- Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. LLaMA: Open and Efficient Foundation Language Models. arXiv 2023, arXiv:2302.13971. [Google Scholar] [CrossRef]
- Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E.P.; et al. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. arXiv 2023, arXiv:2306.05685. [Google Scholar]
- Cui, Y.; Yang, Z.; Yao, X. Efficient and Effective Text Encoding for Chinese LLaMA and Alpaca. arXiv 2024, arXiv:2304.08177. Available online: http://arxiv.org/abs/2304.08177 (accessed on 7 April 2025).
- Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv 2023, arXiv:2307.09288. [Google Scholar] [CrossRef]
- Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; Available online: http://arxiv.org/abs/1910.13461 (accessed on 7 April 2025).
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 140. [Google Scholar]
- Sanh, V.; Debut, L.; Chaumond, J.; Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv 2020, arXiv:1910.01108. Available online: http://arxiv.org/abs/1910.01108 (accessed on 7 April 2025).
- Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv 2019, arXiv:1907.11692. [Google Scholar]
- Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; Soricut, R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv 2020, arXiv:1909.11942. [Google Scholar] [CrossRef]
- Warner, B.; Chaffin, A.; Clavié, B.; Weller, O.; Hallström, O.; Taghadouini, S.; Gallagher, A.; Biswas, R.; Ladhak, F.; Aarsen, T.; et al. Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference. arXiv 2024, arXiv:2412.13663. [Google Scholar] [CrossRef]
- Breton, L.L.; Fournier, Q.; Morris, J.X.; Mezouar, M.E.; Chandar, S. NeoBERT: A Next Generation BERT. Transactions on Machine Learning Research. 2025. Available online: https://openreview.net/forum?id=TJRyDi7mwH (accessed on 25 July 2025).
- Liu, Y.; Gu, J.; Goyal, N.; Li, X.; Edunov, S.; Ghazvininejad, M.; Lewis, M.; Zettlemoyer, L. Multilingual Denoising Pre-training for Neural Machine Translation. arXiv 2020, arXiv:2001.08210. [Google Scholar] [CrossRef]
- Xue, L.; Constant, N.; Roberts, A.; Kale, M.; Al-Rfou, R.; Siddhant, A.; Barua, A.; Raffel, C. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., Zhou, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 483–498. Available online: https://aclanthology.org/2021.naacl-main.41/ (accessed on 7 April 2025).
- Tay, Y.; Dehghani, M.; Tran, V.Q.; Garcia, X.; Wei, J.; Wang, X.; Chung, H.W.; Shakeri, S.; Bahri, D.; Schuster, T.; et al. UL2: Unifying Language Learning Paradigms. arXiv 2023, arXiv:2205.05131. [Google Scholar] [CrossRef]
- Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q.; Zhou, D. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv 2023, arXiv:2201.11903. [Google Scholar]
- Luo, R.; Sun, L.; Xia, Y.; Qin, T.; Zhang, S.; Poon, H.; Liu, T.-Y. BioGPT: Generative pre-trained transformer for biomedical text generation and mining. Brief. Bioinform. 2022, 23, bbac409. [Google Scholar]
- Bolton, E.; Venigalla, A.; Yasunaga, M.; Hall, D.; Xiong, B.; Lee, T.; Daneshjou, R.; Frankle, J.; Liang, P.; Carbin, M.; et al. BioMedLM: A 2.7B Parameter Language Model Trained on Biomedical Text. arXiv 2024, arXiv:2403.18421. [Google Scholar]
- Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H.P.d.O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. Evaluating Large Language Models Trained on Code. arXiv 2021, arXiv:2107.03374. [Google Scholar] [CrossRef]
- Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S.S.; Wei, J.; Chung, H.W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. Large language models encode clinical knowledge. Nature 2023, 620, 172–180. [Google Scholar]
- Chalkidis, I.; Fergadiotis, M.; Malakasiotis, P.; Aletras, N.; Androutsopoulos, I. LEGAL-BERT: The Muppets straight out of Law School. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, Online, 16–20 November 2020; Cohn, T., He, Y., Liu, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 2898–2904. Available online: https://aclanthology.org/2020.findings-emnlp.261/ (accessed on 7 April 2025).
- Yang, H.; Liu, X.-Y.; Wang, C.D. FinGPT: Open-Source Financial Large Language Models. arXiv 2023, arXiv:2306.06031. [Google Scholar] [CrossRef]
- Cha, H.J.; Choe, K.; Shin, E.; Ramanathan, M.; Han, S. Leveraging large language models in pharmacometrics: Evaluation of NONMEM output interpretation and simulation capabilities. J. Pharmacokinet. Pharmacodyn. 2025, 52, 34. [Google Scholar] [CrossRef] [PubMed]
- Zheng, W.; Wang, W.; Kirkpatrick, C.M.J.; Landersdorfer, C.B.; Yao, H.; Zhou, J. AI for NONMEM Coding in Pharmacometrics Research and Education: Shortcut or Pitfall? arXiv 2025, arXiv:2507.08144. [Google Scholar] [CrossRef]
- Tang, L.; Sun, Z.; Idnay, B.; Nestor, J.G.; Soroush, A.; Elias, P.A.; Xu, Z.; Ding, Y.; Durrett, G.; Rousseau, J.F.; et al. Evaluating large language models on medical evidence summarization. npj Digit. Med. 2023, 6, 158. [Google Scholar] [CrossRef]
- Tian, S.; Jin, Q.; Yeganova, L.; Lai, P.-T.; Zhu, Q.; Chen, X.; Yang, Y.; Chen, Q.; Kim, W.; Comeau, D.C.; et al. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Brief. Bioinform. 2023, 25, bbad493. [Google Scholar] [CrossRef]
- Gao, Z.; Li, L.; Ma, S.; Wang, Q.; Hemphill, L.; Xu, R. Examining the Potential of ChatGPT on Biomedical Information Retrieval: Fact-Checking Drug-Disease Associations. Ann. Biomed. Eng. 2024, 52, 1919–1927. [Google Scholar] [CrossRef] [PubMed]
- Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B.; et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst. 2025, 43, 42. [Google Scholar] [CrossRef]
- Chelli, M.; Descamps, J.; Lavoué, V.; Trojani, C.; Azar, M.; Deckert, M.; Raynier, J.-L.; Clowez, G.; Boileau, P.; Ruetsch-Chelli, C. Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis. J. Med. Internet Res. 2024, 26, e53164. [Google Scholar] [CrossRef]
- Ge, W.; Hayes, S.; Yee, K.L.; Patel, B.; Bryman, G. Development and benchmarking of non-generative and generative natural language processing approaches for AI-assisted pharmacometric literature curation. In Proceedings of the PAGE 2024: Methodology—AI/Machine Learning, Rome, Italy, 25–28 June 2024; Abstract 10993. Available online: https://www.page-meeting.org/default.asp?abstract=10993 (accessed on 15 April 2025).
- Reason, T.; Benbow, E.; Langham, J.; Gimblett, A.; Klijn, S.L.; Malcolm, B. Artificial Intelligence to Automate Network Meta-Analyses: Four Case Studies to Evaluate the Potential Application of Large Language Models. PharmacoEconomics—Open 2024, 8, 205–220. [Google Scholar] [CrossRef]
- Liu, F.; Panagiotakos, D. Real-world data: A brief review of the methods, applications, challenges and opportunities. BMC Med. Res. Methodol. 2022, 22, 287. [Google Scholar] [CrossRef]
- Huang, J.; Yang, D.M.; Rong, R.; Nezafati, K.; Treager, C.; Chi, Z.; Wang, S.; Cheng, X.; Guo, Y.; Klesse, L.J.; et al. A critical assessment of using ChatGPT for extracting structured data from clinical notes. npj Digit. Med. 2024, 7, 106. [Google Scholar] [CrossRef]
- Rettenberger, L.; Münker, M.F.; Schutera, M.; Niemeyer, C.M.; Rabe, K.S.; Reischl, M. Using Large Language Models for Extracting Structured Information from Scientific Texts. Curr. Dir. Biomed. Eng. 2024, 10, 526–529. [Google Scholar] [CrossRef]
- Giner-Miguelez, J.; Gómez, A.; Cabot, J. Using Large Language Models to Enrich the Documentation of Datasets for Machine Learning. arXiv 2024, arXiv:2404.15320. [Google Scholar] [CrossRef]
- Wang, J.; Chen, Y. A Review on Code Generation with LLMs: Application and Evaluation. In Proceedings of the 2023 IEEE International Conference on Medical Artificial Intelligence (MedAI), Beijing, China, 18–19 November 2023; pp. 284–289. Available online: https://ieeexplore.ieee.org/abstract/document/10403378 (accessed on 1 April 2025).
- GitHub Copilot · Your AI Pair Programmer. Available online: https://github.com/features/copilot (accessed on 1 April 2025).
- Rostami-Hodjegan, A.; Toon, S. Physiologically Based Pharmacokinetics as a Component of Model-Informed Drug Development: Where We Were, Where We Are, and Where We Are Heading. J. Clin. Pharmacol. 2020, 60, S12–S16. [Google Scholar] [CrossRef]
- Zhu, A.Z.X.; Rogge, M. Applications of Quantitative System Pharmacology Modeling to Model-Informed Drug Development. Methods Mol. Biol. Clifton NJ 2022, 2486, 71–86. [Google Scholar]
- Androulakis, I.P.; Cucurull-Sanchez, L.; Kondic, A.; Mehta, K.; Pichardo, C.; Pryor, M.; Renardy, M. The dawn of a new era: Can machine learning and large language models reshape QSP modeling? J. Pharmacokinet. Pharmacodyn. 2025, 52, 36. [Google Scholar] [CrossRef]
- Goryanin, I.; Goryanin, I.; Demin, O. Revolutionizing drug discovery: Integrating artificial intelligence with quantitative systems pharmacology. Drug Discov. Today 2025, 30, 104448. [Google Scholar] [CrossRef]
- Juhi, A.; Pipil, N.; Santra, S.; Mondal, S.; Behera, J.K.; Mondal, H. The Capability of ChatGPT in Predicting and Explaining Common Drug-Drug Interactions. Cureus 2023, 15, e36272. [Google Scholar] [CrossRef] [PubMed]
- Fatoki, T.H.; Balogun, T.C.; Ojewuyi, A.E.; Omole, A.C.; Olukayode, O.V.; Adewumi, A.P.; Umesi, A.J.; Ijeoma, N.P.; Apooyin, A.E.; Chinedu, C.P.; et al. In silico molecular targets, docking, dynamics simulation and physiologically based pharmacokinetics modeling of oritavancin. BMC Pharmacol. Toxicol. 2024, 25, 79. [Google Scholar] [CrossRef] [PubMed]
- Slack, D.; Krishna, S.; Lakkaraju, H.; Singh, S. TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations. arXiv 2023, arXiv:2207.04154. [Google Scholar] [CrossRef]
- Wehling, L.; Singh, G.; Mulyadi, A.W.; Sreenath, R.H.; Hermjakob, H.; Nguyen, T.; Rückle, T.; Mosa, M.H.; Cordes, H.; Andreani, T.; et al. Talk2Biomodels: AI agent-based open-source LLM initiative for kinetic biological models. bioRxiv 2025. [Google Scholar] [CrossRef]
- Kannan, M.; Bridgewater, G.; Zhang, M.; Blinov, M.L. Leveraging public AI tools to explore systems biology resources in mathematical modeling. npj Syst. Biol. Appl. 2025, 11, 15. [Google Scholar] [CrossRef]
- Bonate, P.L.; Strougo, A.; Desai, A.; Roy, M.; Yassen, A.; van der Walt, J.S.; Kaibara, A.; Tannenbaum, S. Guidelines for the Quality Control of Population Pharmacokinetic–Pharmacodynamic Analyses: An Industry Perspective. AAPS J. 2012, 14, 749–758. [Google Scholar] [CrossRef]
- Dykstra, K.; Mehrotra, N.; Tornøe, C.W.; Kastrissios, H.; Patel, B.; Al-Huniti, N.; Jadhav, P.; Wang, Y.; Byon, W. Reporting guidelines for population pharmacokinetic analyses. J. Pharmacokinet. Pharmacodyn. 2015, 42, 301–314. [Google Scholar] [CrossRef]
- Wu, J.; Gan, W.; Chen, Z.; Wan, S.; Yu, P.S. Multimodal Large Language Models: A Survey. arXiv 2023, arXiv:2311.13165. [Google Scholar] [CrossRef]
- Patel, S.B.; Lam, K. ChatGPT: The future of discharge summaries? Lancet Digit. Health 2023, 5, e107–e108. [Google Scholar] [CrossRef] [PubMed]
- Busch, F.; Hoffmann, L.; Dos Santos, D.P.; Makowski, M.R.; Saba, L.; Prucker, P.; Hadamitzky, M.; Navab, N.; Kather, J.N.; Truhn, D.; et al. Large language models for structured reporting in radiology: Past, present, and future. Eur. Radiol. 2024, 35, 2589–2602. [Google Scholar] [CrossRef]
- Bosbach, W.A.; Senge, J.F.; Nemeth, B.; Omar, S.H.; Mitrakovic, M.; Beisbart, C.; Horváth, A.; Heverhagen, J.; Daneshvar, K. Ability of ChatGPT to generate competent radiology reports for distal radius fracture by use of RSNA template items and integrated AO classifier. Curr. Probl. Diagn. Radiol. 2024, 53, 102–110. [Google Scholar] [CrossRef]
- Bergomi, L.; Buonocore, T.M.; Antonazzo, P.; Alberghi, L.; Bellazzi, R.; Preda, L.; Bortolotto, C.; Parimbelli, E. Reshaping free-text radiology notes into structured reports with generative question answering transformers. Artif. Intell. Med. 2024, 154, 102924. [Google Scholar] [CrossRef]
- Sasaki, F.; Tatekawa, H.; Mitsuyama, Y.; Kageyama, K.; Jogo, A.; Yamamoto, A.; Miki, Y.; Ueda, D. Bridging Language and Stylistic Barriers in IR Standardized Reporting: Enhancing Translation and Structure Using ChatGPT-4. J. Vasc. Interv. Radiol. JVIR 2024, 35, 472–475.e1. [Google Scholar] [CrossRef]
- Adams, L.C.; Truhn, D.; Busch, F.; Kader, A.; Niehues, S.M.; Makowski, M.R.; Bressem, K.K. Leveraging GPT-4 for Post Hoc Transformation of Free-text Radiology Reports into Structured Reporting: A Multilingual Feasibility Study. Radiology 2023, 307, e230725. [Google Scholar] [CrossRef]
- Mallio, C.A.; Bernetti, C.; Sertorio, A.C.; Zobel, B.B. ChatGPT in radiology structured reporting: Analysis of ChatGPT-3.5 Turbo and GPT-4 in reducing word count and recalling findings. Quant. Imaging Med. Surg. 2024, 14, 2096–2102. [Google Scholar] [CrossRef]
- Jiang, H.; Xia, S.; Yang, Y.; Xu, J.; Hua, Q.; Mei, Z.; Hou, Y.; Wei, M.; Lai, L.; Li, N.; et al. Transforming free-text radiology reports into structured reports using ChatGPT: A study on thyroid ultrasonography. Eur. J. Radiol. 2024, 175, 111458. [Google Scholar] [CrossRef]
- Bonate, P.L.; Barrett, J.S.; Ait-Oudhia, S.; Brundage, R.; Corrigan, B.; Duffull, S.; Gastonguay, M.; Karlsson, M.O.; Kijima, S.; Krause, A.; et al. Training the next generation of pharmacometric modelers: A multisector perspective. J. Pharmacokinet. Pharmacodyn. 2024, 51, 5–31. [Google Scholar] [CrossRef]
- Michelet, R.; Aulin, L.B.; Borghardt, J.M.; Dalla Costa, T.; Denti, P.; Ibarra, M.; Ma, G.; Meibohm, B.; Pillai, G.C.; Schmidt, S.; et al. Barriers to global pharmacometrics: Educational challenges and opportunities across the globe. CPT Pharmacomet. Syst. Pharmacol. 2023, 12, 743. [Google Scholar] [CrossRef] [PubMed]
- Ali, D.; Fatemi, Y.; Boskabadi, E.; Nikfar, M.; Ugwuoke, J.; Ali, H. ChatGPT in teaching and learning: A systematic review. Educ. Sci. 2024, 14, 643. [Google Scholar] [CrossRef]
- Bernabei, M.; Colabianchi, S.; Falegnami, A.; Costantino, F. Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances. Comput. Educ. Artif. Intell. 2023, 5, 100172. [Google Scholar] [CrossRef]
- Raihan, N.; Siddiq, M.L.; Santos, J.C.; Zampieri, M. Large language models in computer science education: A systematic literature review. In Proceedings of the 56th ACM Technical Symposium on Computer Science Education, Pittsburgh, PA, USA, 6 February–1 March 2025; Volume 1, pp. 938–944. [Google Scholar]
- Meyer, A.; Ruthard, J.; Streichert, T. Dear ChatGPT–can you teach me how to program an app for laboratory medicine? J. Lab. Med. 2024, 48, 197–201. [Google Scholar] [CrossRef]
- Jeanselme, V.; Agarwal, N.; Wang, C. Review of language models for survival analysis. In Proceedings of the AAAI 2024 Spring Symposium on Clinical Foundation Models, Stanford, CA, USA, 25–27 March 2024. [Google Scholar]
- Holford, N. A Time to Event Tutorial for Pharmacometricians. CPT Pharmacomet. Syst. Pharmacol. 2013, 2, 43. [Google Scholar]
- Hu, D.; Liu, B.; Li, X.; Zhu, X.; Wu, N. Predicting Lung Cancer Patient Prognosis with Large Language Models. arXiv 2024, arXiv:2408.07971. [Google Scholar] [CrossRef]
- Jiang, L.Y.; Liu, X.C.; Nejatian, N.P.; Nasir-Moin, M.; Wang, D.; Abidin, A.; Eaton, K.; Riina, H.A.; Laufer, I.; Punjabi, P.; et al. Health system-scale language models are all-purpose prediction engines. Nature 2023, 619, 357–362. [Google Scholar] [CrossRef]
- Derbal, Y. Adaptive Cancer Therapy in the Age of Generative Artificial Intelligence. Cancer Control 2024, 31, 10732748241264704. [Google Scholar] [CrossRef]
- Derbal, Y. Adaptive Treatment of Metastatic Prostate Cancer Using Generative Artificial Intelligence. Clin. Med. Insights 2025, 19, 11795549241311408. [Google Scholar] [CrossRef]
- De Carlo, A.; Tosca, E.M.; Fantozzi, M.; Magni, P. Reinforcement Learning and PK-PD Models Integration to Personalize the Adaptive Dosing Protocol of Erdafitinib in Patients with Metastatic Urothelial Carcinoma. Clin. Pharmacol. Ther. 2024, 115, 825–838. [Google Scholar] [CrossRef]
- Liang, Y.; Wen, H.; Nie, Y.; Jiang, Y.; Jin, M.; Song, D.; Pan, S.; Wen, Q. Foundation Models for Time Series Analysis: A Tutorial and Survey. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 6555–6565. Available online: https://doi.org/10.1145/3637528.3671451 (accessed on 3 April 2025).
- Zhang, X.; Chowdhury, R.R.; Gupta, R.K.; Shang, J. Large Language Models for Time Series: A Survey. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, Republic of Korea, 3–9 August 2024; Available online: http://arxiv.org/abs/2402.01801 (accessed on 3 April 2025).
- Xue, H.; Salim, F.D. PromptCast: A New Prompt-Based Learning Paradigm for Time Series Forecasting. IEEE Trans. Knowl. Data Eng. 2024, 36, 6851–6864. [Google Scholar]
- Jin, M.; Wang, S.; Ma, L.; Chu, Z.; Zhang, J.Y.; Shi, X.; Chen, P.-Y.; Liang, Y.; Li, Y.-F.; Pan, S.; et al. TIME-LLM: Time series forecasting by reprogramming large language models. In Proceedings of the 12th International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
- Rasul, K.; Ashok, A.; Williams, A.R.; Ghonia, H.; Bhagwatkar, R.; Khorasani, A.; Bayazi, M.J.D.; Adamopoulos, G.; Riachi, R.; Hassen, N.; et al. Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting. arXiv 2024, arXiv:2310.08278. [Google Scholar] [CrossRef]
- Kraljevic, Z.; Bean, D.; Shek, A.; Bendayan, R.; Hemingway, H.; Yeung, J.A.; Deng, A.; Balston, A.; Ross, J.; Idowu, E.; et al. Foresight—A generative pretrained transformer for modelling of patient timelines using electronic health records: A retrospective modelling study. Lancet Digit. Health 2024, 6, e281–e290. [Google Scholar] [CrossRef] [PubMed]
- Kraljevic, Z.; Yeung, J.A.; Bean, D.; Teo, J.; Dobson, R.J. Large Language Models for Medical Forecasting—Foresight 2. arXiv 2024, arXiv:2412.10848. [Google Scholar]
- Tan, M.; Merrill, M.A.; Gupta, V.; Althoff, T.; Hartvigsen, T. Are Language Models Actually Useful for Time Series Forecasting? In Proceedings of the 38th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 10–15 December 2024. [Google Scholar]
- Labrak, Y.; Bazoge, A.; Morin, E.; Gourraud, P.-A.; Rouvier, M.; Dufour, R. BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand, 11–16 August 2024; Available online: http://arxiv.org/abs/2402.10373 (accessed on 4 April 2025).
- Makarov, N.; Bordukova, M.; Rodriguez-Esteban, R.; Schmich, F.; Menden, M.P. Large Language Models forecast Patient Health Trajectories enabling Digital Twins. medRxiv 2024. [Google Scholar] [CrossRef]
- Lammert, J.; Pfarr, N.; Kuligin, L.; Mathes, S.; Dreyer, T.; Modersohn, L.; Metzger, P.; Ferber, D.; Kather, J.N.; Truhn, D.; et al. Large Language Models-Enabled Digital Twins for Precision Medicine in Rare Gynecological Tumors. npj Digit. Med. 2025, 8, 420. Available online: http://arxiv.org/abs/2409.00544 (accessed on 4 April 2025). [CrossRef]
- Shahin, M.H.; Goswami, S.; Lobentanzer, S.; Corrigan, B.W. Agents for Change: Artificial Intelligent Workflows for Quantitative Clinical Pharmacology and Translational Sciences. Clin. Transl. Sci. 2025, 18, e70188. [Google Scholar] [CrossRef]
- Tamkin, A.; Brundage, M.; Clark, J.; Ganguli, D. Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models. arXiv 2021, arXiv:2102.02503. [Google Scholar] [CrossRef]
- Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.; Rocktäschel, T.; et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv 2021, arXiv:2005.11401. [Google Scholar]
- Patidar, M.; Sawhney, R.; Singh, A.; Chatterjee, B.; Mausam; Bhattacharya, I. Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 11–16 August 2024; Available online: http://arxiv.org/abs/2311.08894 (accessed on 25 July 2025).
- Rathore, V.; Deb, A.; Chandresh, A.; Singla, P. Mausam SSP: Self-Supervised Prompting for Cross-Lingual Transfer to Low-Resource Languages using Large Language Models. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, FL, USA, 12–16 November 2024. [Google Scholar]
- Suekuer, E. PMx-AI Bot: Changing the way of traditional Pharmacometrics work with AI Bots. In Proceedings of the PAGE 2024: Methodology—New Tools, Rome, Italy, 25–28 June 2024; Available online: https://www.page-meeting.org/default.asp?abstract=11257 (accessed on 15 April 2025).
Reference | Explored Pharmacometrics Task | Tested LLMs | Key Findings
---|---|---|---
Shin & Ramanathan (2024) [11] | | ChatGPT-4 |
Cloesmeijer et al. (2023) [10] | | ChatGPT-3.5 |
Herrero et al. (2024) [13] | | ChatGPT-3.5; Gemini v4.0; Microsoft Copilot 4.0 |
Shin et al. (2024) [12] | | ChatGPT-3.5; Gemini Ultra 1.0 |
Cha et al. (2025) [59] | | ChatGPT-4o; Gemini 1.5 Pro; Claude 3.5; Llama 3.2 |
Zheng et al. (2025) [60] | | GPT-4.1-mini; GPT-4.1-nano; GPT-4.1; GPT-4o-mini; GPT-4o; o1; o3-mini |
Holt et al. (2024) [14] | | GPT-4 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).