A Comparative Review of Large Language Models in Engineering with Emphasis on Chemical Engineering Applications
Abstract
1. Introduction
2. Historical Development of Large Language Models (LLMs)
2.1. Early Development of AI and NLP in Engineering
2.2. Rise of Neural Networks and the Transformer Breakthrough
2.3. Emergence of LLMs and Domain-Specific Adaptation
3. Applications of AI and LLMs
3.1. LLMs for Complex Problem Solving and Ideation
3.2. Educational Applications of Large Language Models
4. General Engineering Applications for Large Language Models
4.1. LLMs in Software Engineering
4.2. LLMs in Mechanical Engineering
4.3. LLMs in Civil Engineering
4.4. LLMs in Electrical Engineering
4.5. Integration of LLMs with Simulation and Programming Tools
5. Applications of LLMs in Chemical Engineering
5.1. LLMs in Chemical Engineering Education
5.2. LLMs in Process Simulation and Modelling
5.2.1. Surrogate Modelling
5.2.2. Code Generation for Models
5.2.3. Digital Twins and Data Integration
5.2.4. Soft Sensors and Advanced Process Control
5.2.5. LLM-Integrated Design and Simulation Systems
5.3. LLMs in Reaction Optimization and Autonomous Experimentation
5.3.1. Autonomous Labs and Agents
5.3.2. Prompt-Driven Reaction Optimization
5.3.3. LLM-Powered Synthesis Planning
5.4. LLMs in Molecular Design and Discovery
5.4.1. Chemical Language Models
5.4.2. Accelerating Materials Discovery
5.4.3. Property Prediction and Knowledge Extraction
5.5. LLMs in Process Design and Operations
5.5.1. LLMs in Distillation Column Design
5.5.2. LLMs in Experiment Design for Reactors
5.5.3. Optimization of Reactor Conditions
5.5.4. Operational Support
5.6. Challenges and Future Considerations for LLMs in Chemical Engineering
5.7. Limitations of LLMs in Safety-Critical Engineering Contexts
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Kong, Z.Y.; Adi, V.S.K.; Segovia-Hernández, J.G.; Sunarso, J. Complementary role of large language models in educating undergraduate design of distillation column: Methodology development. Digit. Chem. Eng. 2023, 9, 100126. [Google Scholar] [CrossRef]
- Cheng, J. Applications of Large Language Models in Pathology. Bioengineering 2024, 11, 342. [Google Scholar] [CrossRef]
- Peng, Y.; Yang, X.; Li, D.; Ma, Z.; Liu, Z.; Bai, X.; Mao, Z. Predicting flow status of a flexible rectifier using cognitive computing. Expert Syst. Appl. 2025, 264, 125878. [Google Scholar] [CrossRef]
- Hadi, M.U.; Tashi, Q.A.; Qureshi, R.; Shah, A.; Muneer, A.; Irfan, M.; Zafar, A.; Shaikh, M.B.; Akhtar, N.; Hassan, S.Z.; et al. Large Language Models: A Comprehensive Survey of Its Applications, Challenges, Limitations, and Future Prospects. TechRxiv 2025. [Google Scholar] [CrossRef]
- Biswas, R.; De, S. A Comparative Study on Improving Word Embeddings Beyond Word2Vec and GloVe. In Proceedings of the 2022 Seventh International Conference on Parallel, Distributed and Grid Computing (PDGC), Solan, Himachal Pradesh, India, 25–27 November 2022; IEEE: New York, NY, USA, 2022; pp. 113–118. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Available online: https://proceedings.neurips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html (accessed on 23 April 2025).
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018. [Google Scholar] [CrossRef]
- Yenduri, G.; Ramalingam, M.; Selvi, G.C.; Supriya, Y.; Srivastava, G.; Maddikunta, P.K.R.; Raj, G.D.; Jhaveri, R.H.; Prabadevi, B.; Wang, W.; et al. GPT (Generative Pre-Trained Transformer)—A Comprehensive Review on Enabling Technologies, Potential Applications, Emerging Challenges, and Future Directions. IEEE Access 2024, 12, 54608–54649. [Google Scholar] [CrossRef]
- Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. arXiv 2020. [Google Scholar] [CrossRef]
- Voicebot.ai. Timeline History of Large Language Models. Available online: https://voicebot.ai/large-language-models-history-timeline/ (accessed on 23 April 2025).
- OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2023. [Google Scholar] [CrossRef]
- Gao, K.; He, S.; He, Z.; Lin, J.; Pei, Q.; Shao, J.; Zhang, W. Examining User-Friendly and Open-Sourced Large GPT Models: A Survey on Language, Multimodal, and Scientific GPT Models. arXiv 2023. [Google Scholar] [CrossRef]
- Zhang, D.; Liu, W.; Tan, Q.; Chen, J.; Yan, H.; Yan, Y.; Li, J.; Huang, W.; Yue, X.; Ouyang, W.; et al. ChemLLM: A Chemical Large Language Model. arXiv 2024. [Google Scholar] [CrossRef]
- Muggleton, S. Alan Turing and the development of Artificial Intelligence. Eur. J. Artif. Intell. 2014, 27, 3–10. [Google Scholar] [CrossRef]
- Wang, G.; Li, X.; Xie, S. Bilateral Turing Test: Assessing machine consciousness simulations. Cogn. Syst. Res. 2024, 88, 101299. [Google Scholar] [CrossRef]
- Gugerty, L. Newell and Simon’s Logic Theorist: Historical Background and Impact on Cognitive Modeling. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 880–884. [Google Scholar] [CrossRef]
- Pollitzer, E.; Jenkins, J. Expert knowledge, expert systems and commercial interests. Omega 1985, 13, 407–418. [Google Scholar] [CrossRef]
- Rich, S.H.; Venkatasubramanian, V. Model-based reasoning in diagnostic expert systems for chemical process plants. Comput. Chem. Eng. 1987, 11, 111–122. [Google Scholar] [CrossRef]
- Venkatasubramanian, V. The promise of artificial intelligence in chemical engineering: Is it here, finally? AIChE J. 2019, 65, 466–478. [Google Scholar] [CrossRef]
- Sriram, D.; Stephanopoulos, G.; Logcher, R.; Gossard, D.; Groleau, N.; Serrano, D.; Navinchandra, D. Knowledge-Based System Applications in Engineering Design: Research at MIT. AI Mag. 1989, 10, 79. [Google Scholar] [CrossRef]
- Sriram, R.D. Artificial Intelligence in Engineering: Personal Reflections. NIST Preprint, 2006. Available online: https://www.nist.gov/publications/artificial-intelligence-engineering-personal-reflections (accessed on 23 April 2025). [Google Scholar]
- Shrager, J. ELIZA Reinterpreted: The world’s first chatbot was not intended as a chatbot at all. arXiv 2024. [Google Scholar] [CrossRef]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- Sánchez Fernández, I.; Peters, J.M. Machine learning and deep learning in medicine and neuroimaging. Ann. Child Neurol. Soc. 2023, 1, 102–122. [Google Scholar] [CrossRef]
- Hinton, G. Deep belief networks. Scholarpedia 2009, 4, 5947. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Liu, Y.; Zhao, T.; Ju, W.; Shi, S. Materials discovery and design using machine learning. J. Mater. 2017, 3, 159–177. [Google Scholar] [CrossRef]
- Benitez, J.M.; Castro, J.L.; Requena, I. Are artificial neural networks black boxes? IEEE Trans. Neural Netw. 1997, 8, 1156–1164. [Google Scholar] [CrossRef] [PubMed]
- Rosoł, M.; Gąsior, J.S.; Łaba, J.; Korzeniewski, K.; Młyńczak, M. Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination. Sci. Rep. 2023, 13, 20512. [Google Scholar] [CrossRef]
- Wu, S.; Otake, Y.; Mizutani, D.; Liu, C.; Asano, K.; Sato, N.; Saito, T.; Baba, H.; Fukunaga, Y.; Higo, Y.; et al. Future-proofing geotechnics workflows: Accelerating problem-solving with large language models. Georisk Assess. Manag. Risk Eng. Syst. Geohazards 2024, 19, 307–324. [Google Scholar] [CrossRef]
- Lee, J.; Yoon, W.; Kim, S.; Kim, D.; Kim, S.; So, C.H.; Kang, J. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. arXiv 2019. [Google Scholar] [CrossRef]
- Beltagy, I.; Lo, K.; Cohan, A. SciBERT: A Pretrained Language Model for Scientific Text. arXiv 2019. [Google Scholar] [CrossRef]
- Cleti, M.; Jano, P. Hallucinations in LLMs: Types, Causes, and Approaches for Enhanced Reliability. 2024. Available online: https://www.researchgate.net/profile/Meade-Cleti/publication/385085962_Hallucinations_in_LLMs_Types_Causes_and_Approaches_for_Enhanced_Reliability/links/6715051009ba2d0c760eabb8/Hallucinations-in-LLMs-Types-Causes-and-Approaches-for-Enhanced-Reliability.pdf (accessed on 3 July 2025). [CrossRef]
- Pan, S.; Luo, L.; Wang, Y.; Chen, C.; Wang, J.; Wu, X. Unifying Large Language Models and Knowledge Graphs: A Roadmap. arXiv 2023. [Google Scholar] [CrossRef]
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2021. [Google Scholar] [CrossRef]
- Wu, S.; Irsoy, O.; Lu, S.; Dabravolski, V.; Dredze, M.; Gehrmann, S.; Kambadur, P.; Rosenberg, D.; Mann, G. BloombergGPT: A Large Language Model for Finance. arXiv 2023. [Google Scholar] [CrossRef]
- Mohamadi, S.; Mujtaba, G.; Le, N.; Doretto, G.; Adjeroh, D.A. ChatGPT in the Age of Generative AI and Large Language Models: A Concise Survey. arXiv 2023. [Google Scholar] [CrossRef]
- Tsai, M.-L.; Ong, C.W.; Chen, C.-L. Exploring the use of large language models (LLMs) in chemical engineering education: Building core course problem models with Chat-GPT. Educ. Chem. Eng. 2023, 44, 71–95. [Google Scholar] [CrossRef]
- Ramos, M.C.; Collison, C.J.; White, A.D. A review of large language models and autonomous agents in chemistry. Chem. Sci. 2025, 16, 2514–2572. [Google Scholar] [CrossRef]
- Geetha, S.D.; Khan, A.; Khan, A.; Kannadath, B.S.; Vitkovski, T. Evaluation of ChatGPT pathology knowledge using board-style questions. Am. J. Clin. Pathol. 2024, 161, 393–398. [Google Scholar] [CrossRef] [PubMed]
- Poldrack, R.A.; Lu, T.; Beguš, G. AI-assisted coding: Experiments with GPT-4. arXiv 2023. [Google Scholar] [CrossRef]
- Choi, H.S.; Song, J.Y.; Shin, K.H.; Chang, J.H.; Jang, B.-S. Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer. Radiat. Oncol. J. 2023, 41, 209–216. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.; Zhang, Y.; Zhang, Q.; Ren, Y.; Qiu, T.; Ma, J.; Sun, Q. Extracting comprehensive clinical information for breast cancer using deep learning methods. Int. J. Med. Inform. 2019, 132, 103985. [Google Scholar] [CrossRef]
- Qin, Z.; Wang, C.; Qin, H.; Jia, W. Brainstorming Brings Power to Large Language Models of Knowledge Reasoning. arXiv 2024. [Google Scholar] [CrossRef]
- Reeping, D.; Shah, A. Work-in-Progress: Students’ Prompting Strategies When Solving an Engineering Design Task. In Proceedings of the 2024 IEEE Frontiers in Education Conference (FIE), Washington, DC, USA, 13–16 October 2024; pp. 1–5. [Google Scholar]
- Pierson, K.C.; Ha, M.J. Usage of ChatGPT for Engineering Design and Analysis Tool Development. In Proceedings of the AIAA SCITECH 2024 Forum, Orlando, FL, USA, 8–12 January 2024. [Google Scholar] [CrossRef]
- Ye, A.; Maiti, A.; Schmidt, M.; Pedersen, S.J. A Hybrid Semi-Automated Workflow for Systematic and Literature Review Processes with Large Language Model Analysis. Future Internet 2024, 16, 167. [Google Scholar] [CrossRef]
- Wang, S.; Xu, T.; Li, H.; Zhang, C.; Liang, J.; Tang, J.; Yu, P.S.; Wen, Q. Large Language Models for Education: A Survey and Outlook. arXiv 2024. [Google Scholar] [CrossRef]
- Guizani, S.; Mazhar, T.; Shahzad, T.; Ahmad, W.; Bibi, A.; Hamam, H. A systematic literature review to implement large language model in higher education: Issues and solutions. Discov. Educ. 2025, 4, 35. [Google Scholar] [CrossRef]
- Bernabei, M.; Colabianchi, S.; Falegnami, A.; Costantino, F. Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances. Comput. Educ. Artif. Intell. 2023, 5, 100172. [Google Scholar] [CrossRef]
- Hou, X.; Zhao, Y.; Liu, Y.; Yang, Z.; Wang, K.; Li, L.; Luo, X.; Lo, D.; Grundy, J.; Wang, H. Large Language Models for Software Engineering: A Systematic Literature Review. arXiv 2023. [Google Scholar] [CrossRef]
- Feng, Y.; Zhao, Y.; Zheng, H.; Li, Z.; Tan, J. Data-driven product design toward intelligent manufacturing: A review. Int. J. Adv. Robot. Syst. 2020, 17, 172988142091125. [Google Scholar] [CrossRef]
- Ni, B.; Buehler, M.J. MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge. Extrem. Mech. Lett. 2024, 67, 102131. [Google Scholar] [CrossRef]
- Bermudez-Viramontes, L. Leveraging Large Language Models for the Development of Educational Modules in Mechanical Engineering. 2024. Available online: https://escholarship.org/uc/item/7d9871hb (accessed on 3 July 2025).
- Xu, K.; Zhang, K.; Li, J.; Huang, W.; Wang, Y. CRP-RAG: A Retrieval-Augmented Generation Framework for Supporting Complex Logical Reasoning and Knowledge Planning. Electronics 2024, 14, 47. [Google Scholar] [CrossRef]
- Taboada, I.; Daneshpajouh, A.; Toledo, N.; De Vass, T. Artificial Intelligence Enabled Project Management: A Systematic Literature Review. Appl. Sci. 2023, 13, 5014. [Google Scholar] [CrossRef]
- Majumder, S.; Dong, L.; Doudi, F.; Cai, Y.; Tian, C.; Kalathil, D.; Ding, K.; Thatte, A.A.; Li, N.; Xie, L. Exploring the capabilities and limitations of large language models in the electric energy sector. Joule 2024, 8, 1544–1549. [Google Scholar] [CrossRef]
- Zhou, M.; Li, F.; Zhang, F.; Zheng, J.; Ma, Q. Meta In-Context Learning: Harnessing Large Language Models for Electrical Data Classification. Energies 2023, 16, 6679. [Google Scholar] [CrossRef]
- Liu, Z.; Chai, Y.; Li, J. Toward Automated Simulation Research Workflow through LLM Prompt Engineering Design. J. Chem. Inf. Model. 2025, 65, 114–124. [Google Scholar] [CrossRef]
- Du, Y.; Chen, S.; Zan, W.; Li, P.; Wang, M.; Song, D.; Li, B.; Hu, Y.; Wang, B. BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement. arXiv 2024. [Google Scholar] [CrossRef]
- Yao, Y.; Duan, J.; Xu, K.; Cai, Y.; Sun, Z.; Zhang, Y. A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly. High-Confid. Comput. 2024, 4, 100211. [Google Scholar] [CrossRef]
- Decardi-Nelson, B.; Alshehri, A.S.; Ajagekar, A.; You, F. Generative AI and process systems engineering: The next frontier. Comput. Chem. Eng. 2024, 187, 108723. [Google Scholar] [CrossRef]
- Boiko, D.A.; MacKnight, R.; Kline, B.; Gomes, G. Autonomous chemical research with large language models. Nature 2023, 624, 570–578. [Google Scholar] [CrossRef]
- Jablonka, K.M.; Schwaller, P.; Ortega-Guerrero, A.; Smit, B. Leveraging large language models for predictive chemistry. Nat. Mach. Intell. 2024, 6, 161–169. [Google Scholar] [CrossRef]
- Bran, A.M.; Cox, S.; Schilter, O.; Baldassari, C.; White, A.D.; Schwaller, P. Augmenting large language models with chemistry tools. Nat. Mach. Intell. 2024, 6, 525–535. [Google Scholar] [CrossRef]
- Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.L.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training language models to follow instructions with human feedback. arXiv 2022. [Google Scholar] [CrossRef]
- Jiao, W.; Wang, W.; Huang, J.; Wang, X.; Shi, S.; Tu, Z. Is ChatGPT a Good Translator? Yes with GPT-4 as the Engine. arXiv 2023. [Google Scholar] [CrossRef]
- Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J.; et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digit. Health 2023, 2, e0000198. [Google Scholar] [CrossRef]
- Baidoo-Anu, D.; Owusu Ansah, L. Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
- Huang, S.; Dong, L.; Wang, W.; Hao, Y.; Singhal, S.; Ma, S.; Lv, T.; Cui, L.; Mohammed, O.K.; Patra, B.; et al. Language Is Not All You Need: Aligning Perception with Language Models. arXiv 2023. [Google Scholar] [CrossRef]
- White, J.; Fu, Q.; Hays, S.; Sandborn, M.; Olea, C.; Gilbert, H.; Elnashar, A.; Spencer-Smith, J.; Schmidt, D.C. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv 2023. [Google Scholar] [CrossRef]
- Weidinger, L.; Uesato, J.; Rauh, M.; Griffin, C.; Huang, P.-S.; Mellor, J.; Glaese, A.; Cheng, M.; Balle, B.; Kasirzadeh, A.; et al. Taxonomy of Risks posed by Language Models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 214–229. [Google Scholar] [CrossRef]
- Evans, O.; Cotton-Barratt, O.; Finnveden, L.; Bales, A.; Balwit, A.; Wills, P.; Righetti, L.; Saunders, W. Truthful AI: Developing and governing AI that does not lie. arXiv 2021. [Google Scholar] [CrossRef]
- Ye, L.; Zhang, N.; Li, G.; Gu, D.; Lu, J.; Lou, Y. Intelligent Optimization Design of Distillation Columns Using Surrogate Models Based on GA-BP. Processes 2023, 11, 2386. [Google Scholar] [CrossRef]
- Van Herck, J.; Gil, M.V.; Jablonka, K.M.; Abrudan, A.; Anker, A.S.; Asgari, M.; Blaiszik, B.; Buffo, A.; Choudhury, L.; Corminboeuf, C.; et al. Assessment of fine-tuned large language models for real-world chemistry and material science applications. Chem. Sci. 2025, 16, 670–684. [Google Scholar] [CrossRef] [PubMed]
- Rupprecht, S.; Hounat, Y.; Kumar, M.; Lastrucci, G.; Schweidtmann, A.M. Text2Model: Generating dynamic chemical reactor models using large language models (LLMs). arXiv 2025. [Google Scholar] [CrossRef]
- SymphonyAI. Industrial LLM. Available online: https://www.symphonyai.com/industrial/industrial-llm/ (accessed on 24 April 2025).
- Devarapalli, V.N. How LLM-Based Virtual Assistants Can Benefit the Digitalization of the Process Industry Plant Operations. Int. Res. J. Sci. Eng. Technol. 2025, 12, 1–8. Available online: https://www.researchgate.net/publication/388284659_How_LLM-Based_Virtual_Assistants_Can_Benefit_the_Digitalization_of_the_Process_Industry_Plant_Operations (accessed on 24 April 2025).
- Kadlec, P.; Gabrys, B.; Strandt, S. Data-driven Soft Sensors in the process industry. Comput. Chem. Eng. 2009, 33, 795–814. [Google Scholar] [CrossRef]
- Yin, S.; Li, X.; Gao, H.; Kaynak, O. Data-Based Techniques Focused on Modern Industry: An Overview. IEEE Trans. Ind. Electron. 2015, 62, 657–667. [Google Scholar] [CrossRef]
- Sun, Y.; Li, X.; Liu, C.; Deng, X.; Zhang, W.; Wang, J.; Zhang, Z.; Wen, T.; Song, T.; Ju, D. Development of an intelligent design and simulation aid system for heat treatment processes based on LLM. Mater. Des. 2024, 248, 113506. [Google Scholar] [CrossRef]
- Ruan, Y.; Lu, C.; Xu, N.; He, Y.; Chen, Y.; Zhang, J.; Xuan, J.; Pan, J.; Fang, Q.; Gao, H.; et al. An automatic end-to-end chemical synthesis development platform powered by large language models. Nat. Commun. 2024, 15, 10160. [Google Scholar] [CrossRef] [PubMed]
- Luo, F.; Zhang, J.; Wang, Q.; Yang, C. Leveraging Prompt Engineering in Large Language Models for Accelerating Chemical Research. ACS Cent. Sci. 2025, 11, 511–519. [Google Scholar] [CrossRef] [PubMed]
- Savage, N. Drug discovery companies are customizing ChatGPT: Here’s how. Nat. Biotechnol. 2023, 41, 585–586. [Google Scholar] [CrossRef]
- Xuan, J.; Daniel, T. The Future of Chemical Engineering in the Era of Generative AI. 2023. Available online: https://www.thechemicalengineer.com/features/the-future-of-chemical-engineering-in-the-era-of-generative-ai/ (accessed on 24 April 2025).
- Mswahili, M.E.; Jeong, Y.-S. Transformer-based models for chemical SMILES representation: A comprehensive literature review. Heliyon 2024, 10, e39038. [Google Scholar] [CrossRef]
- Noutahi, E.; Gabellini, C.; Craig, M.; Lim, J.S.C.; Tossou, P. Gotta be SAFE: A New Framework for Molecular Design. arXiv 2023. [Google Scholar] [CrossRef]
- Kuenneth, C.; Ramprasad, R. polyBERT: A chemical language model to enable fully machine-driven ultrafast polymer informatics. Nat. Commun. 2023, 14, 4099. [Google Scholar] [CrossRef]
- Ma, Q.; Zhou, Y.; Li, J. Automated Retrosynthesis Planning of Macromolecules Using Large Language Models and Knowledge Graphs. Macromol. Rapid Commun. 2025, 2500065. [Google Scholar] [CrossRef]
- Schwaller, P.; Laino, T.; Gaudin, T.; Bolgar, P.; Hunter, C.A.; Bekas, C.; Lee, A.A. Molecular Transformer: A Model for Uncertainty-Calibrated Chemical Reaction Prediction. ACS Cent. Sci. 2019, 5, 1572–1583. [Google Scholar] [CrossRef]
- Pan, Y.; Xiao, Q.; Zhao, F.; Li, Z.; Liu, J.; Ullah, S.; Lim, K.H.; Huang, T.; Yu, Z.; Li, C.; et al. Chat-microreactor: A large-language-model-based assistant for designing continuous flow systems. Chem. Eng. Sci. 2025, 311, 121567. [Google Scholar] [CrossRef]
- Yoshikawa, N.; Skreta, M.; Darvish, K.; Arellano-Rubach, S.; Ji, Z.; Bjørn Kristensen, L.; Li, A.Z.; Zhao, Y.; Xu, H.; Kuramshin, A.; et al. Large language models for chemistry robotics. Auton. Robot. 2023, 47, 1057–1086. [Google Scholar] [CrossRef]
- Hirtreiter, E.; Schulze Balhorn, L.; Schweidtmann, A.M. Toward automatic generation of control structures for process flow diagrams with large language models. AIChE J. 2024, 70, e18259. [Google Scholar] [CrossRef]
- Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; Ishii, E.; Bang, Y.J.; Madotto, A.; Fung, P. Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 2023, 55, 1–38. [Google Scholar] [CrossRef]
- Rai, A. Explainable AI: From black box to glass box. J. Acad. Mark. Sci. 2020, 48, 137–141. [Google Scholar] [CrossRef]
- Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 2022, 54, 1–35. [Google Scholar] [CrossRef]
Table 1. Milestones in the development of AI and large language models.

| Milestone | Year | Key Development | Impact | References |
|---|---|---|---|---|
| Turing Test | 1950 | Turing proposed the imitation game as a test of machine intelligence. | Shaped AI’s theoretical foundation. | [14] |
| Expert Systems | 1980s | Rule-based systems for engineering decisions. | Automated tasks but lacked adaptability. | [19] |
| Neural Resurgence | 1986 | Backpropagation made multi-layer neural networks trainable at scale. | Enabled engineering applications. | [23] |
| Early Statistical NLP | 1990s | n-gram and hidden Markov models predicted words statistically. | Limited by long-range dependency issues. | [4] |
| Deep Learning | 2006–2012 | Deep networks excelled at complex tasks. | Set the stage for scalable AI in engineering. | [25] |
| Neural NLP | 2010s | Word2Vec, RNNs, and LSTMs improved semantic and sequential processing. | Enhanced context handling but computationally heavy. | [5] |
| Transformer | 2017 | Self-attention enabled parallel sequence processing (Vaswani et al.; see the sketch after this table). | Boosted efficiency and scalability; basis for LLMs. | [6] |
| BERT | 2018 | Bidirectional transformer set NLP benchmarks (Google). | Improved context understanding. | [7] |
| GPT Series | 2018–2020 | GPT-1 to GPT-3 (175B parameters) enabled few-shot learning (OpenAI). | Generated coherent text across versatile tasks. | [9] |
| Scaled Models | 2022 | PaLM (540B) and OPT (175B) pushed size limits. | Enhanced performance across domains. | [12] |
| GPT-4 | 2023 | Multimodal model of undisclosed scale (OpenAI). | Advanced reasoning, code, and multimodal tasks. | [11] |
| Domain-Specific LLMs | 2024 | ChemLLM fine-tuned for chemistry. | Excelled in specialized engineering tasks. | [39] |
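The Transformer row above credits scaled dot-product self-attention [6], which computes softmax(Q Kᵀ / √d_k) V over the whole sequence at once. The following minimal NumPy sketch is illustrative only — a single head, with random matrices standing in for learned projections and no masking or multi-head machinery:

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # context-weighted value mix

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                         # 4 tokens, model dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)                                    # (4, 8): one vector per token
```

Because every token attends to all others in a single matrix product, the sequence is processed in parallel rather than step by step — the property the table credits for the Transformer’s efficiency advantage over the recurrent models in the preceding row.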
Table 2. Representative applications of LLMs across engineering disciplines.

| Engineering Field | Applications | References |
|---|---|---|
| Software | Code generation, bug fixing, documentation, automated testing, code reviews | [51] |
| Mechanical | Solving mechanics problems, design automation, digital twins, educational content generation | [52,53,54] |
| Civil | Data management, contract analysis, knowledge extraction, design automation | [55,56] |
| Electrical | Assisting power engineers, risk recognition, load forecasting, data classification | [57,58] |
Table 3. Representative studies applying LLMs to chemical engineering design, simulation, and operations.

| Focus | Methodology | Key Findings | References |
|---|---|---|---|
| ChatGPT-3.5 in distillation column design | Six-step process: case study selection, ChatGPT-3.5 parameter suggestions, iterative refinement, calculation analysis, validation with Aspen Plus | LLMs suggest initial parameters (e.g., tray count, reflux ratio) but require validation owing to computational instability (see the shortcut-check sketch after this table) | [1] |
| Coscientist for autonomous chemical research | GPT-4-based system for experiment design and execution | LLMs optimize reactions and perform liquid handling, applicable to reactor design | [63] |
| LLM-RDF for end-to-end synthesis | Six LLM agents covering synthesis development and reactor scale-up | LLMs automate reactor setup and scale-up strategies | [82] |
| Chat-microreactor for flow reactor design | LLM-based literature extraction, neural-network classifiers, vectorized database | Efficient data extraction (16 s per paragraph); F1 score > 70% for flow-pattern classification | [91] |
| AI for P&ID generation | Graph-based model learning from existing P&IDs | AI generates P&IDs for separation systems | [93] |
| LLMs for robotic lab systems | Translating natural-language instructions into executable plans | LLMs generate code for reactor operation | [92] |
| LLM-based virtual assistants | Contextual reasoning over operational logs | LLMs enhance decision-making in plant operations | [78] |
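The first row’s take-the-suggestion-then-verify workflow can be illustrated with a lightweight shortcut check. The Python sketch below is hypothetical throughout — the “LLM-suggested” reflux ratio and stage count, the binary system, and its relative volatility are placeholder values — and uses the classical Fenske–Underwood–Gilliland (Eduljee) shortcut method as a stand-in for the rigorous Aspen Plus validation performed in the cited study [1]:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical LLM suggestion to be checked (placeholder numbers, not from [1])
llm_reflux_ratio = 2.5
llm_stage_count = 20

# Illustrative binary system; alpha = relative volatility of light vs. heavy key
alpha = 2.4
xD, xB, zF, q = 0.95, 0.05, 0.50, 1.0  # light-key fractions; saturated-liquid feed

# Fenske equation: minimum stages at total reflux
N_min = np.log((xD / (1 - xD)) * ((1 - xB) / xB)) / np.log(alpha)

def feed_eq(theta):
    """Underwood feed equation; its root lies between 1 and alpha for a binary."""
    return alpha * zF / (alpha - theta) + (1 - zF) / (1 - theta) - (1 - q)

theta = brentq(feed_eq, 1 + 1e-6, alpha - 1e-6)
R_min = alpha * xD / (alpha - theta) + (1 - xD) / (1 - theta) - 1

# Gilliland correlation (Eduljee fit): stages required at the suggested reflux
X = (llm_reflux_ratio - R_min) / (llm_reflux_ratio + 1)
Y = 0.75 * (1 - X**0.5668)
N = (N_min + Y) / (1 - Y)

print(f"N_min = {N_min:.1f}, R_min = {R_min:.2f}, shortcut N = {N:.1f} "
      f"(LLM suggested {llm_stage_count} stages at R = {llm_reflux_ratio})")
```

A large gap between the shortcut estimate and the model’s suggestion is exactly the kind of flag this validation step is meant to raise before a design is carried forward to rigorous simulation.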