Open Access Article

Automated Source Code Generation and Auto-Completion Using Deep Learning: Comparing and Discussing Current Language Model-Related Approaches

by Juan Cruz-Benito 1,*, Sanjay Vishwakarma 2,†, Francisco Martin-Fernandez 1 and Ismael Faro 1
1 IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
2 Electrical and Computer Engineering, Carnegie Mellon University, Mountain View, CA 94035, USA
* Author to whom correspondence should be addressed.
† Intern at IBM Quantum at the time of writing this paper.
AI 2021, 2(1), 1-16; https://doi.org/10.3390/ai2010001
Received: 3 November 2020 / Revised: 12 January 2021 / Accepted: 14 January 2021 / Published: 16 January 2021
(This article belongs to the Section AI in Autonomous Systems)
In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that their models generate text that can pass as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, programming languages are one of the most notable targets for this type of modeling. For years, the machine learning community has researched this software engineering area, pursuing goals like auto-completing, generating, fixing, or evaluating code written by humans. Given the increasing popularity of the deep learning-enabled language model approach, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, namely Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and Transformers, while using transfer learning and different forms of tokenization, to see how they behave when building language models over a Python dataset for code-generation and fill-mask tasks. Based on the results, we discuss each approach's strengths and weaknesses and the gaps we found in evaluating the language models or applying them in a real programming context.
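To make the fill-mask task concrete: the model is asked to predict a masked-out token in a snippet of source code. The sketch below is illustrative only, not the authors' pipeline (they train their own AWD-LSTM, QRNN, and Transformer models on a Python corpus); it assumes the Hugging Face transformers library with an off-the-shelf masked language model ("roberta-base") as a stand-in.

# Illustrative fill-mask query over a line of Python code.
# Assumes: pip install transformers torch
from transformers import pipeline

# Any masked language model could be substituted here; "roberta-base"
# is a stand-in, not one of the models trained in the paper.
fill_mask = pipeline("fill-mask", model="roberta-base")

# Ask the model to predict the masked token (roberta uses "<mask>").
for prediction in fill_mask("def add(a, b): return a <mask> b"):
    print(prediction["token_str"], prediction["score"])

A code-generation query works analogously, except the model predicts the next tokens of an unfinished snippet instead of a single masked token.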
Keywords: deep learning; language model; source code; software engineering; natural language processing
MDPI and ACS Style

Cruz-Benito, J.; Vishwakarma, S.; Martin-Fernandez, F.; Faro, I. Automated Source Code Generation and Auto-Completion Using Deep Learning: Comparing and Discussing Current Language Model-Related Approaches. AI 2021, 2, 1-16. https://doi.org/10.3390/ai2010001

