Article

Generative Artificial Intelligence: Analyzing Its Future Applications in Additive Manufacturing

1 Chair of Microfluidics, University of Rostock, 18059 Rostock, Germany
2 Department Life, Light & Matter, University of Rostock, 18059 Rostock, Germany
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2024, 8(7), 74; https://doi.org/10.3390/bdcc8070074
Submission received: 7 June 2024 / Revised: 2 July 2024 / Accepted: 3 July 2024 / Published: 6 July 2024

Abstract

New developments in the field of artificial intelligence (AI) are increasingly finding their way into industrial areas such as additive manufacturing (AM). Generative AI (GAI) applications in particular offer interesting possibilities here, for example, to generate texts, images or computer code with the help of algorithms and to integrate these as useful supports in various AM processes. This paper examines the opportunities that GAI offers specifically for additive manufacturing. There are currently relatively few publications that deal with the topic of GAI in AM, and much of the information has only been published in preprints. There, the focus has been on algorithms for Natural Language Processing (NLP), Large Language Models (LLMs) and generative adversarial networks (GANs). This summarised presentation of the state of the art of GAI in AM is new, and the link to specific use cases makes this the first comprehensive case study on GAI in AM processes. Building on this, three specific use cases are developed in which generative AI tools are used to optimise AM processes. Finally, a Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis is carried out on the general possibilities of GAI, which forms the basis for an in-depth discussion on the sensible use of GAI tools in AM. The key findings of this work are that GAI can be integrated into AM processes as a useful support, making these processes faster and more creative, as well as making the process information digitally recordable and usable. This current and future potential, as well as the technical implementation of GAI in AM, is also presented and explained visually. It is also shown where the use of generative AI tools can be useful and where current or future potential risks may arise.

1. Introduction

Additive manufacturing (AM), also known as 3D printing, is a relatively new manufacturing technology that enables the layer-by-layer construction of digital three-dimensional (3D) models [1,2]. These 3D models are usually created with the help of CAD programs (Computer-Aided Design) or 3D scans of real objects [3]. A lot of time, creativity and experience are required to achieve the desired results, especially when designing printed objects [4,5]. However, artificial intelligence (AI) and natural language are also increasingly being used to create or inspire designs [6]. In previous studies, algorithms such as Natural Language Processing (NLP), Large Language Models (LLMs) and generative adversarial networks (GANs) have been analysed for this purpose. Based on this, a corresponding AI technology has been established that can generate various types of content such as text, images, audio files and 3D models from text or natural language; this is known as generative AI (GAI) [7].
Developments in the field of AI have become increasingly important in research and industry in recent years. AI has become a generic term for various computer algorithms that can perform tasks that normally require human intelligence, such as learning, recognising patterns, making decisions and understanding language [8]. Machine learning (ML) and deep learning (DL) are the best-known sub-areas of AI. GAI is a newer branch of AI that generates new content from data instead of analysing it [9]. Here, ML is used, for example, to generate new images, texts or sounds independently from data [10]. GAI thus represents a significant advancement compared to previous intelligent systems [11]. The aim here is to generate new data instead of differentiating existing data (through classification, regression or clustering), as is the case with conventional AI models [8]. According to Banh and Strobel [8], GAI models are trained to understand complex data distributions and then generate results that are very similar to real data. Using statistics, GAI models can learn high-dimensional probability distributions from a training dataset and generate new, similar data that resemble the underlying class of training data.
A generative AI trained with 3D modelling data supports, for example, the conception of complex shapes and organic structures through generative design [12,13], by rapidly generating inspiration images via diffusion-based approaches [4,14] or by providing AI-generated technical instructions or recommendations extracted from a knowledge base [15,16]. Three-dimensional printing then offers the ideal manufacturing technology to produce the generated elements, as even very complex, abstract structures can be printed relatively cost-effectively with AM [17,18].
Generative AI applications are already used in everyday life to generate texts, images and computer code [14,16,18]. The creation of content from text instructions has made considerable progress with the development of special algorithms, access to very large datasets via the internet and the availability of enormous computing power, while the creation of 3D content is only making slow progress [5]. Existing models for generating 3D objects can only access comparatively small 3D datasets and are therefore currently still very limited in their possibilities [5].
This paper examines the possibilities that generative AI offers specifically for additive manufacturing. The typical AM process chain, from development to production and finishing, is considered with regard to the possible applications and integration options for generative AI tools. Exemplary scenarios or use cases are presented, analysed and evaluated. Firstly, we analysed how well a conversational AI chatbot can answer domain-specific AM questions. Secondly, we analysed how Text-to-image algorithms can be used to generate basic design inspiration for 3D printing, and thirdly, whether Text-to-3D algorithms can be used to generate and print meaningful 3D models for AM parts. The results will then be used to identify the future potential of generative AI tools in the AM process and to support existing processes through the use of generative AI. We will also discuss where the use of such tools makes little sense or can even have potentially dangerous effects.

2. Related Work

In the field of additive manufacturing, there are currently very few publications that deal with the topic of generative AI. Badini et al. [18], for example, have investigated the potential of Chat Generative Pre-trained Transformer—ChatGPT (OpenAI, San Francisco, CA, USA)—a Large Language Model, to generate more efficient G-code for the printing process. For this purpose, ChatGPT was trained with existing G-code data to automatically generate optimised G-code for specific materials, printers or print settings. It was shown that the use of ChatGPT contributes to AM process optimisation and improves the efficiency and accuracy of G-code generation [18]. This can ultimately save a significant amount of time and material.
Jasche et al. [19] have investigated how specially trained chatbots can provide general support when using a 3D printer. To this end, a fully functional chatbot was developed that introduces users to 3D printing. Users can ask the chatbot specific questions about the 3D printing process and receive specific answers or instructions in return. The chatbot acts as a human–machine interface and can thus facilitate the interaction of both beginners and experienced users with 3D printers [19]. Jasche et al. [19] showed that by integrating the most important activities of the 3D printing workflow into the chatbot’s database, the entry barrier for newcomers to the technology was lowered and they were able to start a successful print in just a few minutes. In addition, the implementation of more sophisticated functions could also be interesting for experienced users.
Ballagas et al. [20] have developed a human-centred approach for the use of AI to support 3D design tasks, where design intentions are expressed at a high level through language and translated by AI into new 3D designs. For this purpose, the authors trained a generative adversarial network to create a model for describing the design of sunglasses using natural language inputs that allow consumers to express their design preferences without design tools or expertise [20]. The new designs generated by the AI, which are highly customised to the desired fit or style, for example, can then be quickly printed using 3D printing. However, the AI model developed was very limited in terms of realisable design intentions and had problems with ambiguities in particular [20]. According to Ballagas et al. [20], a better design workflow with the complementary strengths of AI combined with a human expert would still be an effective optimisation and enable the next level of customisation and personalisation.
Hyunjin [21] examines how the development of AI in combination with 3D printing is changing the design process and the manufacturing industry. A generative design process based on AI plays a central role here, making it possible to create a large number of designs based on special boundary conditions in the shortest possible time. This reduces the level of creativity required for the design process and leads to an automatic accumulation of design-specific knowledge so that the AI-generated results can relieve the designer of important tasks [21]. Once the AI has finalised one or more designs, these can then be produced in small quantities and very individually using the 3D printing process [21]. According to Hyunjin [21], AI design and 3D printing will revolutionise the diversity and efficiency of manufacturing and make products more consumer-oriented by involving consumers more in the production process, contributing to the democratisation of manufacturing.
Jaruga-Rozdolska [22] demonstrated a concrete application of generative, image-generating AI. To this end, the functionality of the AI tool Midjourney (Midjourney Inc., San Francisco, CA, USA) and its potential application in architecture were explained. Specific examples were used to demonstrate the tool’s potential to support creative processes and illustrate how Midjourney works, as well as how it can be used in architecture. The research shows that Midjourney AI can be a valuable tool for the architect and is able to support the creative thinking process [22]. According to Jaruga-Rozdolska [22], for example, the buildings in the AI-generated digital images are aesthetically pleasing and correspond correctly to the words entered. In addition, the speed of image creation is significantly higher than that of comparable conventional software. However, it was also shown that AI is only capable of making autonomous decisions to a limited extent and that the user is still responsible for providing relevant input, evaluating the results and making decisions that lead to the desired outcome [22]. In architecture, an AI-generated image is therefore only the first stage of creation, but it can significantly reduce the time and labour required to develop designs [22].
Gozalo-Brizuela and Garrido-Merchán [23] have compiled a complete overview of generative AI applications. In their study, the authors present a comprehensive overview of more than 350 generative AI applications covering a broad spectrum, e.g., conversational AI, image processing, text-to-video and text-to-3D algorithms [23].

3. Applications and Tools for Generative Artificial Intelligence in Additive Manufacturing

This work analyses three AI application areas in more detail, in which the use of AI is already taking place in the context of AM processes or at least appears to be potentially useful. Some of the currently most popular AI tools for these tasks are also evaluated and validated using practical AM use cases. For this purpose, the investigated areas of the application of GAI in AM are listed individually below, explained and linked to 3D printing.

3.1. Conversational AI Used to Generate Additive Manufacturing Knowledge

3.1.1. General Aspects of Conversational AI

Conversational AI, currently one of the most discussed topics in the field of AI, aims to improve the interactions between humans and computers [23,24]. According to Kulkarni et al. [25], this technology is a subfield of AI that deals with speech- and text-based AI agents or chatbots that can simulate and automate conversations and verbal interactions by converting text input (prompts) into text output [23].

3.1.2. Technical Background of Conversational AI

Conversational AI is based on Natural Language Processing, a collection of computer techniques for automatically analysing and representing human language, and it is revolutionising the way humans interact with computers [24,25,26]. This technology is also supported by LLMs, which are statistical models that assign a probability to a sequence of words and can contain hundreds of billions of parameters [23,27,28]. LLMs such as GPT-3, PaLM and LLaMA are trained on large text datasets in order to learn human-like conversations or other intelligent activities and subsequently create them themselves [23,27]. Some examples are text generation and logical thinking [29], mathematical understanding or programming support [30] and applications related to demand forecasting, warehouse optimisation and risk management in supply chains [31].
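To illustrate what "assigning a probability to a sequence of words" means in practice, the following minimal sketch scores a sentence with the small, openly available GPT-2 model via the Hugging Face transformers library. The model choice and example sentence are illustrative assumptions and are not taken from the cited works; large LLMs apply the same principle at a far greater scale.

```python
# Minimal sketch: a language model assigns a (log-)probability to a word sequence.
# GPT-2 is used only because it is small and openly available.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "Fused deposition modelling builds parts layer by layer."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # Passing the input ids as labels makes the model return the mean negative
    # log-likelihood of each token given all preceding tokens.
    outputs = model(**inputs, labels=inputs["input_ids"])

# The loss is averaged over the predicted tokens (all tokens except the first).
num_predicted = inputs["input_ids"].shape[1] - 1
log_prob = -outputs.loss.item() * num_predicted
print(f"log P(sequence) over {num_predicted} predicted tokens: {log_prob:.2f}")
```

A fluent, plausible sentence receives a higher (less negative) log-probability than a garbled one, which is exactly the statistical behaviour that conversational AI builds on.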

3.1.3. Existing Conversational AI Tools

The current best-known tools for the direct application of conversational AI are ChatGPT (OpenAI, San Francisco, CA, USA), Google Gemini (Alphabet Inc., Mountain View, CA, USA), Microsoft Copilot (Microsoft Corp., Redmond, WA, USA) and the Enhanced Representation through Knowledge Integration (Ernie) chatbot (Baidu, Beijing, China) [23,32]. However, according to Rudolph et al. [32], ChatGPT (Chat Generative Pre-trained Transformer) in particular has recently attracted a lot of public attention and fuelled the hype surrounding the technology. ChatGPT is based on an architecture that combines pre-trained deep learning models with a programmability layer and trains them with millions of conversations from various sources to generate natural conversations [31]. Developments in this area are progressing rapidly, and all known tools are becoming better and better at conducting human-like conversations [32].

3.1.4. Relevance of Conversational AI in Additive Manufacturing

For AM in particular, developments in this area are currently still taking place mainly in the research environment [18,19,20]. However, the first industrial companies are already experimenting with the technology [33,34]. The aim is to develop chatbots that are trained with specific 3D printing-related data and can then provide concrete answers or assistance with 3D printing-related questions or problems. In essence, technically relevant knowledge on additive manufacturing is to be generated on the basis of text input. This speeds up manual research as well as “trial-and-error” tests and saves costs [33,34].

3.2. Text-to-Image Algorithms to Generate Additive Manufacturing Design Approaches

3.2.1. General Aspects of Text-to-Image Algorithms

AI which is able to generate images from text input has recently attracted a great deal of public interest [35]. Generating realistic images from text input (Text-to-image) that also correspond semantically to the given text descriptions is a difficult challenge, but it also has enormous application potential, e.g., in image creation, editing, design inspiration or CAD [22,36].

3.2.2. Technical Background of Text-to-Image Algorithms

Text-to-image algorithms are based on NLP and computer vision [37] and have received a significant boost through the use of GANs [36,38]. The combination of generative algorithms and an intuitive interface that allows a user to enter natural language enables the creation of images that illustrate concepts described in the text [35]. Modern state-of-the-art generative models for the generation of high-resolution images are diffusion models, which, according to Ho et al. [39], break down image generation into many small denoising steps. This allows them to generate high-quality images that are suitable for creative processes such as art, design and photography, as well as for the creation of raw data for other AI processes such as image classification [39].
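As a concrete illustration of how such a diffusion model can be called in practice, the following sketch uses the open StableDiffusionPipeline from the Hugging Face diffusers library. The model identifier, prompt and parameter values are illustrative assumptions and do not correspond to the tools used in the cited works.

```python
# Minimal sketch: generating a design-inspiration image from a text prompt with an
# open diffusion model. Requires diffusers, transformers, torch and a CUDA GPU;
# model ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "3D-printed lamp, generative design, lattice structures"
# The pipeline internally performs the iterative denoising steps described above.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lamp_inspiration.png")
```

The guidance_scale parameter controls how strictly the generated image follows the text prompt, which corresponds to the prompt refinement discussed later for the Midjourney use case.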

3.2.3. Existing Tools with Text-to-Image Algorithms

The current best-known tools for generating images from text prompts are DALL-E 2 (OpenAI), Midjourney (Midjourney Inc., San Francisco, CA, USA) and StableDiffusion (Stability AI Ltd., London, UK), which are, in principle, based on diffusion models [35]. Midjourney, and the launch of the Midjourney community in particular, has unleashed creativity in dealing with image-generating AIs because, unlike before, the generated images, including the text prompts, are shared with a community, thus creating a huge social learning community [40]. In addition, the degree of user involvement with Midjourney is also rated as the highest among the available tools [22].

3.2.4. Relevance of Text-to-Image Algorithms for Additive Manufacturing

As with AI chatbots, Text-to-image algorithms are currently mainly used in AM research, but only very occasionally [21,41]. Industrial applications or solutions cannot be found at present. Text-to-image tools such as Midjourney are also being used more and more frequently to speed up design tasks before the printing process or to obtain initial design suggestions for print files based on text inputs [4,22]. These text inputs can also be generated by chatbots such as ChatGPT, and the generated 2D images can then be used as input for the generation of realistic 3D models [42]. AI design support thus enables greater automation and acceleration of the development and design process [22,35]. Expensive human resources, e.g., for CAD modelling, can also be replaced by cheaper computer activities, although this is the subject of much controversy [35,43]. The basic process from text prompt to printed component is shown in Figure 1 below. There, a 2D image is generated from text using a GAI algorithm. This then serves as a design template for a CAD draft, from which Standard Tessellation Language (STL) files are generated for print preparation, known as slicing. The sliced files are then transferred to a 3D printer, which additively manufactures the component according to the generatively created and modelled design.
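Regardless of whether the design template comes from an AI-generated image or a conventional sketch, the exported STL file must describe a closed ("watertight") mesh before slicing. The following sketch, which assumes the open-source trimesh Python library and a hypothetical file name, shows a basic pre-slicing sanity check of the kind implied by this process chain; it is not part of the cited workflow.

```python
# Minimal sketch: checking an exported STL before slicing.
# The file name is a hypothetical placeholder; trimesh is an open-source library.
import trimesh

mesh = trimesh.load("lamp_design.stl")

print("Triangles: ", len(mesh.faces))
print("Watertight:", mesh.is_watertight)   # a closed surface is required for reliable slicing
print("Extents:   ", mesh.extents)          # overall size (STL is unitless; mm is assumed here)

if not mesh.is_watertight:
    # Simple automatic repairs; severe defects still need manual CAD rework.
    trimesh.repair.fill_holes(mesh)
    trimesh.repair.fix_normals(mesh)
    mesh.export("lamp_design_repaired.stl")
```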

3.3. Text-to-3D Synthesis to Generate 3D Models Suitable for Additive Manufacturing

3.3.1. General Aspects of Text-to-3D Algorithms

With the advent of Text-to-image algorithms, interest in algorithms that can create 3D objects generatively is also increasing [44]. Recent breakthroughs in pre-trained Text-to-image diffusion models have led to the development of Text-to-3D algorithms that can generate realistic 3D or CAD models from text prompts or 2D images [42,45,46,47]. This offers considerable potential for people without 3D CAD design experience, for the creation of generative designs for 3D printing or generally for individualization and increased productivity in the design process [35].

3.3.2. Technical Background of Text-to-3D Algorithms

Text-to-3D algorithms can be trained with special structures such as voxels and point clouds, which require relatively large datasets of 3D data [45]. A new approach also learns 3D structures from diffusion models pre-trained with images [42]. However, it would be ideal if a user could generate any 3D shape from an abstract text description of an object [45]. Khalid et al. [45] have developed a corresponding technique based on a large dataset of rendered images and text inputs, which modifies a 3D shape using only a text prompt. Overall, there are still some limitations in the Text-to-3D area, e.g., the insufficient amount of available training data [46], the occurrence of unwanted artefacts in the generated model [45] or the insufficient resolution of the generated 3D models [42].

3.3.3. Existing Tools with Text-to-3D Algorithms

Only very few accessible tools could be found for Text-to-3D applications. Perhaps the most promising tool is 3DFY Prompt (3DFY.ai Ltd., Haifa, Israel), which can generate relatively high-quality 3D models from simple text prompts. The application is based on large datasets from categorised 3D models. Point-E and Shap-E from OpenAI offer further applications. However, these applications are currently only accessible in a very rudimentary way via basic programming environments. Another Text-to-3D application is Magic3D (Nvidia Corp., Santa Clara, CA, USA), which is not yet practically available for testing.

3.3.4. Relevance of Text-to-3D Algorithms in Additive Manufacturing

Text-to-3D algorithms are not yet used in 3D printing practice. However, it is conceivable that such algorithms could be used to create 3D models for AM components based on text descriptions of the desired design [35]. The corresponding process flow is shown below in Figure 2. First, a text prompt describing the desired design is entered and the design is generated using a GAI 3D synthesis algorithm. This design can then be exported and prepared directly in a slicer program for 3D printing. A subsequent CAD design is not necessary, so this process step can be skipped here.

4. Implementations of Generative Artificial Intelligence in Additive Manufacturing

This section shows the implementation of the three previously presented application options regarding GAI specifically for AM use cases. We investigated how well a conversational AI chatbot can answer specific AM questions, how Text-to-image algorithms can be used to generate basic design inspiration and whether Text-to-3D algorithms can be used to generate meaningful 3D models of AM components that can subsequently be printed.

4.1. AI-Powered Conversational Chatbots as Additive Manufacturing Assistants

The aim of this use case was to use a digital AI-based chatbot to create the most detailed and interactive instructions possible for a specific 3D printing job. Specifically, the chatbot was to guide an inexperienced user through a 3D printing process with Material Extrusion (MEX), in particular, Fused Deposition Modelling (FDM), using a special 3D printer. An Ultimaker S3 (Ultimaker B.V., Geldermalsen, Netherlands) was used as the printer. The chatbot should first provide basic process knowledge, explain the process flow and then guide the user step-by-step through the printing process.
First, a conversational AI chatbot based on ChatGPT, with the model configuration GPT-3.5, was configured for this use case and additionally trained with specific 3D printing expertise from three scientific AM and FDM reviews [48,49,50] and the Ultimaker S3 user manual [51]. There are various tools that offer a browser-based development environment for configuring and training a chatbot. The chatbot builder tool Orimon.ai (Unlax Consumer Solutions Private Ltd., Delhi, India), which uses NLP and ML algorithms to understand conversations and provide personalised responses, was used in this work. With this tool, documents can be uploaded as a PDF file and used together with GPT-3.5 to train a special chatbot. An imaginary design was assumed as the print file, which was to be available to the user in the form of an STL file. The STL file should be prepared for printing using the Ultimaker Cura slicing software version 5.4.4 (Ultimaker B.V.) and then printed on the Ultimaker S3 3D printer with a PLA filament (Ultimaker B.V.).
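Orimon.ai handles the document upload and training through its web interface, and its internal mechanics are not public. Purely to illustrate the underlying pattern, retrieving passages from uploaded documents and passing them to GPT-3.5, the following sketch uses the pypdf and openai Python packages; the file name, the naive retrieval heuristic and the prompt wording are assumptions and do not represent the tool's actual implementation.

```python
# Sketch of a document-grounded 3D-printing chatbot: extract text from an uploaded
# manual, pick the most relevant passages by naive keyword overlap and let GPT-3.5
# answer on that basis. Illustrative only; not how Orimon.ai is implemented.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# 1. Extract the knowledge base from a PDF (e.g., a printer manual or an AM review).
reader = PdfReader("ultimaker_s3_manual.pdf")   # hypothetical file name
chunks = [page.extract_text() for page in reader.pages if page.extract_text()]

def retrieve(question: str, top_k: int = 3) -> str:
    """Very naive retrieval: rank chunks by the number of words shared with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return "\n---\n".join(ranked[:top_k])

def ask(question: str) -> str:
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a 3D-printing assistant. "
             "Answer using the provided excerpts; say so if they are insufficient."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask("Which print parameters should I check before slicing PLA?"))
```

Adding new expert knowledge, as described below for the missing print parameter values, then simply means extending the extracted text chunks and re-running the retrieval.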
At the beginning of the use case, the chatbot was asked what FDM is and how the process works. The next question was how the process flow from a 3D design to a finished printed part works (see Figure 3). The chatbot was able to answer both questions in a well-founded, detailed, linguistically skillful and fast manner. Based on the listed process steps, the user can then independently search for further information or request it from the chatbot.
After the FDM process was explained, specific questions were asked about slicing and what needs to be considered during this step. The chatbot’s answers were comprehensive but lacked specific details about the parameter settings (Figure 4). These were then requested again, but the chatbot was unable to return any specific print parameter settings (see Figure 5, left), as it did not yet have enough information to give a more detailed answer.
This question was then processed manually via Orimon.ai’s chatbot tool and answered by a human 3D printing expert. Specifically, the information on printing temperature, printing speed, layer thickness and infill was supplemented with recommended values for PLA filament. The corresponding data were added to the chatbot’s training database and a re-training of the chatbot was then initiated. The same question about the print parameter settings was then asked again, and the chatbot was able to give detailed information about the print settings (see Figure 5, right).
The suggested parameter settings were then used for the imaginary slicing and the file was prepared for actual 3D printing and saved on a USB stick. The chatbot was asked again what the next steps were, how to start printing and what to do afterwards (Figure 6). The chatbot also had detailed answers to these questions which guide the user through the printing process.
From a subjective point of view, the use of this chatbot as an introduction to practical 3D printing saves several days of familiarisation time compared to a new employee who has to familiarise themselves with the topic independently. This means that a new employee can print their first successful part in just a few hours instead of several days. In addition, better printing results can be expected immediately, as the chatbot can pre-set already established print settings.

4.2. Generation of Design Templates for Additively Manufactured Objects Using Midjourney

The aim of this use case was to utilise the Text-to-image AI tool Midjourney, which analyses text input data and generates images from it using generative diffusion-based models (see Section 3.2.3). The aim was to create the most realistic designs possible for a lamp to be additively manufactured. The lamp should have an aesthetic shape and feature special 3D printing elements such as lattice patterns. Functionality also had to be considered so that light sources could also be used, for example. However, the overall aim was to obtain inspirational lamp designs as quickly and easily as possible, which could then be used as a template for a CAD design and for a real print of the design using FDM.
At the beginning of the investigation, the following simple prompt was used: 3D-printed lamp. Based on this text input, the Midjourney tool then generated different design variants of a 3D-printed lamp (see Figure 7). Based on this, attempts were made to further optimise the design in order to come even closer to a real design and the desired requirements. To this end, the text prompt was expanded by adding additional keywords and design terms such as “generative design” and “lattice structures”, which take greater account of aspects such as functionality, design and object representation (see Figure 8). New designs were then created by Midjourney using the new text prompts. In principle, it would also be possible to use already created designs as a basis for further optimisations, but this was not done in this study in order to avoid limiting the diverse design possibilities of Midjourney too early and also to be able to demonstrate them better. Based on the initial prompt, it was first defined that the lamp should have a flat base or stand and be designed generatively (Figure 8A). The background was then standardised (B) and tested to see what a design with a stronger focus on lattice structures would look like (C). Ultimately, a specific attempt was made to create a special drop shape with an upward-facing opening for the lamp (D).
In the end, a final design was selected, which was created as a high-resolution, downloadable and editable image. For test purposes, the basic design was modelled with Autodesk Fusion 360 (Autodesk Inc., San Rafael, CA, USA). The size was freely chosen with a maximum diameter of 100 mm and a height of 180 mm. Exact remodelling based on the generated image was not possible, as only a 2D image perspective of the lamp was generated and specific manufacturing conditions for FDM printing had to be considered. However, the Midjourney design and the design aspects presented can serve as a strong design orientation (see Figure 9). The lamp design was then exported as an STL file from Fusion 360 and prepared for printing on an Ultimaker S3 FDM printer with Ultimaker Cura. Ultimately, a prototype of the sliced design was printed using PLA Plus filament. The printability of the final part was good overall, but several design and print iterations had to be carried out before a visually pleasing result was possible. The AI-generated design could therefore ultimately not be implemented 1:1; certain restrictions had to be observed in the post-modelling and printing processes.
With the support of a Text-to-image AI tool, the development of a design proposal or draft variant can be carried out much faster. Designing according to a visual and detailed image is also faster. Overall, designs can be fully conceptualised and constructed in just a few hours. In contrast, a conventional design and construction phase can take several days.

4.3. Generation of a 3D Part Model for Additive Manufacturing Using Shap-E

The goal of this use case was also to create a lamp design and a print file for 3D printing, but this time using a Text-to-3D synthesis algorithm without human CAD design. As Text-to-3D algorithms are generally not yet very mature, more attention was paid to the basic feasibility of model generation and less to the aesthetics and functionality of the generated designs. However, the lamp model generated should be printable using FDM.
The tool Shap-E (OpenAI) was used to generate a 3D design from a text prompt. Shap-E uses latent diffusion models and Neural Radiance Field (NeRF) algorithms in the background to generate flexible and realistic 3D elements [44]. For this purpose, the official repository with program code and explanations was used, which was implemented via Google Colab (Alphabet Inc.). The program code was not changed and a T4 GPU was used for the calculations.
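The steps executed in the Colab notebook correspond roughly to the following sketch, which is based on the text-to-3D sampling example in the official openai/shap-e repository; the exact parameter values shown here are assumptions and may differ from the notebook defaults used in this work.

```python
# Sketch of text-to-3D generation with Shap-E, following the sampling example in the
# official openai/shap-e repository; parameter values are illustrative assumptions.
import torch
from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

xm = load_model("transmitter", device=device)      # decodes latents into 3D representations
model = load_model("text300M", device=device)      # text-conditional diffusion model
diffusion = diffusion_from_config(load_config("diffusion"))

latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=["3D-printed lamp"]),   # the text prompt
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Convert the latent into a triangle mesh and export it for print preparation.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("lamp.obj", "w") as f:
    mesh.write_obj(f)
```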
Initially, the following prompt was used again: 3D-printed lamp. Based on this text prompt, the Shap-E algorithm generated a 3D design that looks similar to a classic bedside lamp (see Figure 10a). This design is not printable as the lampshade and the rest of the lamp body are not connected to each other. Unconnected elements were also the result of other prompts that focused more on the external shape of the lamp and also described more complex structural elements (Figure 10b,c). A printable lamp does not seem to be possible with Shap-E, which is why the prompt was changed to print a vase. With this new prompt, a relatively clear design, which looks more like a vase and is printable, is shown in Figure 10d.
Based on the initial design results, it was found that design elements that lead to holes in the structure (such as lattice structures or honeycombs) do not give good results and lead to many unconnected areas that cannot be printed. It is currently not possible to influence the position and shape of the holes or specific detail elements. In general, closed, connected areas work better; these tend to result from simple prompts that are limited to the description of flowing shapes. The term “lamp” also had to be replaced by “vase”, as otherwise a complete lamp with a shade and base was always produced. These findings ultimately led to a final design, which had a simple vase shape without detailed elements and could be exported as an OBJ file without further CAD processing (see Figure 11, left). The exported design had a very small size, approx. 1.4 mm in diameter and approx. 2.0 mm in height, which could not be explicitly influenced. This small size probably resulted from the available or allocated computing capacity of the hardware; larger models would require significantly more computing capacity and longer run times. The file was then scaled up in Ultimaker Cura so that the design could be printed well. The final size was approx. 50 mm in diameter, with a height of approx. 70 mm, which corresponds to about 3500% of the original size. This size was very easy to print with an Ultimaker S3 printer (Figure 11, right). No further design iterations or adjustments were necessary for the resulting part, as it could be printed directly on the first attempt with standard printing parameters for PLA Plus and showed good print quality.
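The scaling was carried out interactively in Ultimaker Cura; the same adjustment could also be scripted, for example with the trimesh library as sketched below. The scale factor and file names are illustrative assumptions derived from the sizes reported above, and a single-mesh OBJ file is assumed.

```python
# Sketch: scaling the tiny exported OBJ to a printable size outside the slicer.
# The factor of 35 (3500%) and the file names are illustrative assumptions.
import trimesh

mesh = trimesh.load("vase_shap_e.obj")          # assumes the OBJ contains a single mesh
print("Original extents (mm):", mesh.extents)   # roughly 1.4 x 1.4 x 2.0 in this use case

mesh.apply_scale(35.0)                          # uniform scaling to ~50 mm diameter, ~70 mm height
print("Scaled extents (mm):", mesh.extents)

mesh.export("vase_shap_e_scaled.stl")           # STL for slicing in Ultimaker Cura
```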
With a Text-to-3D AI tool, the entire development process of an additive component can be accelerated immensely. Calling up the Text-to-3D synthesis algorithm and creating a printable design can be carried out by an experienced user in just a few minutes. The biggest time saving is achieved by bypassing or automating the time-consuming and manual design process. This alone can save several hours of design work.

5. Discussion and Future Perspectives

5.1. General Aspects of Generative Artificial Intelligence

In this work, the possibilities of generative AI in additive manufacturing were investigated. Specific application scenarios within the AM process were developed and solutions based on GAI tools were implemented. The solutions shown illustrate the potential of this new technology and can effectively support the sub-processes under consideration, such as knowledge transfer and design support in additive manufacturing. Fundamentally, GAI opens up new opportunities that can be summarised in a Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis, which is illustrated in Figure 12. The individual aspects of the SWOT analysis and their relationship to additive manufacturing are explained in more detail below.
One of the greatest strengths of generative AI is the rapid incorporation of domain-specific literature or expertise on a topic. Based on this knowledge, special routine activities can then be taken over and automated by AI tools, e.g., chatbots for product advice. Through continuous learning processes, expansion of the database and the optimisation of algorithm performance, the results are also constantly improving; the productivity of the solutions, the creativity, e.g., of Text-to-image algorithms, and even the number of different angles and perspectives on solving a problem increase.
However, generative AI applications currently still have some weaknesses. Although expertise can be quickly absorbed and utilised, the results usually lack the necessary quality and the specific detailed experience that experts intuitively contribute to the ongoing training process and which are therefore not considered by the AI. The AI algorithms also only have a rudimentary understanding of correlations. Further weaknesses lie in data sovereignty: large amounts of training data are required for the algorithm to perform well, but these are difficult to obtain and manage. Furthermore, there are currently still technological barriers to the use of GAI tools. A lot of experience is required, for example, to generate good images from text or to recognise undesirable results. AI algorithms can also generate incorrect or inaccurate results that are not based on the training data or have been incorrectly decoded. These so-called hallucinations can only be recognised with appropriate experience. Ultimately, the consumption of resources such as computing power, electricity and money with the increasing use of generative AI is also a weak point in terms of sustainability.
The opportunities for GAI are great, especially if the output quality of GAI solutions is further improved in order to increase their attractiveness and efficiency. Good solutions already reduce the time and cost of development and creative processes, for example by quickly generating images or 3D structures from simple texts. With regard to chatbots, companies will also be able to transfer their intellectual property or that of their employees to a database and make it digitally accessible via a chatbot. This allows various processes to be scaled, for example by answering several enquiries on a single topic with one dataset, one automated process or one technical solution, instead of manually identifying and contacting the right person. At the same time, such requests can also be very specific and personalised despite the scaling, as well-trained algorithms can process them well. This in turn will contribute to a good user experience and acceptance of the technology in the future.
One threat that is frequently discussed is the danger arising from the use of AI or the algorithms themselves [52]. Data security, ethical concerns and the protection of privacy are often discussed in this context. This is due to the fact that the training of GAI algorithms requires large amounts of training data, which are usually not generated entirely by the company itself but come from public sources such as the internet. Individual authors are usually not asked and often not even informed that their data are being used. This then leads to legal and regulatory concerns when using the technology. This also applies to additive manufacturing. The development of special AI solutions for AM applications naturally also requires a lot of AM-relevant data, some of which are freely accessible on the internet but subject to special licences that, for example, exclude commercial use or require the author to be named. It is currently not possible to check compliance with these licences with certainty, which generally raises concerns about the provision of so-called open source data. The content generated can also be deliberately false, dangerous or misleading, for example, used to produce fake news, create political sentiment or unsettle people. Similarly, inappropriate AM training data can be intentionally used for the development of a GAI solution, which in turn produces false results. Another danger posed by the use of GAI is the loss of jobs. People are worried that their work could be taken over by AI in the future, which is possible in some areas, such as text translation. The loss of cognitive skills through the use of AI and the danger of technological singularity, in which AI eventually becomes more intelligent and faster than humans and able to learn completely independently, are less at the centre of discussion but are also dangerous. AI could then almost completely replace humans and possibly rebel against them. However, this applies less to jobs in the AM sector, as AI solutions are currently more of a great help than a threat here, as shown in this study.

5.2. Aspects of Generative Artificial Intelligence in the Context of Additive Manufacturing

Based on the state of the art, it can be seen that the use of generative AI tools in the field of additive manufacturing is not yet very widespread. However, the use cases developed in this work and their associated implementations were able to demonstrate the efficient utilisation of various forms of applications of the technology. The aspects of the SWOT analysis of GAI listed above also apply here, for the most part. In the following, the use cases developed in this work are analysed in more detail and it is discussed whether the use of GAI tools was useful or not. In addition, possible risks due to the further development of the implemented solutions are also highlighted.

5.2.1. Chatbots in the Additive Manufacturing Process Chain

The implementation of a chatbot can be carried out relatively easily and quickly using various online tools. Even with just a little training data and little training time, impressive results can be achieved in demanding conversations. Complex questions are often answered satisfactorily by the trained chatbot and can also be meaningfully supplemented on request. However, if questions are asked that are not directly covered by the underlying training data, there are limitations and the chatbot can only give general answers. This problem can be solved relatively quickly by adding the information manually and retraining the algorithm. Interactions with the chatbot usually take place via browser-based web applications. As long as internet access is available, this works very well, quickly and conveniently for the user. If there is no internet connection, the chatbot cannot be reached. In this case, a local network with a connection to a local server would have to be set up or a local computer used to run the trained algorithm.
The development of a chatbot and its training with specific process expertise offer a wide range of possible applications for conveying information and knowledge along the entire additive manufacturing process chain. Manual instructions can be automated very well with a chatbot, as the use case shows in detail. Other internal and external company processes could also be easily automated, e.g., AI-based support via chat for customer problems or familiarising new employees with special software. Another option is to collect the expertise of a company and its employees in a database and use this to train a chatbot for internal or external training courses. This collected knowledge can then be digitised, used, expanded and optimised at any time. This has the advantage that the knowledge remains available regardless of illness or the age-related retirement of employees and, once collected and combined, it also enables new perspectives and increases productivity. At the same time, however, companies are also confronted with new risks in terms of digital data security. There is also the danger that a company becomes too dependent on the knowledge of the AI and no longer has any knowledge of its own in the event of a failure. A modern IT landscape and security structure within the company is therefore essential in order to utilise and protect digital technologies efficiently. Cybersecurity in particular will play an important role in the future.

5.2.2. Additive Manufacturing Designs from Text Generation Algorithms

In addition to the use of AI-based chatbots, Text-to-image and Text-to-3D algorithms also offer innovative potential for additive manufacturing. With the help of suitable algorithms, pictorial and three-dimensional designs can be created from text descriptions in a very short timeframe. These algorithms can be used via various online tools, although they are at different stages of development. In particular, the creation of 2D images from text is already possible today. Various tools offer access to very well-trained Text-to-image algorithms, but some experience is needed to use them. Good construction and design knowledge is required for the modelling, and one also has to be familiar with the CAD program in order to be able to create the structures approximately according to the image template. A high level of expertise in print preparation and the specific printing process is also required to enable good printing. The technical implementation of the technology and its integration into the process flow is therefore still challenging, and a relatively high level of experience is required to achieve high-quality results. Creating 3D elements from text is even more complicated and is currently only possible with a few programs that also require some programming knowledge to start the tool and to be able to enter and change text prompts. The creation of sophisticated content or usable 3D design files is generally still difficult. If these files are then to be processed directly, without further optimisation, only relatively simple geometric designs make sense. Technical applications cannot yet be easily developed with them.
Nevertheless, this also offers users with little or no creative potential the opportunity to visualise their thoughts through text or to present simple designs without special design software. Furthermore, time-consuming development steps such as design finding and variant development for additively manufactured parts can be accelerated by using GAI to promptly create design inspirations or design templates via text. However, it should be noted that the technology is currently still limited and a lot of experience is required to create good designs from text. Especially with Text-to-3D algorithms, the achievable complexity of the generated structures is still low and the control options provided by the text are only rudimentary. In the short term, it will not be possible to replace conventional design and construction processes, but this cannot be ruled out with the further development and improvement of these algorithms. This in turn creates new risks, e.g., the work of designers and engineers could potentially be replaced by AI, or these skills could be used less and lost over time. This could ultimately lead to a self-reinforcing dependence on artificial intelligence. In addition, individual human creativity could be weakened and standardised; AI-based design patterns could dominate. The development of fundamentally new solutions and ideas could thus be inhibited, resulting in only familiar AI-generated elements based on historical solutions. This process would also become increasingly self-reinforcing over time.

5.2.3. Future Developments for Chatbots and Text Generation Algorithms in Additive Manufacturing

Generative AI and its applications already show great potential for AM today, but they are not yet widespread and are only suitable for technical applications to a limited extent. In order to make GAI more attractive for AM processes and, for example, to create technically mature solutions, further development work will be necessary in the future. In principle, however, various applications are possible, the prerequisites for which already exist.
In the future, chatbots could always work with references in order to directly cite the sources of the information generated and create more trust. This is already possible, but usually has to be considered in the training process of the respective chatbot using manual entries. Chatbots could be integrated into existing software solutions and support the user there directly. For example, a chatbot could be integrated into finite element analysis (FEA) software to set up a dialogue-based simulation process, so that complicated parameter settings no longer necessarily require special expertise and various iterations but can be established by the program itself. In addition, chatbots could also be connected directly to the printer and monitor the ongoing printing process by regularly comparing the generated data to the knowledge database and directly recommending measures in the event of deviations. Finally, special networks consisting of software, hardware and chatbots could also support process monitoring and, if necessary, directly evaluate in situ data analysis results from networked solutions and use them to optimise processes.
Text generation algorithms also offer great potential for AM. Here, special algorithms for generating 3D models from text could be developed and trained in such a way that they are specifically geared towards 3D-print-compatible designs and always generate structures with complex shapes and grid elements accordingly. It would also be possible to specialise them for different AM processes so that the user describes their project, specifies the AM process to be used for production (e.g., Fused Deposition Modelling, Powder Bed Fusion, etc.) and ultimately receives a production-ready design that considers the requirements of the respective process. In addition, specific material and component data could be integrated into the algorithm in order to influence the design to be generated in a targeted manner. In principle, the designs for steel could be more filigree than for plastic, and the user could also specify their size requirements directly so that the component does not have to be rescaled, as seen earlier. Companies could develop their own Text-to-3D or Text-to-image algorithms using their own CAD database and use them to create variants of designs without long development times, which can then also be produced economically in smaller quantities with AM. This would make it possible to integrate patented solutions or effective functional principles into new solutions or to create new products for existing production lines by defining specific boundary conditions. For example, a car manufacturer could have a new component developed in just a few days that is adapted to the boundary conditions of its own 3D printers in terms of size, material and quality; is based on its own expertise; and is also similar to the existing product portfolio in terms of certification and qualification aspects, thus enabling synergy.
The combination of different GAI tools could also offer great opportunities in the future. For example, a simple chatbot that explains the use of Text-to-3D or Text-to-image tools would already be very helpful for creating better elements. In the future, this could be developed into a combined tool that asks the user specifically for design ideas, materials, production options, etc., and automatically creates a design based on this information. Something similar is already being carried out today, for example, by letting ChatGPT create a text prompt for Midjourney.
Table 1 summarises the future development potential of chatbots and text generation algorithms in additive manufacturing.

6. Conclusions

In this work, three different use cases of the application of generative AI in additive manufacturing were developed and the possibilities offered by generative AI tools in this area were analysed. Specific AI solutions were implemented using a chatbot and text generation algorithms to provide meaningful support for existing AM processes and demonstrate their technical implementation and complexity. GAI could be integrated as useful support in all analysed use cases. The main task of the generative AI was to support the AM design and production process so that the development and production processes could ultimately run much faster and more creatively. The technical realisation or implementation of an AI-based chatbot as a helpful assistant in additive manufacturing or the creative use of a Text-to-image AI tool to create AM designs was relatively easy and the results achieved were also very impressive. The direct creation of digital 3D models from texts, on the other hand, was technically more difficult and delivered less convincing results. However, the current and future potential of GAI for additive manufacturing was presented and explained for all use cases. General aspects of GAI were also summarised and explained in a SWOT analysis. Based on this, it was ultimately analysed in detail where the use of generative AI tools can be useful and where current or future potential risks may arise. To summarise, the following advantages of GAI in AM can be highlighted, which also serve as the key findings of this study:
  • GAI is a useful support for various AM processes;
  • GAI can help speed up or even replace AM processes;
  • GAI increases creativity in AM design;
  • GAI can be used to digitally collect, analyse and profitably evaluate information and expertise.
Based on these key findings, further work should be carried out to analyse the possible risks and, thus, disadvantages of GAI in additive manufacturing in more detail and not just look at the possible advantages. In this process, AI algorithms should be fed and trained with significantly more training data in order to investigate whether AI can then increasingly make cross-connections between that information on its own and learn independently. Furthermore, it should be investigated, in the long term, whether human skills such as creativity, self-drive and personal knowledge suffer from the use of efficient AI tools. However, the optimisation of the AI implementations presented in this paper should also be continued in order to further demonstrate their potential. More new use cases should also be developed and presented, with specific AI-based solutions, to support more AM-relevant processes. Ultimately, this will also lead to the development of new useful AI tools directly related to AM and the development of applications to make GAI even more usable. In particular, the creation of digital 3D models from text still requires a lot of development, but, if successfully optimised, it also offers considerable potential for additive manufacturing and beyond in the future.

Author Contributions

Conceptualization, E.W.; Methodology, E.W.; Software, E.W.; Validation, E.W.; Formal analysis, E.W.; Investigation, E.W.; Resources, E.W.; Writing—original draft, E.W.; Writing—review and editing, E.W. and H.S.; Visualisation, E.W.; Supervision, H.S.; Project administration, E.W.; Funding acquisition, E.W. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Regional Development Fund (ERDF) and the Ministry for Economics, Employment and Health of Mecklenburg-Vorpommern, Germany, grant numbers TBI-V-1-345-VBW-118 and TBI-1-026-W-009.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors report there are no competing interests to declare.

Figure 1. Basic process from text prompt to 3D-printed component.
Figure 2. Process flow from text prompt, via a Text-to-3D algorithm, to printed component.
Figure 3. Explanation of the FDM process flow by an AI chatbot.
Figure 4. Chatbot answers about the slicing process.
Figure 5. Chatbot answers to print parameters. Left: general answers after the basic training of the chatbot, right: more specific answers after a manual review of the questions and the re-training of the chatbot.
Figure 6. Chatbot answers regarding printing with a specific 3D printer.
Figure 7. Initial digital designs of a 3D-printed lamp generated with the tool “Midjourney”. Prompt: 3D-printed lamp.
Figure 8. Customization and detailing of text prompts to generate more aesthetically and functionally optimised lamp designs. (A) Prompt: 3D-printed lamp with flat base and generative design; (B) Prompt: 3D-printed lamp with flat base and generative design, grey background; (C) Prompt: 3D-printed lamp with flat base, generative design and lattice structures, grey background; (D) Prompt: 3D-printed lamp with flat base, generative design and lattice structures, drop shape with open top, grey background.
Figure 9. Final digital design of a 3D-printed lamp generated by “Midjourney” (left), the corresponding CAD design (middle) and the 3D-printed lamp from the CAD design (right).
Figure 10. Digital designs of a 3D-printed lamp using “Shap-E”. Prompt for (a) a 3D-printed lamp; (b) a 3D-printed lamp shade; (c) a 3D-printed drop-shaped vase with lattice structures; and (d) a 3D-printed drop-shaped vase.
Figure 11. Final design of an AI-generated lamp or vase with the tool “Shap-E” (left) and the 3D-printed version (right). Prompt: generatively designed drop-shaped vase.
Figure 12. SWOT analysis of generative AI.
Table 1. Summary of future developments of chatbots and text generation algorithms as part of generative AI in additive manufacturing.
Chatbots
  • Referencing the answers of the chatbot with sources and supporting documents
  • Integration of chatbots in software (e.g., simulation software) for interactive user support
  • Connecting chatbots to devices such as 3D printers to monitor the manufacturing process through regular comparison with historical data (a minimal connectivity sketch follows this table)
  • Development of specially automated networks of hardware, software and chatbots
  • Development of a combination of chatbots and text generation algorithms for better explanation, user guidance and prompt input via special query forms for more targeted design generation
Text Generation Algorithms
  • Development of algorithms for 3D-print-compatible designs
  • Development of algorithms for the production-orientated design of a particular AM process
  • Integration of material and part data into the algorithms for better control of the resulting design
  • Development of special company-based algorithms for the integration of patented and existing solutions for the rapid creation of characteristic designs or different design variants
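To illustrate the connectivity ideas listed in Table 1, the following minimal sketch outlines how a chatbot could be given access to live printer data. It assumes an OctoPrint-style REST endpoint on the printer side and the OpenAI Python client on the chatbot side; the printer URL, the API keys and the model name are placeholders, not a tested configuration.

```python
# Minimal sketch: feed live 3D-printer status into a chatbot prompt.
# Assumptions: an OctoPrint-compatible REST endpoint and the openai Python client (>=1.0).
import requests
from openai import OpenAI

PRINTER_URL = "http://octopi.local/api/job"   # placeholder OctoPrint-style endpoint
PRINTER_API_KEY = "YOUR_PRINTER_API_KEY"      # placeholder key

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def read_printer_status() -> dict:
    """Query the printer's job endpoint and return the parsed JSON status."""
    response = requests.get(
        PRINTER_URL, headers={"X-Api-Key": PRINTER_API_KEY}, timeout=10
    )
    response.raise_for_status()
    return response.json()


def ask_chatbot_about_job(question: str) -> str:
    """Pass the live job status to the language model as context for the user's question."""
    status = read_printer_status()
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are an assistant for FDM 3D printing. "
                           f"Current printer job status (JSON): {status}",
            },
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(ask_chatbot_about_job("Is the current print on track, and when will it finish?"))
```

In a production environment, such a bridge would additionally log the retrieved status data so that the chatbot can compare the current job with historical prints, as outlined in Table 1.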
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
