Article

Biomaterials Research-Driven Design Visualized by AI Text-Prompt-Generated Images

by Yomna K. Abdallah 1,2 and Alberto T. Estévez 1,*
1 iBAG-UIC Barcelona, Institute for Biodigital Architecture & Genetics, Faculty of Architecture, Universitat Internacional de Catalunya, 08017 Barcelona, Spain
2 Department of Interior Design, Faculty of Applied Arts, Helwan University, Cairo 11111, Egypt
* Author to whom correspondence should be addressed.
Designs 2023, 7(2), 48; https://doi.org/10.3390/designs7020048
Submission received: 1 February 2023 / Revised: 8 March 2023 / Accepted: 22 March 2023 / Published: 24 March 2023

Abstract: AI text-to-image models have transformed the design process since their rapid development in 2022, generating numerous iterations of polished renders in a few seconds from a textual expression of the design concept. This high-potential tool has opened wide possibilities for biomaterials research-driven design, an approach based on developing biomaterials for multi-scale applications in the design realm and built environment, from furniture to architectural elements to architecture. This approach to the design process has been augmented by the massive capacity of AI text-to-image models to visualize high-fidelity and innovative renders that reflect very detailed physical characteristics of the proposed biomaterials from micro to macro. However, this biomaterials research-driven design approach aided by AI text-to-image models requires criteria for evaluating the role and efficiency of employing AI image generation models in this design process. Furthermore, since biomaterials research-driven design is focused not only on design studies but also on biomaterials engineering research and processes, it requires a sufficient method for protecting its novelty and copyrights. Since their emergence in late 2022, AI text-to-image models have raised alarming ethical concerns about design authorship and designer copyrights. This requires the establishment of a referencing method that protects the copyrights of the designers of these generated renders as well as the copyrights of the authors of their training data, by proposing an auxiliary AI model for the automatic referencing of these AI-generated images and of their training data. Thus, the current work assesses the role of AI text-to-image models in the biomaterials research-driven design process and their methodology of operation by analyzing two case studies of biomaterials research-driven design projects performed by the authors aided by AI text-to-image models. Based on the results of this analysis, design criteria are presented for a fair practice of the AI-aided biomaterials research-driven design process.

1. Introduction

To judge the respective roles of human and artificial intelligence creativity in the design process, a question that has recently begun to attract the attention of designers, programmers, and theoreticians, as reported in a limited number of recent studies [1], the current study focuses on a more complex design process that is derived from research on biomaterials. It involves a more scientific question of the composition, scale, and physical characteristics of the material and the dual relationship, compatibility, and fractal dimension between material and form, both represented by AI-generated images. This focus is intentional because AI text-prompt-generated images have excelled in photorealistic material rendering, thus emphasizing the crucial role of creativity in biomaterials research-driven design as a problem-solving approach to achieve sustainability in furniture and architectural design. This is the highest-impact marker that will determine the future relationship between humans and AI in the design realm, whether it will be competition or complementation. This concept of biomaterials research-driven design was proposed earlier by the authors in the book AI to Matter-Reality [2], where various biomaterials research projects shaped design projects that employed AI text-to-image models and classification models in the form-finding, rendering, and visualization phases of the design process. Furthermore, AI machine learning models have also been increasingly applied to bioengineered materials, especially in the regenerative medical biomaterials research field, as reported in [3], and in biomolecular materials, as reported in [4]. However, due to the particular situation of biomaterials research-driven design as an emerging design process with more focus on architectural and furniture design applications, as well as the novelty of the proposed concept of employing AI text-to-image models for biomaterials combination, prediction, and visualization, this complex design process requires study, analysis, evaluation, and criteria. Thus, the current study presents an introduction to AI text-to-image models and their operational methodology, including the categorization and identification of the employed deep learning models, neural networks, training methods, and data mining, focusing on training data sources and their copyright status, which requires accurate referencing. Then, a comparison between the latest and most popular examples of text-to-image models (Stable Diffusion, DALL-E, and Midjourney) will be presented to compare their role in the design process and how this defines the authorship and copyright of AI-generated designs. Consequently, two case studies of biomaterials research-driven design by the authors will be exhibited, explained, and analyzed using SWOT analyses to identify the role of AI text-to-image generated images in this specific category of the design process, with an emphasis on evaluating the AI's contribution to design creativity. Design criteria will be determined from the results of the analysis of these two case studies, and an auxiliary AI model will be proposed for automatic referencing generation, to be embedded in the current AI text-to-image models to protect the copyrights of designers of the AI-generated images as well as to reference the training datasets.

2. AI Text-Prompt-Generated Images: The Models, the Operation, the In-Design Practice, and Authorship Debate

2.1. Text-to-Image AI Models and Architecture

An AI text-to-image model is a machine learning (ML) model that takes a natural language (NL) description as input and produces an image based on processing the provided description. It emerged from advances in deep neural networks (DNNs). The NL model typically converts the text into a latent representation, while a generative model generates an image conditioned on that representation. These models have been trained on immense amounts of image and text data from the web [5]. In general, a generative model (e.g., a generative adversarial network (GAN)) can generate new instances of output variables from a probability distribution inferred from samples of the input variables. Given a training set, a GAN operates by opposing two neural networks in a zero-sum game [6]. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminator differentiates candidates created by the generator from the true data distribution. The training objective of the generative network is to deceive the discriminator network by generating new candidates that the discriminator judges not to be synthesized [6,7]. Independent backpropagation procedures are applied to both networks, so that the generator produces better samples while the discriminator becomes more adept at recognizing synthetic samples. This implies an increase in the scale of the neural networks accompanied by an increase in the scale of the training data. The latent variables produced by GANs as images are commonly used for design purposes such as architecture and product design, as well as for reconstructing 3D models of objects from images [8] and creating novel objects as 3D point clouds [9].
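As an illustration of the adversarial training described above, the following minimal sketch shows one generator/discriminator update step in PyTorch. The architectures, dimensions, and learning rates are illustrative assumptions, not taken from any of the cited works.

```python
# Illustrative sketch: a minimal GAN training step in PyTorch, showing the
# zero-sum game between the generator and the discriminator.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(          # maps latent vectors to synthetic images
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores whether an image looks real
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator: learn to separate real images from generated ones.
    z = torch.randn(batch, latent_dim)
    generated = generator(z)
    loss_d = bce(discriminator(real_images), real) + \
             bce(discriminator(generated.detach()), fake)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: learn to fool the discriminator into labeling fakes as real.
    loss_g = bce(discriminator(generated), real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example usage with a random batch standing in for "real" training images:
losses = training_step(torch.rand(16, img_dim) * 2 - 1)
```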
GANs do not explicitly model the likelihood function, nor do they offer a method for locating the latent variable corresponding to a given sample [6]. This obscures the relation between the fed samples (images) and the results (latent variables), which plays a crucial role in judging the authenticity of the results produced by GANs, and it hinders monitoring of the GAN's learning performance, unlike some generative models, such as flow-based generative models, that trace the likelihood ratio between the reference distribution and the generator distribution.
The main principle of how text-to-image models work stems from the GAN's coupling of a generator, acting as a decoder that maps from a latent space to the image space, with an encoder that produces a latent code for every image. This gave rise to the autoencoder as an auxiliary network that performs the encoding [10]. Similarly, variational autoencoders (VAEs) are unsupervised models that learn a probabilistic latent representation of their inputs; like GANs, they consist of two networks, an encoder and a decoder, and they mainly map high-dimensional data into a low-dimensional space and then reconstruct them back into the high-dimensional space.
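A minimal sketch of this encoder/decoder principle is given below, assuming PyTorch; the layer sizes and the toy data are arbitrary choices for illustration only.

```python
# Illustrative sketch of a variational autoencoder (VAE): the encoder maps
# high-dimensional inputs to a low-dimensional probabilistic latent space,
# and the decoder reconstructs them back into the high-dimensional space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, latent_dim)       # mean of latent distribution
        self.logvar = nn.Linear(256, latent_dim)   # log-variance of latent distribution
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(8, 784)                 # toy batch of flattened images
x_hat, mu, logvar = VAE()(x)
loss = vae_loss(x, x_hat, mu, logvar)
```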
Text-to-image models use various architectures in which a text-encoding phase is performed by a recurrent neural network (RNN), a network that allows output from some nodes to affect subsequent input to the same nodes [11,12], such as a long short-term memory (LSTM) network, which has feedback connections and processes entire sequences of data (e.g., speech or video). Typically, an LSTM unit includes a cell, an input gate, an output gate, and a forget gate [13]. The cell retains values over arbitrary time intervals, and the three gates regulate the flow of information into and out of the cell, qualifying LSTM networks for classification, processing, and prediction based on time-series data.
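The following simplified sketch, assuming PyTorch and an arbitrary toy vocabulary, shows how such a recurrent text encoder can turn a tokenized prompt into a fixed-length representation that a downstream image generator could condition on.

```python
# Illustrative sketch: encoding a tokenized text prompt with an embedding layer
# followed by an LSTM, whose gates regulate what the cell state keeps or forgets
# across the token sequence.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128
embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, 12))   # one prompt of 12 token ids
outputs, (h_n, c_n) = lstm(embedding(tokens))

# `outputs` holds one hidden vector per token; `h_n` is the final hidden state,
# which can serve as a fixed-length text representation for the image generator.
print(outputs.shape, h_n.shape)   # torch.Size([1, 12, 128]) torch.Size([1, 1, 128])
```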
More recently, other text-encoding models have become popular, such as transformer models. These are deep learning models that adopt the self-attention mechanism, differentially weighting the significance of each part of the input data, making them especially useful for natural language processing (NLP). Like RNNs [14], transformers process sequential input data; however, they process the entire input at once, as they have no recurrent structure. The attention mechanism provides context for any position in the input sequence, which allows for more parallelization than RNNs and therefore reduces training times [15]. The transformer model splits the input text into tokens with a byte-pair-encoding tokenizer and converts each token into a vector via a word embedding. Using an encoder–decoder architecture, the encoder processes the input iteratively, one layer after another, while the decoder layers act in the same way on the encoder's output. Each encoder layer mainly encodes which parts of the input are relevant to each other, while each decoder layer does the opposite, taking all the encodings and applying their integrated contextual information to generate an output sequence. Both encoder and decoder layers include a feed-forward neural network for additional processing of their outputs [16].
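A bare-bones sketch of the self-attention computation is given below; the dimensions and random weight matrices are assumptions for illustration, omitting the multi-head splitting and masking used in full transformer layers.

```python
# Illustrative sketch of scaled dot-product self-attention: every position
# attends to every other position in the sequence at once, weighting their
# relevance, with no recurrence.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    # x: (sequence_length, model_dim) token embeddings
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / math.sqrt(k.size(-1))        # pairwise relevance of positions
    weights = torch.softmax(scores, dim=-1)         # attention weights per position
    return weights @ v                              # context-aware representations

d = 32
x = torch.randn(10, d)                              # 10 token embeddings
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
context = self_attention(x, w_q, w_k, w_v)          # shape (10, 32)
```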
Although conventional GANs remain popular for the image generation step, diffusion models have recently been gaining ground. They follow the same overall principle of training a model to generate low-resolution images and using one or more auxiliary deep learning models to upscale them and fill in finer details. Diffusion models are a class of latent variable models, a category of generative, statistical models that relate a set of observable variables to a set of latent variables. The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space [17], and they are trained via variational inference [18]. This qualifies them for image denoising, inpainting, super-resolution, and image generation [19].
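The following sketch illustrates the diffusion principle in a simplified form, loosely following the general denoising diffusion formulation of [18]: noise is progressively added to data, and a network is trained to predict that noise so it can later denoise random samples step by step. The noise schedule, network, and data are arbitrary assumptions.

```python
# Illustrative sketch of one diffusion training step: corrupt clean data with
# scheduled Gaussian noise, then train a network to predict the added noise.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative signal retention

denoiser = nn.Sequential(nn.Linear(784 + 1, 256), nn.ReLU(),
                         nn.Linear(256, 784))         # predicts the added noise

def diffusion_training_step(x0):
    t = torch.randint(0, T, (x0.size(0),))            # random timestep per sample
    noise = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise      # forward diffusion in one jump
    t_feature = (t.float() / T).unsqueeze(1)          # timestep passed as an extra feature
    predicted = denoiser(torch.cat([x_t, t_feature], dim=1))
    return nn.functional.mse_loss(predicted, noise)   # learn to predict the noise

loss = diffusion_training_step(torch.randn(16, 784))  # toy batch of flattened images
```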
Advances in both machine learning algorithms and computer hardware have led to more efficient methods for training GANs and diffusion models containing many layers of nonlinear hidden units and very large output layers [20], especially after the advancement of graphics processing units (GPUs), which, around 2019 and together with cloud computing, replaced CPUs as the dominant means of training large-scale AI models [21].

2.2. Data Mining

Typically, text-to-image models require a large quantity of consistent data to make precise predictions, as they are trained on large web-based datasets of text and image pairs; the larger the dataset, the more difficult it is to train high-quality text-to-image models.
To train a machine learning model, a large and representative sample of data is collected and annotated. The training dataset may be a corpus of text, a collection of images, or data collected from individuals. Usually, the training data consist of a set of training examples, each of which has one or more inputs and is represented by a feature vector, while the training data as a whole are represented by a matrix. In supervised learning ML models, the iterative optimization of an objective function lets the AI model learn a function that can be used to predict the output associated with new inputs [22], where the optimal function enables the model to determine the output for inputs that were not part of the training data [23]. In this case, the programmer, in collaboration with the designer, has more control over the obtained results; however, the creativity of the results and the possibility of using such models as form-finding methods in the design process are lower. This is not the case for text-to-image models, which employ unsupervised learning methods with more vagueness about the data sources and collection process, raising questions about the sourcing and referencing of the data, the selection criteria, and the annotation accuracy. These parameters contribute to copyright violation problems, as well as to possible bias or limitations in the annotation of these data. For example, in the case of Midjourney, a text-prompt image generator, the training data are collected from the web; hence, the question is whether these data are copyright-free and, if they are copyright-protected content, whether they are referenced properly. This is clearly not the case, as the model lacks any embedded referencing generator to properly reference the training data that contribute to the generation of the latent variables.
Since text-to-image models are unsupervised machine learning methods, in which the training data are extensively processed to extract their mathematical and logical patterns and produce new results, some would argue that it is not mandatory to reference the training data for copyright protection. However, the data mining conducted by the model may fail to produce genuinely new and original results. In supervised ML models, optimum performance occurs when the complexity of the hypothesis matches the complexity of the function underlying the data, but underfitting or overfitting can occur in both supervised and unsupervised ML models. Underfitting usually happens when the model cannot sufficiently capture the inherent structure of the data (e.g., producing results that have nothing to do with the given text), while overfitting occurs when the hypothesis (condition) is too complex [24]. Overfitting is the generation of an analysis that corresponds too closely or exactly to a particular set of data and therefore fails to predict future observations reliably [25], because the model begins to memorize the training data rather than learning to generalize from their structure. The number of model parameters, the amount of data, the compatibility of the model with the shape of the data, and the magnitude of model error compared with the expected level of noise in the data all contribute to the overfitting problem. As expressed by Occam's razor, any given complicated function is a priori less probable than any given simple function; if there is no significant improvement in training-data fitting to compensate for the increase in complexity, the new, more complex function "overfits" the data, and the overfitted function will probably perform worse than the simpler function on validation data outside the training dataset [26,27]. While overfitting is more likely to occur in supervised learning models than in unsupervised ones such as text-to-image models, it can still happen when the learning time is too long or when training examples are rare, causing the model to adjust to very specific random features of the training data. In this case, performance on the training examples still increases while performance on unseen data becomes worse, which means that overfitting can indeed occur in unsupervised models. From a design process point of view, overfitting is a two-faceted problem: the model fails to produce new, original results, and it increases the margin of copyright violation of the training data, especially for web-based datasets.
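A minimal sketch of how overfitting is commonly detected in practice is given below: training loss keeps falling while loss on held-out validation data starts rising. The model, data, and patience threshold are arbitrary assumptions for illustration.

```python
# Illustrative sketch of overfitting detection via validation monitoring and
# early stopping: stop when validation loss no longer improves even though
# training loss keeps decreasing (the model is memorizing the training data).
import torch
import torch.nn as nn

x_train, y_train = torch.randn(200, 10), torch.randn(200, 1)
x_val, y_val = torch.randn(50, 10), torch.randn(50, 1)

model = nn.Sequential(nn.Linear(10, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(500):
    model.train()
    opt.zero_grad()
    train_loss = loss_fn(model(x_train), y_train)
    train_loss.backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"Stopping at epoch {epoch}: validation loss no longer improves (likely overfitting).")
            break
```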
In addition to the copyright violation and authorship debate, other problems emerging from the integration of AI-generated images in the design process will be analyzed and evaluated in order to determine the role of AI text-to-image models in the creativity and authorship of the design process in biomaterials research-driven design. Thus, a brief introduction to Midjourney, one of the most popular AI text-to-image models, will be presented in comparison with other existing models such as Stable Diffusion and DALL-E, followed by two case studies of biomaterials research-driven design projects that employed the Midjourney model in the design process. Finally, design criteria for AI-aided design in the biomaterials research-driven design process will be proposed for a fair practice of integrating these models in the design process, along with proposed solutions for the copyright violation problems and further recommendations for integrating AI in the design process beyond its current role as a sketching or brainstorming tool.

2.3. AI Text-to-Image Model Comparison and Copyright Debate

Text-to-image models are used to generate images for digital art and design from text. Stable Diffusion is one such model, released in 2022 [28]. It uses a latent diffusion model (LDM) [29] and can be run on consumer hardware with a GPU of at least 8 GB of VRAM, unlike Midjourney and DALL-E, which are cloud-based [30].
In particular, the Stable Diffusion model comprises a variational autoencoder (VAE), a U-Net, and an optional text encoder. The VAE, as described previously, compresses the image from pixel space to a lower-dimensional latent space, and Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion [29]. The U-Net then denoises the output of forward diffusion in reverse to recover the latent representation, conditioned on the text prompt via a cross-attention mechanism [31]. Finally, the VAE decoder generates the final image by converting the representation back into pixel space. For training on text, the fixed, pre-trained CLIP (Contrastive Language–Image Pre-training) ViT-L/14 text encoder is used to convert text prompts to an embedding space [32].
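A minimal sketch of how these three components are exposed in practice is given below, assuming the open-source Hugging Face diffusers library, the publicly released Stable Diffusion v1.5 weights, and a CUDA GPU; the prompt is an arbitrary example.

```python
# Minimal sketch (assumed setup): loading Stable Diffusion and inspecting its
# CLIP text encoder, U-Net denoiser, and VAE, then generating one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

print(type(pipe.text_encoder))  # CLIP text encoder: prompt -> embedding space
print(type(pipe.unet))          # U-Net: iterative denoising in latent space
print(type(pipe.vae))           # VAE: latent space <-> pixel space

image = pipe("a chair grown from cancellous bone tissue, studio render").images[0]
image.save("bone_chair.png")
```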
The images and captions used to train Stable Diffusion were taken from a freely accessible dataset scraped from the web, containing 5 billion image–text pairs classified by language and filtered into separate datasets by resolution, by the predicted likelihood of containing a watermark, and by predicted aesthetic score [33]. An analysis of a 12-million-image sample of these data found that roughly 47% came from only 100 domains, with Pinterest the largest source (8.5%), followed by websites such as WordPress, Blogspot, Flickr, and Wikimedia Commons [34].
Similarly, DALL-E is another text-to-image model that utilizes a diffusion model conditioned on CLIP [35], trained on 400 million pairs of images with text captions scraped from the web as well.
Midjourney was released in July 2022 [36], but details of its text-prompt image generation model have not been published. Its training data were likewise scraped from the web, which implies that these data are unlikely to be entirely copyright-free.
Midjourney has been more successful in creating photorealistic and visually appealing images at higher resolution than its competitors. Stable Diffusion suffers from quality degradation when users attempt to generate images at resolutions higher than those of its training data [37], and training the model on higher-quality images would require consumer devices with larger VRAM to run it [38].
All three models can generate new images in multiple styles and from various viewpoints. In particular, DALL-E and Stable Diffusion can rearrange objects in images [35] and can correctly place design elements in novel compositions without explicit instruction [32,39]. Additionally, they allow prompts to partially alter existing images via inpainting and outpainting.
Stable Diffusion gives the designer more control over the results through various parameters covering sampling types, output image dimensions, seed values, and the scale value of the classifier-free guidance, which adjusts how closely the output image obeys the prompt. It also allows users to adjust the number of inference steps for the sampler [37] and to apply selective emphasis to the text prompt through front-end operations, for example by enclosing text-prompt keywords within brackets. All three models enable the modification of an already existing AI-generated image by providing a new text prompt [32].
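The sketch below shows how these control parameters typically appear in a generation call, again assuming the diffusers library, the public v1.5 weights, and a CUDA GPU; the prompt and parameter values are arbitrary examples.

```python
# Minimal sketch of the control parameters mentioned above: output dimensions,
# seed, number of sampler inference steps, and classifier-free guidance scale.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)   # fixed seed -> reproducible image
image = pipe(
    prompt="biomaterial chair concept, photorealistic render",
    height=512, width=512,            # output image dimensions
    num_inference_steps=30,           # sampler steps
    guidance_scale=7.5,               # how closely the image follows the prompt
    generator=generator,
).images[0]
image.save("biomaterial_chair.png")
```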
Despite these merits of Stable Diffusion, Midjourney has recently become the most popular text-to-image model in the design realm, especially in architecture and furniture design, while animation and digital art have been dominated by DALL-E. Furthermore, Midjourney's results have found their way into the intrinsic phases of the design process as a form-finding tool, and its founder has emphasized that designers use Midjourney for the rapid prototyping of design and artistic concepts [40]. Therefore, the copyright protection of designers' concepts is an alarming ethical concern, particularly for training datasets that contain copyrighted designs and artworks without proper referencing, as well as for protecting the copyright of the AI-generated results. In this regard, some Midjourney users have accused its developers of devaluing original creative work [41]. Stable Diffusion faced similar reactions related to authorship and copyright violation after granting its users complete freedom to use any images resulting from the model [42].
Despite arguments that visual styles and compositions are not subject to copyright, some graphic design entities have succeeded in protecting their copyrights and intellectual property in recognizable brand logos, which undermines these arguments. If a simple abstract graphic design such as a logo can be copyrighted, then, logically, any other design is equally deserving of copyright, especially in the architecture, furniture, and fashion design realms, where a design is not only a visual composition but involves interconnected, multi-disciplinary aspects such as functionality, social issues, sustainability, materiality, technology, and production. The harmful impact of copyright violation on the human designer in this case can therefore be fatal, not only by violating the principle of equal opportunity but also by establishing unfair design practices in which original and authentic designers might gradually lose commercial viability against AI-based competitors [43,44]. Furthermore, online communities and platforms for sharing, trading, and/or collaborating on prompts for AI-generated images [45,46], as well as social media, have worsened the problem through the rapid sharing of results that might be very similar to existing design projects that are still under development or whose designers do not have high media exposure. This forces the original designers to expose their design projects at early stages while trying to protect the copyrights of their novel ideas and designs.
Consequently, some metrics were developed to evaluate AI-generated images according to style diversity to prove the capacity of the AI model to produce new designs or art images [47]. Inception score (IS) is a common algorithmic metric for image diversity, based on the distribution of labels predicted by a pre-trained image classification model. Similarly, the related Fréchet inception distance compares the distribution of generated images and real training images according to features extracted by one of the final layers of a pre-trained image classification model.
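A minimal sketch of how the Fréchet inception distance is computed in practice is given below, assuming the torchmetrics package (with its image extras installed); the random tensors merely stand in for real training images and AI-generated images.

```python
# Illustrative sketch: Fréchet inception distance (FID) compares real and
# generated image distributions in the feature space of a pre-trained
# classifier; lower values mean the distributions are closer.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64)

real_images = torch.randint(0, 255, (32, 3, 299, 299), dtype=torch.uint8)
generated_images = torch.randint(0, 255, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)        # features of the real/training images
fid.update(generated_images, real=False)  # features of the generated images
print(float(fid.compute()))               # distance between the two distributions
```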
Nevertheless, the copyright debate is an issue inherited from machine learning models in general. Tom M. Mitchell defines the ML learning process as follows: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." [23]. However, this definition focuses only on the operational aspect rather than identifying a capacity for creative thinking comparable to human creativity. Breaking down Alan Turing's question "Can machines think?" into the question "Can machines do what we (as thinking entities) can do?" [48], the answer is "not yet", since our cognitive abilities as humans are based on interconnected networks of various signals, receptors, processing, and experiences, with an infinite amount of data related to each of them. Furthermore, even modern machine learning has only two objectives: to classify data based on models developed by humans and to predict future outcomes based on these models. This seems like a rather linear, abstract process in comparison with human cognition.
Despite these issues, many design and artistic entities have proposed AI art as enabling “new forms of artistic expression” [49] and as an “augmentation of human capability” [30] in fields of fast prototyping, generating inspirations, and generating textures and 3D objects. This will be judged in the following sections, which analyze and compare the roles of human and AI capacities of creativity in the biomaterials research-driven design process.

3. Materials and Methods: The Role of AI Text-to-Image Models in Biomaterials Research-Driven Design Process

In this section, the role of AI text-to-image models in biomaterials research-driven design is analyzed and evaluated using two case studies of biomaterials research-driven design aided by AI-generated images, developed earlier by the authors as biomaterials research projects. The case studies are examined through SWOT analysis to identify the strengths, weaknesses, opportunities, and threats of the AI text-to-image model at each phase of the design process in the category of biomaterials research-driven design, probing the deeper role of these models as generative form-finding tools in the design process, with particular focus on material research-driven design and on their strong role in rendering the physical properties and composition of biomaterials. The form-finding approach applying AI text-to-image models is an intended comparative criterion: the forms generated by the authors using Rhinoceros 3D + Grasshopper and plugins for generative algorithms following a biomathematical modeling process are compared with the forms generated by the AI text-to-image models to identify the role of human creativity that cannot be replaced by AI, especially in developing design concepts as a problem-solving approach drawing on interdisciplinary fields to achieve sustainability. This analysis also tackles the problem of design authorship and copyright regarding the use of these AI models in this design process, establishing the authorship and copyright of the design concepts fed to the AI text-to-image models. The results of these case studies inform the AI-aided design criteria proposed for the biomaterials research-driven design process to guarantee a fair practice of using these text-to-image models with respect to human authorship.

3.1. Case Study 1: The Bone Tissue Chair Set

This research project started in 2019 at the Institute for Biodigital Architecture and Genetics and was extended in 2021–2022 in collaboration with the Max Planck Institute of Colloids and Interfaces, Department of Biomaterials, Development of Mineralized Skeletal Materials research group, with the main objective of developing a bioactive, self-biomineralizing material from osteosarcoma cells with an augmented capacity for autonomous proliferation, differentiation, and increased spatial expansion and mineralization (solidification) over time [50]. The research questions the role of biomathematically generated geometry in boosting the biomineralization process of the bioengineered material, which is prepared from the minimum basic constituents. This material was first developed as a bioink that embeds the osteosarcoma cells in a GELMA (gelatin methacrylate) hydrogel and uses direct-extrusion 3D bioprinting to print differential growth patterns, to test how these patterns affect the viability of bone cells and enable them to proliferate and biomineralize over time [50]. The new research phase explores the formal aspects of the developed material, questioning what biocompatible 3D geometry would support this bioactive material's functionality and morphogenesis. The authors decided to employ advanced bioimaging and microscopy techniques on multiple scales from nano to macro to provide high-end characterization and identification of ovine bone tissue as a reference (thanks to its similarity to human bone), with its 12 levels of hierarchical structural motifs from fibril to cancellous bone. The project is conducted using focused ion beam scanning electron microscopy (FIB-SEM) and micro-CT (µCT) [51,52,53]. The project further employed biomathematical modeling using an AI-aided algorithm in Python for the augmentation and assembly of the generated microscopy images and 3D models to reconstruct high-resolution 3D models that can be 3D-bioprinted using the developed bioink. These high-definition 3D models of the biomimetic hierarchical structural motifs of bone tissue are intended to boost the phase shifting of the bioink by facilitating its biomineralization after printing, providing optimum interstitial spaces for medium circulation and interactions.
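The following hypothetical sketch, which is not the authors' actual Python pipeline, illustrates the general idea of such a workflow: stacking segmented microscopy slices (e.g., from FIB-SEM or µCT) into a 3D volume and extracting a surface mesh that could feed a downstream 3D modeling or bioprinting step. It assumes NumPy and scikit-image; the data, thresholds, and voxel spacing are made up.

```python
# Hypothetical illustration: microscopy slice stack -> binary volume -> mesh.
import numpy as np
from skimage import filters, measure

# Pretend stack of 100 grayscale slices, 256 x 256 pixels each.
slices = [np.random.rand(256, 256) for _ in range(100)]
volume = np.stack(slices, axis=0)                     # (z, y, x) intensity volume

threshold = filters.threshold_otsu(volume)            # separate the mineralized phase
binary = volume > threshold

# Marching cubes turns the segmented volume into a triangle mesh (vertices, faces)
# that could be exported to STL/OBJ for downstream 3D modeling or printing.
verts, faces, normals, values = measure.marching_cubes(
    binary.astype(float), level=0.5, spacing=(1.0, 0.5, 0.5)  # voxel size (assumed)
)
print(verts.shape, faces.shape)
```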
Interestingly, AI-ML models were already informed about various bone tissue hierarchical structural motifs and their morphological characteristics, and ML models had already learned to recognize bone tissue histology and cell types from the previous literature on bone tissue bioengineering and from high-end characterization microscopy studies that utilized AI-ML models to reconstruct their images [52,54]. Furthermore, in this research case, biomathematical modeling is applied only as an accentuating and augmenting tool, not as a form-finding tool, which places greater reliance on biomimicking the natural models of ovine bone tissue; these have been studied microscopically, and their morphological characteristics are widely available owing to the vitality of this research topic in the regenerative medicine and bioengineering fields, with high-impact studies published in scientific journals and editorials with profound databases, such as Nature, Science, and NCBI.
The authors previously employed Rhinoceros 3D and Grasshopper + Kangaroo and other plugins to develop a complex biomathematical model combining differential growth, reaction–diffusion, and solidification algorithms to simulate the chronological development of the morphology of the proposed bioactive self-biomineralized material while mineralizing from the gel state to the solidified state, as well as differential growth 3D patterns within the volume of a 5 cm Petri dish. However, the furniture or architectural design scale was a more advanced stage that the designers wanted to visualize using AI text-prompt-generated images. The diagram in Figure 1 compares the biomaterials research-driven design process between the complex biomathematical modeling conducted by the authors and the AI text-prompt-generated images in terms of the form-finding process and the representation of the physical characteristics of the biomaterial, the main subject of this study. Figure 1 shows that the AI-generated images achieved high-resolution renders that can be considered both the form-finding step and the end step of the design process, as renders that accurately express the diversity of the bone tissue hierarchical structural motifs in its spongy (cancellous) tissue, its dense (cortical) tissue, and its mineralized muscle–bone tissue. Furthermore, the AI text-to-image model achieved complex results that include both cancellous and cortical bone, or cortical bone and mineralized muscle tissue. The AI text-prompt-generated images succeeded in learning to synthesize from the text prompt (bone tissue + chair, brick, façade, or pavilion), combining the learned mathematical patterns and logics from the two distinct fields of anatomy/histology and furniture design, as exhibited in Figure 2. This demonstrates the complex cognitive capacity of the model to understand and discover patterns in both datasets (chairs and bone) and synthesize them into new knowledge, as well as to represent the material's detailed histological scale by learning to identify osteocytes, which are the bone decomposers, and osteoblasts, which are the bone builders, only from the word prompt "osteo cells", as exhibited in Figure 1. On the other hand, the AI-generated images (Figure 1 and Figure 2) might relate to the biomorphic surrealism and exaggeration of H. R. Giger's design style, as well as to Gaudí's and Calatrava's design styles, when compared to the biolearning process that the authors conducted by employing complex biomathematical modeling to develop the chronological biomineralization simulation of the engineered material from the bioink gel phase to the biomineralized material phase. This biolearning process performed by the authors, especially in the research's early stages, focused on the abstract structural morphology of the biomineralization process, expressed more as cancellous bone tissue and in a more regular pattern than a biomimicry pattern exhibiting the real heterogeneous distribution of mineralization in bone tissue, mainly to test the effect of the differential growth pattern on osteosarcoma cell viability, not to visualize the final design product as furniture, bricks, etc. Thus, the main objective focused on the material itself and its morphological development via biomineralization.
However, in the current phase, in which the research is focused on achieving biomimetic 3D models of hierarchical bone structural motifs informed by advanced microscopy, the AI-generated images seem to be closer to achieving a creative result of visualizing full design product models that are informed by the rich literature of bone tissue microscopical studies and reconstruction 3D modeling AI tools as well. While the biolearning design process developed by the authors is still in its infancy, it is starting to employ advanced microscopy techniques such as µCT and FIB-SEM to detect multi-scale hierarchical bone structural motifs from nano to macro.
The current study conducted a SWOT analysis of integrating AI-generated images in the biomaterials research-driven design process for the bone chair case study, as exhibited in Table 1. Figure 3 presents a quantitative analysis weighing the strengths and opportunities against the weaknesses and threats of employing AI text-to-image models in biomaterials research-driven design case study 1: bone tissue furniture and architecture. It also presents a statistical estimate of the effect of integrating the AI text-to-image model in the design process and in the biomaterials research field, separately. The AI text-to-image model Midjourney achieved success as a form-finding step and produced highly realistic renders; it was compatible with the biomimetic design approach; it showed good understanding, classification, and synthesis of training data from distinct disciplines into new knowledge while avoiding overfitting or underfitting problems; and it achieved flexibility in multi-scale understanding and representation as well as flexibility of the text prompt to be combined with new keywords. However, the weaknesses observed in the current practice of integrating AI text-to-image models in the design process outweigh the points of strength, mainly due to the complexity of the results without an accessible and accurate method of converting these 2D images into 3D models, as well as the copyright and authorship violation problems of the training data sources and the AI-generated images, caused by the lack of an automatic referencing generator providing the references and their probability of contribution to every AI-generated image. Each of these problems, however, opens an opportunity for further research: developing 2D-to-3D model conversion (e.g., through a point cloud) and designing an AI model for the automatic referencing of the produced AI images from their training data.
From Figure 3a, it can be concluded that the integration of AI text-to-image models in this specific case of biomaterials research-driven design (case study 1) achieved an approximately balanced performance, with its strengths (33%) slightly outweighing its weaknesses (30%); it likewise presented nearly equal opportunities and threats. However, in the quantitative–qualitative analysis of the impact of the integration of AI text-to-image models on each discipline separately, as exhibited in Figure 3b, the impact was more clearly positive: the advantages outweigh the disadvantages in the design process by a ratio of 2.25:1, and in the biomaterials research field by a ratio of 4:1. This indicates that AI text-to-image models were very useful in this case study.

3.2. Case Study 2: The Barcelona Pearl Cloud Furniture Set

This project was developed as an entry for the Biennale of Sharjah, UAE, in 2022 and was later presented at the 5th International Scientific Conference on Biomaterials and Nanomaterials [55] as well as at The Biomaterials World Forum in March 2022 [56]. The Barcelona pearl cloud furniture design project emerged from biomaterials research aimed at developing an iridescent material with physical characteristics similar to natural pearls but generated from a seashell-based biocomposite. Focusing on material sustainability by recycling seashells from food waste and converting them into valuable products such as furniture, the project challenged digital and robotic fabrication methods and tools due to its complex interwoven lattice-based forms, as well as the capacity for fine-tuning the rheological properties of the biocomposite material while scaling it up in both phases of the project. The first phase of the project focused on synthesizing a seashell-based biocomposite material with physical properties similar to natural pearls, while the second phase proposed customized pearl farming to produce large pearls to be assembled on the chair structure.
The pearl cloud chair form was generated through a form-finding process employing a biomathematical model abstracting the physical–chemical reaction of natural pearl formation inside the oyster. The process begins with a foreign substance sliding into the oyster between the mantle and the shell, which irritates the mantle; the mantle is responsible for producing the oyster's shell and creates the nacre that lines the inside of the shell. The oyster's natural reaction is to cover the irritant with layers of the same nacre substance to protect itself, eventually forming a pearl [57]. This reaction was biomathematically modeled through a reaction–diffusion algorithm in 3D space, abstracting the diffusion of the chemical reaction that covers the foreign body with nacre layers inside the 3D space of the oyster's shell. This created positive and negative attractor domains resulting from the diffusion domains (distribution) of the nacre chemical substance. The positive domains were later translated into polylines that produced the intricately interwoven lattice form of the Barcelona pearl cloud chair set. This case study differs from the previous one since the objective is not biomimetic form but more complex biomathematical, biobehavioral form. Nevertheless, the current case study also employed a form-finding methodology in comparing the human-generated design and the AI text-to-image-generated renders, since form making would imply using the image modification method as an image-to-image model, which is not the case in the current research. Furthermore, comparing the creative cognitive capacities of a human designer and an AI image generator can only be accomplished by comparing their form-finding results using the same keywords of the design concept (text prompt), to prove that AI cannot replace the human designer. In the current case study, the AI produced high-resolution, photorealistic renders of the iridescent nacre of the pearls in various ways; however, it failed to capture the histological details of the pearls' material composition, as exhibited in Figure 4. The prompt "mantle cells of pearl" did not sufficiently represent the microscopic-level details of pearls (unlike the SEM images exhibited in [58]). Furthermore, the AI-generated images using the prompts "pearl cloud chair" and "pearl seashell chair" failed to achieve relevance to the key formal features of the biomathematical, biobehavioral model developed by the authors, with an estimated relevance of only 10–30%, considering that the pearl cloud furniture design idea is based on the assembly of multiple pearls into continuous lattices, paths, or surfaces (Figure 4).
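As a simplified illustration of the kind of reaction–diffusion logic described above, the following 2D Gray–Scott sketch in NumPy shows how diffusion domains emerge around a seeded "irritant" region. The authors' actual model was built in Rhinoceros 3D/Grasshopper and run in 3D space, so the parameters, grid, and threshold here are illustrative assumptions only.

```python
# Illustrative 2D Gray-Scott reaction-diffusion sketch: diffusion domains grow
# around a seeded region, analogous to nacre spreading over an irritant.
import numpy as np

n, steps = 128, 5000
Du, Dv, feed, kill = 0.16, 0.08, 0.060, 0.062   # diffusion rates and reaction constants

U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50          # seed an "irritant" region
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(Z):
    # Discrete Laplacian with periodic boundaries: how much a cell differs
    # from its four neighbours (the diffusion term).
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + feed * (1 - U)
    V += Dv * laplacian(V) + uvv - (feed + kill) * V

# Cells where V exceeds a threshold mark "positive attractor domains" that
# could later be translated into lattice polylines.
domains = V > 0.2
print(domains.sum(), "cells in positive domains")
```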
In this case study, despite the many iterations that the authors performed by multiple crossovers and scale-ups of the latent variables, the AI image generator failed to create distinct categories of the formal logic from the used text prompts (Figure 5), due to the novelty of the design concept and its exaggeration in proposing the usage of a precious and high-value material such as pearls for furniture design. This is unlike the previous case study of bone chairs that were already tackled by some artistic, heritage, and vernacular styles that used bones of animals in furniture design, for example, African indigenous tribes, surrealist artists and designers such as Giger, and even biomorphic designers. Furthermore, the current case does not intend to focus on the microscopic and fractal level of the natural pearl composition that provides a larger niche for formal manipulation and variation based on the recognition of a mathematical pattern by the AI image generator model as was the case of the bone tissue chairs, but to provide a tessellation or assembly method of these pearls following a complex biobehavioral, biomathematical logic of 3D reaction–diffusion in space. The target was the outer appearance, not the microscopic-level study, unlike the previous case study of bone chairs. Thus, in the current case, the AI image generator failed to coincide with the designers’ vision or tackle the complex form-finding methods that were applied by them. Some would argue that the text prompt used was not exactly reflecting the intentions of the designer of the biobehavioral logic of pearl formation; however, this is an invalid argument for two reasons: (1) For a fair comparison between AI and human cognitive capacities and imagination, the objective function must be as abstract as possible, and the search and approach domain must be freed, so if the used prompt was a keyword that initiated the design concept for the human designer, why could it not indicate the same cognitive capacities for AI? This reveals the significant difference in the cognitive complexity levels between humans and AI. (2) Using complex multi-word text prompts complicates the hypothesis for the text-to-image model, probably causing underfitting or overfitting of results. This was the case when the authors used reaction–diffusion pearl formation as a prompt.
Furthermore, when the authors used Midjourney, which allows for the free circulation and reuse of prompts that describe the authentic design concept in an open access fashion, the design concept and its prompts were exposed to other users that were copying the concept and sharing it, resulting in the copyright violation of the original design concept. Table 2 exhibits the analysis of these weaknesses and strengths of the AI image generator integration in the design case. Figure 6 exhibits a quantitative analysis weighing the strengths and opportunities in comparison to the weaknesses and threats of employing AI text-to-image models in the biomaterials research-driven design case study 2: the pearl cloud chair seashell-based biocomposite material from food waste. It also exhibits the statistical estimation of the AI text-to-image model integration in the design process and in the biomaterials research field in this case as well.
From Figure 6a, it can be concluded that in the integration of AI text-to-image models in this specific case of biomaterials research-driven design (case study 2), the weaknesses (48%) drastically outweigh the strengths and opportunities combined (33%). This indicates that the integration of AI text-to-image models in this design case study was not successful. On the other hand, the quantitative–qualitative analysis of the impact of AI text-to-image models on each discipline separately shows that the advantages outweigh the disadvantages in the biomaterials research field by a ratio of 2:1, and that AI text-to-image models achieve slightly more advantages than disadvantages in the design process, by a ratio of 4:3, as exhibited in Figure 6b. Overall, however, the AI text-to-image models did not perform sufficiently well in this design case study.

4. Results: AI-Aided Design Criteria for Biomaterials Research-Driven Design Process

From the previous analysis of the two different cases of the biomaterials research-driven design, one being a biomimetic approach and the other being biolearning, it can be said that the AI text-to-image models’ role in the biomaterials research-driven design process has been limited to the brainstorming, form-finding, rendering, and visualization phases in the design process. Thus far, these AI models have not contributed deeply to the design’s technical drawings, simulation, optimization, and fabrication aspects. The following figure (Figure 7) exhibits a diagram of the AI text-to-image models’ role in the biomaterials research-driven design process in its various phases.
Thus, the authors have formulated design criteria for the AI-aided biomaterials research-driven design process and present them in Table 3 below.

5. Conclusions and Authorship Decision

To analyze the possible integration of AI-generated images in the design process based on biomaterials research and to judge the authorship and copyright debate, the two case studies of biomaterials research-driven design exhibited here (the bone tissue chair set and the Barcelona pearl cloud chair set) demonstrate that the researcher/designer is the author of the AI-generated images, and that these images can be more useful as rendering tools than as form-finding tools in the case of advanced design with biomathematical complexity in modeling. AI-generated images depend on the designer's background, experience, imagination, complexity, and creativity through the feeding of the AI-ML models with the designer's concepts and ideas. The novelty of a research or design idea is what establishes its originality and copyright, not the technicality of visualizing this idea. On the other hand, the current practice of AI-generated images depends on the previous state of the art and literature in various fields of science and design, especially high-end visual characterization techniques such as advanced microscopy (e.g., FIB-SEM and µCT) that increase the accuracy and validity of the training data. However, the lack of an accurate embedded automatic referencing generator contributes to the copyright violation of these data sources. Thus, the authors proposed possible methods for an automatic referencing generator (ARG) auxiliary model, while recommending further integration of AI text-to-image tools in the design process through the conversion of 2D images to 3D objects via point clouds and 3D CNNs.

Author Contributions

Conceptualization, Y.K.A. and A.T.E.; methodology, Y.K.A. and A.T.E.; software, Y.K.A. and A.T.E.; validation, Y.K.A. and A.T.E.; formal analysis, Y.K.A. and A.T.E.; investigation, Y.K.A. and A.T.E.; resources, Y.K.A. and A.T.E.; data curation, Y.K.A. and A.T.E.; writing—original draft preparation, Y.K.A. and A.T.E.; writing—review and editing, Y.K.A. and A.T.E.; visualization, Y.K.A. and A.T.E.; supervision, Y.K.A. and A.T.E.; project administration, Y.K.A. and A.T.E.; funding acquisition, Y.K.A. and A.T.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, V.; Vermeulen, J.; Fitzmaurice, G.; Matejka, J. 3DALL-E: Integrating Text-to-Image AI in 3D Design Workflows. arXiv 2022, arXiv:2210.11603. [Google Scholar]
  2. Estevez, A.T.; Abdallah, Y.K. AI to Matter-Reality: Art, Architecture & Design; iBAG-UIC Barcelona: Barcelona, Spain, 2022; p. 260. [Google Scholar]
  3. Al-Kharusi, G.; Dunne, N.J.; Little, S.; Levingstone, T.J. The Role of Machine Learning and Design of Experiments in the Advancement of Biomaterial and Tissue Engineering Research. Bioengineering 2022, 9, 561. [Google Scholar] [CrossRef]
  4. Rickert, C.A.; Lieleg, O. Machine learning approaches for biomolecular, biophysical, and biomaterials research. Biophys. Rev. 2022, 3, 021306. [Google Scholar] [CrossRef]
  5. Agnese, J.; Herrera, J.; Tao, H.; Zhu, X. A survey and taxonomy of adversarial neural networks for text-to-image synthesis. WIREs Data Min. Knowl. Discov. 2020, 10, e1345. [Google Scholar] [CrossRef] [Green Version]
  6. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  7. Luc, P.; Couprie, C.; Chintala, S.; Verbeek, J. Semantic Segmentation Using Adversarial Networks. NASA ADS. Available online: https://ui.adsabs.harvard.edu/abs/2016arXiv161108408L (accessed on 1 November 2016).
  8. Zhang, H.; Shinomiya, Y.; Yoshida, S. 3D MRI Reconstruction Based on 2D Generative Adversarial Network Super-Resolution. Sensors 2021, 21, 2978. [Google Scholar] [CrossRef]
  9. Jian, Y.; Yang, Y.; Chen, Z.; Qing, X.; Zhao, Y.; He, L.; Chen, X.; Luo, W. PointMTL: Multi-Transform Learning for Effective 3D Point Cloud Representations. IEEE Access 2021, 9, 126241–126255. [Google Scholar] [CrossRef]
  10. Donahue, J.; Krähenbühl, P.; Darrell, T. Adversarial feature learning. arXiv 2016, arXiv:1605.09782. [Google Scholar]
  11. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938. [Google Scholar] [CrossRef] [Green Version]
  12. Tealab, A. Time series forecasting using artificial neural networks methodologies: A systematic review. Futur. Comput. Inform. J. 2018, 3, 334–340. [Google Scholar] [CrossRef]
  13. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [Google Scholar] [CrossRef] [PubMed]
  14. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
  15. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020. [Google Scholar] [CrossRef]
  16. Sequence Modeling with Neural Networks (Part 2): Attention Models. Indico Data. Available online: https://indicodata.ai/blog/sequence-modeling-neural-networks-part2-attention-models/ (accessed on 18 April 2016).
  17. Gu, S.; Chen, D.; Bao, J.; Wen, F.; Zhang, B.; Chen, D.; Yuan, L.; Guo, B. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10696–10706. [Google Scholar]
  18. Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851. [Google Scholar]
  19. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; Chen, M. Hierarchical text-conditional image generation with clip latents. arXiv 2022, arXiv:2204.06125. [Google Scholar]
  20. Deep Neural Networks for Acoustic Modeling in Speech Recognition–AI Research. 2015. Available online: http://airesearch.com/ai-research-papers/deep-neural-networks-for-acoustic-modeling-in-speech-recognition/ (accessed on 23 October 2015).
  21. Analyst, J.K. GPUs Continue to Dominate the AI Accelerator Market for Now. Information Week. 2019. Available online: https://www.informationweek.com/ai-or-machine-learning/gpus-continue-to-dominate-the-ai-accelerator-market-for-now (accessed on 27 November 2019).
  22. Chen, L.-P. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar: Foundations of machine learning, second edition. Stat. Pap. 2019, 60, 1793–1795. [Google Scholar] [CrossRef]
  23. Mitchell, T.M. Machine Learning; Mcgraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  24. Alpaydin, E. Introduction to Machine Learning; The Mit Press: Cambridge, MA, USA, 2014. [Google Scholar]
  25. Oxford Languages|The Home of Language Data. Available online: https://en.oxforddictionaries.com/definition/overfitting (accessed on 25 November 2022).
  26. Hawkins, D.M. The Problem of Overfitting. J. Chem. Inf. Comput. Sci. 2003, 44, 1–12. [Google Scholar] [CrossRef]
  27. Tetko, I.V.; Livingstone, D.J.; Luik, A.I. Neural network studies. 1. Comparison of overfitting and overtraining. J. Chem. Inf. Comput. Sci. 1995, 35, 826–833. [Google Scholar] [CrossRef]
  28. Diffuse The Rest—A Hugging Face Space by Huggingface-Projects. Available online: https://huggingface.co/spaces/huggingface-projects/diffuse-the-rest (accessed on 22 December 2022).
  29. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. Machine Vision & Learning Group. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10684–10695. Available online: https://ommer-lab.com/research/latent-diffusion-models/ (accessed on 22 December 2022).
  30. Vincent, J. Anyone Can Use This AI Art Generator—That’s the Risk. The Verge. Available online: https://www.theverge.com/2022/9/15/23340673/ai-image-generation-stable-diffusion-explained-ethics-copyright-data (accessed on 15 September 2022).
  31. Alammar, J. The Illustrated Stable Diffusion. 2022. Available online: https://jalammar.github.io/illustrated-stable-diffusion/ (accessed on 25 November 2022).
  32. Stable Diffusion. GitHub. Available online: https://github.com/CompVis/stable-diffusion (accessed on 30 September 2022).
  33. Baio, A. Exploring 12 Million of the 2.3 Billion Images Used to Train Stable Diffusion’s Image Generator. Available online: https://waxy.org/2022/08/exploring-12-million-of-the-images-used-to-train-stable-diffusions-image-generator/ (accessed on 30 August 2022).
  34. Ivanovs, A. Stable Diffusion: Tutorials, Resources, and Tools. Stack Diary. 2022. Available online: https://stackdiary.com/stable-diffusion-resources/ (accessed on 25 November 2022).
  35. Johnson, K. OpenAI Debuts DALL-E for Generating Images from Text. VentureBeat. 2021. Available online: https://venturebeat.com/business/openai-debuts-dall-e-for-generating-images-from-text/ (accessed on 5 January 2021).
  36. Inside Midjourney, The Generative Art AI That Rivals DALL-E. Available online: https://www.vice.com/en/article/wxn5wn/inside-midjourney-the-generative-art-ai-that-rivals-dall-e (accessed on 5 December 2022).
  37. Stable Diffusion with Diffusers. Available online: https://huggingface.co/blog/stable_diffusion (accessed on 25 November 2022).
  38. Smith, R. NVIDIA Quietly Launches GeForce RTX 3080 12GB: More VRAM, More Power, More Money. Available online: https://www.anandtech.com/show/17204/nvidia-quietly-launches-geforce-rtx-3080-12gb-more-vram-more-power-more-money (accessed on 22 December 2022).
  39. Meng, C.; Song, Y.; Song, J.; Wu, J.; Zhu, J.Y.; Ermon, S. Sdedit: Image synthesis and editing with stochastic differential equations. arXiv 2021, arXiv:2108.01073. [Google Scholar]
  40. Claburn, T. Holz, Founder of AI Art Service Midjourney, on Future Images. Available online: https://www.theregister.com/2022/08/01/david_holz_midjourney/ (accessed on 22 December 2022).
  41. Midjourney v4 Greatly Improves the Award-Winning Image Creation AI. TechSpot. Available online: https://www.techspot.com/news/96619-midjourney-v4-greatly-improves-award-winning-image-creation.html (accessed on 5 December 2022).
  42. Cai, K. Startup Behind AI Image Generator Stable Diffusion Is in Talks to Raise at a Valuation Up to $1 Billion. Forbes. 2022. Available online: https://www.forbes.com/sites/kenrickcai/2022/09/07/stability-ai-funding-round-1-billion-valuation-stable-diffusion-text-to-image/ (accessed on 25 November 2022).
  43. Heikkilä, M. This Artist Is Dominating AI-Generated Art. And He’s Not Happy about it. MIT Technology Review. Available online: https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/ (accessed on 16 September 2022).
  44. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E.; Ghasemipour, S.K.S.; Ayan, B.K.; Mahdavi, S.S.; Lopes, R.G.; et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. arXiv 2022, arXiv:2205.11487. [Google Scholar]
  45. Robertson, A. How DeviantArt Is Navigating the AI Art Minefield. The Verge. Available online: https://www.theverge.com/2022/11/15/23449036/deviantart-ai-art-dreamup-training-data-controversy (accessed on 15 November 2022).
  46. DeviantArt’s AI Image Generator Aims to Give More Power to Artists. Popular Science, 12 November 2022. Available online: https://www.popsci.com/technology/deviantart-ai-generator-dreamup/ (accessed on 12 November 2022).
  47. Frolov, S.; Hinz, T.; Raue, F.; Hees, J.; Dengel, A. Adversarial text-to-image synthesis: A review. Neural Netw. 2021, 144, 187–209. [Google Scholar] [CrossRef]
  48. Harnad, S. The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence (PUBLISHED VERSION BOWDLERIZED). In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer; Springer: Berlin/Heidelberg, Germany, 2008; pp. 23–66. ISBN 13: 978-1-4020-6708-2, e-ISBN-13: 978-1-4020-6710-5. [Google Scholar]
  49. Dazed. AI Is Reshaping Creativity, and Maybe That’s a Good Thing. 2022. Available online: https://www.dazeddigital.com/art-photography/article/56770/1/cyborg-art-ai-text-to-image-art-reshaping-creativity-maybe-thats-not-a-bad-thing (accessed on 18 August 2022).
  50. Estevez, A.T. Biodigital Architecture: FELIXprinters and iBAG-UIC to Test Living Biomaterials for Sustainable Architecture. 3DPrint.com|the Voice of 3D Printing/Additive Manufacturing. Available online: https://3dprint.com/276251/biodigital-architecture-felixprinters-and-ibag-uic-to-test-living-biomaterials-for-sustainable-architecture/ (accessed on 3 December 2020).
  51. Wittig, N.K.; Østergaard, M.; Palle, J.; Christensen, T.E.K.; Langdahl, B.L.; Rejnmark, L.; Hauge, E.-M.; Brüel, A.; Thomsen, J.S.; Birkedal, H. Opportunities for biomineralization research using multiscale computed X-ray tomography as exemplified by bone imaging. J. Struct. Biol. 2021, 214, 107822. [Google Scholar] [CrossRef]
  52. Buss, D.J.; Kröger, R.; McKee, M.D.; Reznikov, N. Hierarchical organization of bone in three dimensions: A twist of twists. J. Struct. Biol. X 2022, 6, 100057. [Google Scholar] [CrossRef]
  53. Jia, Z.; Deng, Z.; Li, L. Biomineralized Materials as Model Systems for Structural Composites: 3D Architecture. Adv. Mater. 2022, 34, 2106259. [Google Scholar] [CrossRef]
  54. Tang, T.; Landis, W.; Raguin, E.; Werner, P.; Bertinetti, L.; Dean, M.; Wagermaier, W.; Fratzl, P. A 3D Network of Nanochannels for Possible Ion and Molecule Transit in Mineralizing Bone and Cartilage. Adv. NanoBiomed Res. 2022, 2, 2100162. [Google Scholar] [CrossRef]
  55. 5th International Scientific Conference on Biomaterials and Nanomaterials|Edinburgh-UK|Mar 2022|STATNANO. 2022. Available online: https://statnano.com/event/3016/5th-International-scientific-conference-on-Biomaterials-and-Nanomaterials#ixzz7lN63BGDb (accessed on 25 November 2022).
  56. BioMat|Biomaterials World Forum|Continuum Forums. 2022. Available online: https://www.continuumforums.com/biomaterials-world-forum/ (accessed on 25 November 2022).
  57. Dg, D. Process of Formation of Pearl in Molluscs. Bioscience. Available online: https://www.bioscience.com.pk/topics/zoology/item/870-process-of-formation-of-pearl-in-molluscs (accessed on 22 December 2022).
  58. Pérez-Huerta, A.; Cuif, J.-P.; Dauphin, Y.; Cusack, M. Crystallography of calcite in pearls. Eur. J. Miner. 2014, 26, 507–516. [Google Scholar] [CrossRef]
  59. Eberle, O.; Buttner, J.; Krautli, F.; Muller, K.-R.; Valleriani, M.; Montavon, G. Building and Interpreting Deep Similarity Models. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1149–1161. [Google Scholar] [CrossRef]
  60. Li, Y.; Zhang, Z.; Liu, B.; Yang, Z.; Liu, Y. ModelDiff: Testing-based DNN similarity comparison for model reuse detection. In Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis, New York, NY, USA, 11–17 July 2021; pp. 139–151. [Google Scholar]
  61. Liu, Z.; Sun, L.; Zhang, Q. High Similarity Image Recognition and Classification Algorithm Based on Convolutional Neural Network. Comput. Intell. Neurosci. 2022, 2022, 2836486. [Google Scholar] [CrossRef]
  62. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  63. Alidoost, F.; Arefi, H.; Tombari, F. 2D Image-To-3D Model: Knowledge-Based 3D Building Reconstruction (3DBR) Using Single Aerial Images and Convolutional Neural Networks (CNNs). Remote Sens. 2019, 11, 2219. [Google Scholar] [CrossRef]
  64. Han, X.-F.; Laga, H.; Bennamoun, M. Image-Based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1578–1604. [Google Scholar]
  65. Xu, C.; Yang, S.; Galanti, T.; Wu, B.; Yue, X.; Zhai, B.; Zhan, W.; Vajda, P.; Keutzer, K.; Tomizuka, M. Image2Point: 3D Point-Cloud Understanding with 2D Image Pretrained Models. arXiv 2021, arXiv:2106.04180. [Google Scholar]
Figure 1. The bone chair biomineralization research project: Comparison of the biomaterials research-driven design process between biolearning biomathematical modeling by the authors and the biomimetic approach using AI text-prompt-generated images.
Figure 2. The bone chair biomineralization research project: AI text-prompt-generated latent variables of bone tissue chair variations (between cancellous, cortical, and mineralized muscle tissue). They exhibit success in learning the data patterns from two distinct disciplines (histology/anatomy and furniture design), owing to the sufficient training data available in the previous literature, especially high-end advanced microscopy characterization studies of bone tissue hierarchical structural motifs.
Figure 3. The bone chair biomineralization research project: statistical analysis of the SWOT analysis of the role of AI text-to-image model integration in the biomaterials research-driven design process. (a) Statistical estimation of weights of strengths, weaknesses, opportunities, and threats from the SWOT analysis. (b) Statistical estimation of the impact of AI text-to-image models on the design process and the biomaterials research field.
Figure 4. The pearl cloud chair seashell-based biocomposite material from food waste: comparison of the biomaterials research-driven design process between biolearning biomathematical modeling by the authors and the form-finding process using AI text-prompt-generated images.
Figure 5. The pearl cloud chair seashell-based biocomposite material from food waste: the limited formal categories of latent variables generated by the AI image generator Midjourney.
Figure 6. The pearl cloud chair seashell-based biocomposite material from food waste: statistical analysis of the SWOT analysis of the role of AI text-to-image model integration in the biomaterials research-driven design process. (a) Statistical estimation of weights of strengths, weaknesses, opportunities, and threats from the SWOT analysis. (b) Statistical estimation of the impact of AI text-to-image models on the design process and the biomaterials research field.
Figure 7. The role of AI text-to-image models in the biomaterials research-driven design process.
Table 1. The bone chair biomineralization research project: SWOT analysis of integrating AI text-to-image model Midjourney in the biomaterials research-driven design process.
SWOT Analysis of the AI Text-to-Image Models' Role in the Biomaterials Research-Driven Design Case 1: Bone Tissue Architecture/Furniture Set.
Evaluation Analysis | Role in Design Process | Role in Biomaterials Research
Strength Points
  • A form-finding tool exploiting the AI model's capacity to classify and identify data, extract their mathematical patterns, and synthesize them into new knowledge.
  • Useful for form-finding in biomimetic formal design, since the machine learning process relies on a clearer correspondence between the training data and the results, avoiding overfitting or underfitting problems.
  • A high-resolution photorealistic rendering tool for the design proposals.
  • Successful multi-scale detailed representation of the material's physical and morphological characteristics (cell-tissue-product).
  • Similarity between the AI-generated images of material details and the renders resulting from the authors' complex biomathematical modelling process.
  • Successful crossover of training data from different disciplines (anatomy/histology in medicine, and furniture design).
  • Rapid results in design and proof-of-concept visualization in biomaterials research-driven design.
  • Flexible text prompting, allowing various combinations around the same base prompt: bone tissue + chair, brick, pavilion, etc.
  • Variety of latent variables (results) from the same text prompt, 'Bone Tissue Chair', accurately representing various hierarchical structural motifs of bone tissue (cancellous, cortical, and mineralized muscle tissue).
- Rapid sketching and brainstorming phase.
- Form-finding tool in the design form generation phase.
- Bottom-up biomimetic design form generation phase.
- Rendering, visualization, and presentation phase.
- Complex form generation capacity based on biomathematical modelling logic.
- Microscopic texture and physical characteristics of the developed biomaterial.
- Successful prediction of multidisciplinary complex form generation driven by combining biomaterials research topics with design elements.
- Accurate prediction and form generation based on anatomical and histological references, thanks to the advanced and rich literature on bone tissue regeneration research.
Weakness Points
  • The AI-generated images are sometimes cropped, which limits the full 3D representation of the design model (its different views and perspectives) and thus the full understanding of the form in the design process.
  • The increased number of upscales and iterations of the latent variants needed to obtain a complete (uncropped) model view, which is time-consuming and costly (in the case of freemium AI text-to-image models such as Midjourney, used in this case study).
  • The lack of efficient software to translate the AI-generated images into 3D models that can be fabricated by digital fabrication technologies (e.g., 3D printing).
  • The formal complexity of the AI-generated images hinders the ability to handle them in 3D modelling/parametric software (e.g., Rhinoceros + Grasshopper, 3ds Max, etc.).
  • The lack of an auxiliary automatic referencing generator model to reference the training data of the AI text-to-image model violates other researchers'/designers' copyright, given the possible visual similarity between the AI-generated images and their designs.
  • The unprotected copyrights of the AI-generated images and their text prompts on some open-source platforms of AI text-to-image models, which leads to copyright violations by the developers and other users of these platforms, who can use other designers'/researchers' design concepts and ideas without permission.
  • Forcing researchers/designers who use AI-generated images to publish their research-driven design projects rapidly, at an early stage, to protect the novelty of their ideas, owing to the open-source nature of AI text-to-image models; this puts stress on researchers and hinders the quality of the published scientific literature and the developing state of the art.
  • Commercializing the design process, since the developers of these models support and feature only high-exposure, high-capital entities (designers, researchers, firms, etc.) to increase their own exposure and profit, sacrificing design quality, value, and aesthetics and violating the originality and novelty rights of the original authors of the text prompts and their latent variables.
- Not efficient in the design form technical study and optimization phase.
- Not efficient in design file preparation for the digital fabrication phase.
- Hinders the most important design phase, building the design concept, owing to copyright violation, and degenerates the design literature by focusing on profit while sacrificing design quality and value.
- Hinders the quality and value of biomaterials research-driven design, owing to the stress of rapid publishing at early research phases to protect research novelty and design copyrights.
Opportunities
  • Opening new interdisciplinary research-driven design fields focused on material synthesis, morphology, and physical/chemical characteristics in design inspired and aided by AI text-to-image models.
  • Encouraging advancement in AI machine learning models that translate the 2D images resulting from AI text-to-image models into 3D design models enabling digital fabrication (e.g., 3D printing), for instance via point-cloud models, which would revolutionize and facilitate the design process.
  • Emphasizing collaborative material research-based design with high-end characterization methods such as advanced microscopy (e.g., µCT, FIB-SEM, and MRI), as well as the machine learning algorithms employed to reconstruct their results (in this case study, µCT and FIB-SEM were used in the reconstruction and visualization of the ovine femur bone).
  • Enriching the state of the art of the research-based design process, thanks to the rapid and varied results of AI-generated images, while increasing the interaction and sharing of knowledge among artists, designers, researchers, and programmers.
  • Opening new research horizons for developing automatic referencing generator models attached to/embedded in AI-generated images, with an accurate probability distribution of visual similarity to the training data used; this would give more insight into and control over the unsupervised learning process and consequently contribute to the optimization of these text-to-image models.
- Enriching the design state of the art for the search and data collection phase in the design process.
- Augmenting the integration of multidisciplinary research in the design concept phase.
- Augmenting the role of AI classification models in controlling the automatic referencing process of the AI-generated images.
- Encouraging research in developing 2D-to-3D AI models for design file technical optimization and digital fabrication technology.
- Encouraging research in the integration of AI pattern recognition in advanced high-end characterization imaging methods such as advanced microscopy (e.g., µCT, FIB-SEM, and MRI).
Threats
  • Vagueness of the training data sources, resulting in authorship and copyright violation of the original authors of these data.
  • Copyright and authorship violation of the authors of AI-generated images due to the free, unpermitted circulation (reuse) of prompts and results on the open platforms and communities of these AI text-to-image models (e.g., Midjourney).
  • Threats to design originality, novelty, and quality arising from the obligation to rapidly share and publish the obtained results in an attempt to protect the authorship of novel design/research concepts and ideas.
  • Misleading judgment of the authorship of AI-generated images, which might result in human artists'/designers' unemployment.
  • Rejection of AI-aided design practice as a reaction to perceiving AI as a competitor rather than a helper.
Table 2. The pearl cloud chair seashell-based biocomposite material from food waste: SWOT analysis of integrating AI text-to-image model Midjourney in the biomaterials research-driven design process.
SWOT Analysis of the AI Text-to-Image Models' Role in the Biomaterials Research-Driven Design Case 2: Pearl Cloud Furniture Set.
Evaluation Analysis | Role in Design Process | Role in Biomaterials Research
Strength Points
  • Rapid results in representing alternative design varieties in biomaterials research-driven design for macro-/meso-scale outer physical characteristics.
  • A high-resolution photorealistic rendering tool for the design latent variables, expressing the highly iridescent appearance of the pearls.
  • Sketching and drafting phase.
  • Presentation, rendering, and visualization phase.
  • Macro-scale physical properties representation.
Weakness Points
  • Failure to achieve similarity between the AI-generated images and the design generated by algorithmic-aided design from the complex biobehavioural biomathematical logic of 3D reaction-diffusion in the pearls design developed by the authors, indicating that AI image generators cannot synthesize developed knowledge beyond the first, direct level of meaning of the text prompts and training-image annotations.
  • Unsuitable for form-making or top-down design methodologies, especially when multiple words are used to describe the design concept in the text prompt.
  • Unsuitable for multi-objective design processes such as the current case study, which addressed developing a sustainable biocomposite material, advancing digital fabrication for 3D printing of the lace-based interwoven design form, and the tessellation of the pearls.
  • The complete dependence on training data with direct interpretation reduces the design concept's complexity and leads to underfitted results.
  • The cropped AI-generated images limit the full 3D representation of the design model and hinder the conversion of the 2D image into a 3D model for 3D printing or digital fabrication.
  • The increased number of upscales and iterations of the latent variants needed to obtain a complete model view, which is time-consuming and costly.
  • The lack of an auxiliary automatic referencing generator model for referencing the training data of the AI text-to-image model violates other researchers'/designers' copyright, given the possible visual similarity between the AI-generated images and their designs (as in this case study: the pearl cloud chair design was published by the authors themselves before the release of text-to-image models and was later copied in concept by a user of the open-source Midjourney community for a furniture design project using a text prompt containing the words 'pearl furniture').
  • The unprotected copyrights of the AI-generated images and their text prompts on AI text-to-image platforms and forums, resulting in copyright violations by the developers and other users of these platforms, who can use other designers'/researchers' design concepts and ideas without permission.
  • Forcing researchers/designers to publish their research-driven design projects rapidly, at an early stage, out of concern that their design concepts will be copied or violated and in order to protect their research novelty.
  • Commercializing the design process through AI text-to-image models, owing to the selective and biased support and featuring of works generated by well-known or high-impact designer profiles and capital entities; this hinders equal opportunities in design practice, promoting subjective evaluation and criticism based on economic status while sacrificing design quality, value, and aesthetics.
  • Not suitable for the integration of complex biobehavioural logics in form generation.
  • Limited design complexity.
  • Suitable only for the form-finding, not the form-making, design approach.
  • The complexity of multifaceted biomaterials research-driven design cannot be addressed, owing to overfitting or failing-to-fit problems when using complex prompts.
Opportunities
  • Opening new interdisciplinary research-driven design fields focused on material synthesis, morphology, and physical/chemical characteristics in design inspired and aided by AI text-to-image models.
  • Emphasizing collaborative material research-based design with high-end characterization methods such as advanced microscopy (e.g., µCT, FIB-SEM, and MRI), as well as the machine learning algorithms employed to reconstruct their results (in this case study, µCT and FIB-SEM reconstructions of pearls could be utilized to further enrich the detailed representation of the nacre material composition).
  • Enriching the state of the art of the research-based design process, thanks to the rapid and varied results of AI-generated images, while increasing the interaction and sharing of knowledge among artists, designers, researchers, and programmers.
  • Opening new research horizons for developing automatic referencing generator models embedded in AI-generated images, with an accurate probability distribution of visual similarity to the training data used; this would give more insight into and control over the unsupervised learning process and consequently contribute to the optimization of these text-to-image models.
  • Useful for the research and data collection phase at the beginning of the design process, for brainstorming and inspiration.
  • Encouraging research in simulating the physical characteristics of high-value materials while using recycled, sustainable, and cheap materials, which introduces a new concept of material value.
Threats
  • Vagueness of the training data sources, resulting in authorship and copyright violation of the original authors of these data.
  • Copyright and authorship violation of the authors of AI-generated images due to the free, unpermitted circulation (reuse) of prompts and results on the open platforms and communities of these AI text-to-image models.
  • Threats to design originality, novelty, and quality arising from the obligation to rapidly share and publish the obtained results in an attempt to protect the authorship of novel design/research concepts and ideas.
  • Misleading judgment of the authorship of AI-generated images, which might result in artists'/designers' unemployment.
  • Affecting the main design phase, namely the originality and significance of the design concept.
Table 3. The Design Process Criteria for AI-Aided Biomaterials Research-Driven Design.
Criterion | Description | Phase of Design Process
Form Finding (Bottom-Up) Vs. Form Making (Top-Down)
  • AI image generator models are recommended as form-finding tools in biomimetic research/design cases.
  • AI image generator models are not recommended for complex indirect levels of biomathematical modeling inferred from biobehavioral logics.
  • AI image generator models are insufficient for form-making top-down design approaches for intended geometry or form generation.
  • To avoid data overfitting or underfitting, it is not recommended to use complex text prompts.
  • Design Concept
  • Design Methodology
  • Form Generation Approach
Material Morphology from Nano to Meso/Macro
  • AI text-to-image models are highly recommended for high-fidelity realistic rendering of the material physical characteristics on the meso or macro scale.
  • However, the rendering of material microscopic composition (tissue, cell, and intracellular) scale level depends significantly on the availability of training data in this regard from the literature on employing high-end characterization and identification microscopy and analysis methods that apply AI pattern recognition algorithms to reconstruct their results (e.g., FIB-SEM, µCT). Thus, the authors highly recommend the collaboration and establishment of high-resolution microscopy image datasets with their proper referencing to contribute to the enrichment of the design process and research accuracy when using AI image generators.
  • Design Concept
  • Design Methodology
  • Design Research
  • Form Generation Approach
  • Rendering and Presentation
  • Biodigital Fabrication (Biotechnological Practices, e.g., Biomanufacturing, 3D Bioprinting) and Digital Fabrication
Auxiliary Model for Automatic Referencing Generator (Training Data and AI-Generated New Knowledge)
  • To guarantee a fair practice of integrating AI text-to-image models in the design process, regarding the sources and the copyright status of the training data used for these AI image generation models, the authors propose the embedding of an auxiliary automatic referencing generator (ARG) within the text-to-image model.
  • This ARG model can be employed either for referencing the training data with the highest visual similarity to the latent variables generated by the AI, for referencing the AI-generated images to their users (designers), or for both. Some recent studies have proposed various methods for similarity detection [59,60,61]. The developed ARG model will be presented by the authors in a future study, so as not to exceed the scope and objectives of the current study, which is focused on biomaterials research-driven design employing AI text-to-image models within this specific design process. A brief explanation of the proposed ARG model is presented in the following paragraphs:
  • The proposed model operates on two levels. The first is data classification prior to processing, where the ARG automatically detects and references all the images scraped from the internet for a specific text prompt (keyword). Alternatively, in the final phase, after the latent variables have been generated, the ARG model would employ similarity learning for regression and classification, either by utilizing a similarity function that measures how similar an AI-generated image is to each training image, or by employing cluster analysis, typically applied in data classification and labeling, in which a similarity metric defines the similarity between members of the same cluster and the separation between clusters. In the latter case, clustering facilitates the subsequent referencing of the training data and the detection of relevance between a latent variable and the training data in a specific cluster, so that only the cluster most relevant to the result is referenced, reducing the required processing capacity and time (a minimal illustrative sketch of this similarity-and-clustering approach is given after Table 3). The proposed model may apply one or more of the following methods, mainly for referencing the training data and their relevance to the latent variables:
  • Variational Bayesian methods are employed in multifaceted statistical models involving observed variables (data) as well as unknown parameters and latent variables. They offer a principled approximation of the posterior probability of the unobserved variables, enabling statistical inference based on them. This makes the approach suitable for detecting the relation between the latent variables, treated as unobserved data, and the training data fed to the model: the latent variables can be analyzed in terms of their likelihood with respect to the model's training data, making it possible to detect which image(s) from the training data contributed to the generation of the obtained latent variables. This similarity detection would employ a web-based referencing search model to generate accurate references for these images together with their copyright status. However, it is mainly directed at referencing the training data, not at referencing the AI-generated images to their designers.
  • The unrestricted Boltzmann machine (URBM) is a generative stochastic artificial neural network that learns a probability distribution over its set of inputs, typically applied in dimensionality reduction [62], classification, and collaborative filtering. Unlike the restricted Boltzmann machine, whose bipartite graph allows connections only between pairs of nodes from the two groups of units (the "visible" and "hidden" units, respectively), the unrestricted variant also permits connections between hidden units. Thus, the URBM could be more suitable for the auxiliary ARG model, for referencing both the training data and the AI-generated images (a minimal Boltzmann-machine sketch is given after Table 3).
  • These two proposed models will be extended and described in detail in a future study, given the tight scope of the current one. Interestingly, both are subclasses of the generative model family, which also includes the variational autoencoder, GANs, and the diffusion model; this keeps the proposed referencing models in harmony with the text-to-image generation models themselves.
  • Design Research
  • Biomaterials Research-Driven Design—Data Validation
Translate 2D Images to 3D Models
  • The AI text-to-image models are suitable for form-finding and exploration processes. Because some of the AI-generated images are so complex that they would require extensive 3D digital modeling with conventional digital design methods, it is essential to develop new methods to translate the 2D images into 3D digital models in various 3D object formats (OBJ, STL, etc.). This should revolutionize the design process, facilitating the rapid generation of novel and creative designs fed by the human designer's imagination. Some recent studies have reported various methods for transferring 2D images to 3D models [63,64]. One promising method is the use of point clouds [65] for the conversion of 2D AI-generated images into 3D digital design models (a minimal depth-to-point-cloud sketch is given after Table 3). The points can later be converted to polygon or triangle mesh models, NURBS surface models, or CAD models through surface reconstruction. This can be achieved using convolutional neural networks (CNNs), converting 2D CNNs to 3D CNNs to generate 3D objects from 2D images.
  • Form Generation Approach
  • Form 3D Modeling
  • Design’s Technical Aspects (Study, Simulation, and Optimization)
  • Rendering and Presentation
  • Biodigital Fabrication (Biotechnological Practices, e.g., Biomanufacturing, 3D Bioprinting) and Digital Fabrication
Integration of Materials Research-Driven Design
  • The integration of material composition at the physical-chemical and/or histological-anatomical levels significantly enriches the form-finding process and increases the margins of creativity and novelty of the outcomes when utilizing AI text-to-image models, owing to the structural complexity, variation, and fractal morphology of biomaterials. Together with high-end characterization imaging techniques, both will push novelty and creativity forward in the design process.
  • Design Concept
  • Design Methodology
  • Design Research
  • Form Generation Approach
  • Biodigital Fabrication (Biotechnological Practices, e.g., Biomanufacturing, 3D Bioprinting) and Digital Fabrication
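The similarity-and-clustering route described under the "Auxiliary Model for Automatic Referencing Generator" criterion can be illustrated with a short sketch. The following Python code is only an illustrative assumption, not the ARG implementation to be published by the authors: it presumes that image embeddings have already been computed by any image encoder (a CLIP-style model, for instance), clusters the training-image embeddings with k-means, and, for a given generated image, ranks only the nearest cluster's members by cosine similarity so that the most likely training sources can be referenced. The metadata fields (source_url, license) are hypothetical placeholders.

# Minimal sketch of cluster-then-rank training-data referencing (assumed design,
# not the authors' ARG model). Embeddings are assumed to come from any image
# encoder; metadata entries carry hypothetical reference fields.
import numpy as np
from sklearn.cluster import KMeans

def _normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def build_reference_index(train_embeddings, n_clusters=50):
    """Cluster normalized training embeddings once, offline."""
    emb = _normalize(np.asarray(train_embeddings, dtype=float))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(emb)
    return emb, km

def reference_candidates(generated_embedding, emb, km, metadata, top_k=5):
    """Return the top-k most similar training items from the nearest cluster."""
    q = _normalize(np.asarray(generated_embedding, dtype=float).reshape(1, -1))[0]
    cluster = int(km.predict(q.reshape(1, -1))[0])      # restrict search to one cluster
    idx = np.where(km.labels_ == cluster)[0]
    sims = emb[idx] @ q                                  # cosine similarity (unit vectors)
    order = idx[np.argsort(sims)[::-1][:top_k]]
    return [{"similarity": float(emb[i] @ q), **metadata[i]} for i in order]

# Hypothetical usage: metadata[i] = {"source_url": "...", "license": "..."}; the
# returned records could be attached to the generated image as its reference list.

The restriction to the nearest cluster reflects the processing-time argument made in the criterion: only the most relevant subset of the training data is compared and referenced.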
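The Boltzmann-machine candidate can likewise be illustrated only approximately. The sketch below, again an assumption rather than the authors' ARG design, implements the restricted variant (visible-hidden connections only) trained with one-step contrastive divergence on binarized feature vectors; the unrestricted form named in the criterion differs in also allowing intra-layer connections. The idea that the learned hidden activations could serve as compact codes for comparing generated images with training images is part of the assumption.

# Minimal restricted Boltzmann machine with one-step contrastive divergence (CD-1),
# given as an illustrative stand-in for the Boltzmann-machine family; inputs are
# hypothetical binarized 256-D image descriptors.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0):
        """One contrastive-divergence update on a batch of binary vectors."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

# Hypothetical usage: learn 64-D hidden codes for 512 binarized descriptors.
data = (rng.random((512, 256)) > 0.5).astype(float)
rbm = RBM(n_visible=256, n_hidden=64)
for _ in range(20):
    rbm.cd1_step(data)
codes = rbm.hidden_probs(data)   # compact codes usable for similarity comparison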
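For the point-cloud route named in the "Translate 2D Images to 3D Models" criterion, a minimal sketch is given below. It assumes that a per-pixel depth map has already been obtained for the AI-generated image (for example from any monocular depth estimator) and back-projects it into a 3D point cloud with a pinhole camera model using assumed intrinsics; surface reconstruction into mesh, NURBS, or CAD models is left to downstream tools.

# Minimal sketch: back-project an (assumed) depth map of an AI-generated image
# into a 3D point cloud using a pinhole camera model with assumed intrinsics.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of depth values; returns (H*W, 3) XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                             # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical usage with a placeholder depth map and assumed intrinsics.
depth = np.full((480, 640), 2.0)                      # constant 2 m depth, for illustration
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
# `points` can then be exported (e.g., as .ply/.obj) for surface reconstruction
# in a CAD/3D modelling package prior to digital fabrication.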
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
