Article

A Generative AI-Driven Industrial Design Framework for Human–GenAI Co-Creation

1 School of Design, Xi’an Technological University, Xi’an 710021, China
2 School of Mechanical Engineering, Xi’an Technological University, Xi’an 710021, China
3 School of Creative Design, South China Normal University, Shanwei 516600, China
* Authors to whom correspondence should be addressed.
Symmetry 2026, 18(2), 352; https://doi.org/10.3390/sym18020352
Submission received: 29 December 2025 / Revised: 2 February 2026 / Accepted: 11 February 2026 / Published: 13 February 2026
(This article belongs to the Section Computer)

Abstract

Generative AI (GenAI) is accelerating design space exploration and multimodal prototyping in industrial design (ID), bringing new efficiencies and possibilities to early-stage ideation and cross-media expression. Yet many studies do not clearly define stage-wise human–GenAI roles, preserve constraints as traceable cross-stage artifacts, or provide verifiable stage-wise evaluation, undermining traceability in both concept convergence and concept-to-engineering handover. To address these issues, this paper proposes GID-HGCC, a GenAI-driven human–GenAI co-creation ID framework that links four core stages: requirements confirmation, concept generation, concept evaluation, and 3D modeling. First, it specifies stage-wise responsibilities and defines the corresponding inputs and outputs. Second, it establishes a traceable cross-stage artifact flow—“structured prompts–candidate concepts–evaluation outputs–3D engineering issue list”—to support continuous constraint transmission and explicit documentation. Third, it integrates a multi-dimensional evaluation criteria system with IVIFNs–CRITIC–TOPSIS for concept ranking, and further strengthens convergence reliability via preference–consistency diagnostics. The framework is validated through a case study on a portable passive cervical spine rehabilitation training device. Expert preferences over stage-wise co-creation artifacts exhibit an overall medium-to-high level of consistency, and the Top-5 overlap between each expert and the group ranking ranges from 0.80 to 1.00. These results demonstrate that GID-HGCC offers an operational reference for constraint-guided human–GenAI co-creation in ID, improving traceability and handover reliability from requirements confirmation to engineering refinement.

1. Introduction

Industrial design (ID) links user needs, technological innovation, commercial value, and manufacturing. It is often described as a canonical workflow—research, ideation, concept generation, prototype testing, and modeling. Yet early-stage concept exploration is still constrained by limited time and manpower. As a result, exploration often relies on designers’ tacit knowledge and intuition. This can narrow the creative scope and lead to early lock-in of solutions, thereby reducing downstream decision quality and iteration efficiency [1]. Generative artificial intelligence (GenAI), enabled by large language models (LLMs) and diffusion-style generative modeling, extends AI from recognition to multimodal synthesis [2]. GenAI can generate text, images, and even 3D content, thereby accelerating design-space exploration and expanding both the breadth and depth of ideation [3,4,5,6]. However, unconstrained GenAI ideation may produce outputs that appear plausible but drift from requirements, making results difficult to control and justify in professional design settings [7]. Human–GenAI co-creation offers a practical way to leverage GenAI’s generative capacity while maintaining design control [8]. In this mode, designers set goals and constraints and make final judgments, while GenAI supports high-throughput exploration under constraint injection and supervision.
Existing research on GenAI in ID can be grouped into two streams: workflow orchestration and design reasoning and decision support. At the workflow orchestration level, GenAI is embedded in platform-based “generate–evaluate–iterate” loops, which can shorten design cycles and improve perceived outcomes [9]. Within this stream, LLMs are often used to synthesize heterogeneous inputs into structured requirement artifacts by surfacing latent needs, extracting domain knowledge, and formalizing design semantics via targeted prompting. This complements established methods such as QFD, functional analysis, and TRIZ [10]. For design reasoning and decision support, GenAI can quickly transform scattered design requirements and preferences into a unified design solution under specified parameters, and facilitate more intuitive communication and consensus-building among stakeholders during the iterative process through shared formats [11]. For instance, text-to-image models (e.g., Midjourney, Stable Diffusion, Nano Banana Pro) translate design semantics into visual feature candidates that enable intuitive discussion and negotiation [12]. Empirical studies further suggest that GenAI can increase the number of novel ideas by providing heuristic images based on conceptual similarity, thereby supporting designers’ creativity [13].
Despite this progress, many studies still emphasize process standardization and toolchain integration. They often provide limited engineering-oriented descriptions of how human designers and GenAI collaborate across key stages. In particular, stage-wise task division, intermediate deliverables, and cross-stage artifact traceability are frequently underspecified. Consequently, stage outputs (e.g., structured prompts, candidate concepts, evaluation outputs, and modeling issue lists) may remain fragmented and loosely connected, making it difficult to build a traceable artifact-and-constraint flow across the process. More importantly, form-control constraints—such as symmetry/near-symmetry targets, intentional asymmetry required by ergonomics and interface layout, and assembly and interface requirements—are often not represented in an explicit and transferable form, which weakens constraint continuity. This can introduce information loss or semantic drift during cross-stage migration, from requirements translation through generation and evaluation to modeling, increasing the risk of misalignment between the final form and the original semantic intent.
Therefore, constructing a GenAI-driven design framework requires attention to three issues. (1) Specify the responsibilities and division of labor between human designers and GenAI across key design stages while leveraging their respective strengths [14]. (2) Establish a traceable cross-stage artifact flow, so that stage-wise co-creation deliverables serve as explicit carriers of constraint expressions and can be propagated with traceable linkage across stages [15]. (3) Given the large volume of GenAI-generated alternatives, develop a multidimensional evaluation index system centered on design and other context-relevant factors, together with explicit criteria that operationalize structural intentions (e.g., symmetry, near-symmetry, and function-driven asymmetry), to avoid the fallacy that “generation implies validity” and to identify genuinely valuable candidate solutions [16].
Building on the foregoing analysis, we propose a GenAI-driven ID framework for human–GenAI co-creation, hereafter referred to as GID-HGCC. The framework is anchored in four stages: requirements confirmation, concept generation, concept evaluation, and 3D modeling. It provides a continuous, cross-stage constraint mechanism by integrating multi-source heterogeneous data input, structured text prompt generation, candidate solution generation, fuzzy multi-criteria evaluation of candidate solutions, and 3D modeling problem structuring. This ensures the traceability of requirements translation across stages and provides interpretable decision support for downstream 3D model refinement. Notably, the constraints considered in this study differ from hard geometric parameter constraints in engineering; instead, they are operationalized through explicit prompts, evaluation criteria, and other artifacts. This study makes three contributions.
(1) We propose the GID-HGCC framework spanning requirements confirmation, concept generation, concept evaluation, and 3D modeling, explicitly defining stage-wise human decision authority and GenAI generation/assistance roles to make co-creation operational rather than tool-centric.
(2) We establish a cross-stage constraint transfer mechanism based on traceable artifact flows. This mechanism makes explicit the transformation path of design constraints through co-created artifacts such as structured prompts, candidate concepts, evaluation outputs, and a checklist of 3D modeling issues for the best concept, supporting the traceability of the design process.
(3) We rank and select the preferred concept using fuzzy multi-criteria evaluation, and then validate the resulting ranking via an expert preference consistency check to confirm stable expert consensus. This indicates that, under the continuous guidance of cross-stage constraints, the stage-wise artifacts of human–GenAI co-creation elicit stable, consistent expert preference judgments.

2. Related Works

2.1. GenAI Enabled Design Workflow

GenAI is widely expected to drive near-term industrial transformation, with forecasts indicating sizable economic impact and rapid market growth [17,18]. It has already permeated digital content creation, and is increasingly used in creative design to generate visual assets (e.g., interfaces, logos, posters, advertisements, and fonts) [19], layouts [20], and paintings [21], as well as to support product concept visualization and early structural ideation. Especially in the design field, GenAI is increasingly leveraged to augment established methods, workflows, and toolchains. By coupling human intent with GenAI-enabled generation, designers can explore larger sets of requirement-aligned alternatives, which has been discussed as a driver of “design democratization” and a catalyst for process change [22]. Specifically, at the method level, Jiang et al. [23] linked generative design with personalized mass customization through automated shape synthesis and structural design. Yang et al. [24] proposed an intelligent customization approach for complex products under dynamic uncertainty, operationalizing the process with quantitative models for classification, optimization, probabilistic reasoning, decision-making, and knowledge representation. Pan et al. [25] combined LLMs with knowledge graphs to support context-aware conceptual design through a closed-loop method integrating demand mining, knowledge retrieval, and solution generation. In terms of workflows, Fang et al. [26] proposed a GenAI-enhanced concept design framework grounded in the Double Diamond and analyzed collaboration mechanisms across stages. Zhou and Chen further analyzed how LLMs can support different stages of the same model, framing LLMs as both tools and design materials [27]. Wang et al. [28] linked user needs to design features and proposed a multimodal generative framework with integrated evaluation for rapid 3D previews.
Regarding toolchains, Kim and Maher [29] studied how AI-enabled collaborative innovation design tools affect the novelty, diversity, and quantity of ideas. Li et al. [30] introduced ProdGen and a corresponding LLM-based agent to automate the complex product design pipeline from requirements to solution. Lu et al. [31] used ChatGPT/Midjourney/Vega AI to streamline product form design and validated AI-generated imagery for affective design. Liu et al. [32] developed requirement-driven pipelines with engineering semantics for vehicle exterior generation.
Collectively, these works demonstrate that introducing GenAI into the design field is feasible and necessary from the perspectives of methods, workflows, and toolchains. However, there are shortcomings in how constraints at different stages of the ID process are explicitly represented and communicated. Specifically, the boundaries of responsibility between humans and GenAI at each stage are not effectively defined, and the inputs and outputs at each stage often remain at a descriptive level. Furthermore, constraints are often treated as implicit experience within the design process and are not effectively organized into a traceable artifact flow through the use of artifacts. Therefore, future research on GenAI-driven design frameworks should address these issues by clearly defining the responsibilities of humans and GenAI, and establishing a traceable artifact flow through the use of stage-specific deliverables. This will help reduce constraint decay and semantic drift across stages, and improve the traceability and reliability of the concept-to-engineering handover process.

2.2. Stage-Wise GenAI Applications

Industry analysts anticipate accelerating GenAI adoption and increasingly multimodal capabilities, which together strengthen the near-term relevance of GenAI for ID [33,34]. Correspondingly, prior work has begun to embed GenAI across the ID process rather than treating it as a standalone tool. To clarify where and how GenAI contributes, this review synthesizes the literature stage by stage, organizing representative applications into four core phases: requirements confirmation, concept generation, concept evaluation, and 3D modeling.
Requirements confirmation. Traditional approaches (e.g., questionnaires, focus groups, interviews) remain valuable but can be constrained by sample size and subjective interpretation. LLMs are increasingly used to synthesize heterogeneous evidence—such as product reviews, standards, patents, and visual references—into more structured requirement statements and constraint descriptions, reducing time spent on manual searching and consolidation. As a representative LLM, ChatGPT can efficiently retrieve and synthesize relevant evidence into concise requirement statements from targeted prompts [35]. When used for evidence-to-brief support, LLMs can synthesize dispersed evidence into constraint-aware requirements (e.g., recurring themes, sentiment cues, and user segments), thereby reducing information-filtering burden and enabling faster requirement prioritization and early market acceptance estimation [36]. In terms of information retrieval and mining, GenAI can turn unstructured and multimodal evidence into representations that facilitate efficient retrieval and synthesis. Ghali et al. [37] proposed a generative text retrieval model (GTR), which combines LLM generation with vector-database retrieval and achieves over 90% accuracy on manually annotated datasets. Zhang et al. [38] introduced a visual implicit knowledge distillation framework (VIKDF) to enhance LLM dialogue generation in zero-resource settings, supporting deeper requirement mining.
Concept generation. Concept generation is a critical phase in ID because it externalizes the designer’s initial understanding of the problem and solution space, significantly shaping downstream cost and overall design outcomes [39]. Text-to-image models can translate prompts into visual renderings, enabling rapid concept exploration and iteration [40,41]. Alcaide-Marzal and Diego-Mas [42] explored text-to-image GenAI in computer-aided concept design and its value for shape exploration. Wang et al. [43] developed ViMimic by coupling an LLM with analogy-based structured retrieval to automate analogy retrieval and recombination for ideation. Chen et al. [44] developed AskNatureNet to extract biologically inspired design (BID) knowledge and generate biomimetic concepts in natural language. Image-to-image models transform input images (e.g., sketches or wireframes) into semantically consistent renderings, supporting fast visual refinement. Wu et al. [45] generated car front images from hand-drawn wireframes to better align outputs with designer specifications. Lee et al. [46] developed an Eco-Innovation Assistant based on GenAI, providing eco-innovation solutions via design sketches. Liu et al. [47] introduced Sketch2Photo to produce photorealistic images from partial sketches or edge maps. Beyond single-modality pipelines, multimodal workflows further broaden exploration. Edwards et al. [48] introduced a Sketch2Prototype framework for rapid concept exploration that progresses from sketches to text, text to images, and images to 3D models. Cai et al. [49] developed DesignAID to generate text ideas via LLMs and render them as images. Yong et al. [50] integrated feature extraction with sketch inversion for watch design. Zhang et al. [51] introduced IGDT-MFP to support interactive mouse-appearance design via feature-to-parameter mapping.
Concept evaluation. GenAI can enhance decision-making quality by combining multimodal generative outputs with semantic evaluation data, providing more evidence-informed support for screening and ranking design concepts [52]. In the automotive industry, GenAI-enabled simulation in virtual environments allows concept evaluation before physical prototyping, helping detect potential issues earlier and reducing iteration cost and time-to-market [53]. LLMs further support rapid prototyping by enabling early feasibility checks and more targeted allocation of design resources [54]. Tsumoto et al. [55] proposed a deep-learning-based concept identification framework that clusters large sets of alternatives and organizes them into comparable design concepts via classification. Chen et al. [56] introduced an LLM-based cross-cultural design supervision approach (CO-STAR) to structure cultural evidence and support decision-making through multi-level evaluation indicators. Dorri et al. [57] developed an AI-driven VR platform that integrates neural preference prediction with TOPSIS to rank solutions and relay selected outcomes to the design team, improving decision efficiency and customer satisfaction. In parallel, benchmarking efforts such as AIGCBench provide standardized evaluation dimensions (e.g., alignment, motion, temporal consistency, and quality) for assessing generative outputs, offering references for future evaluation studies [58].
3D modeling. GenAI can autonomously generate lightweight, high-strength, and cost-effective 3D models that satisfy specific mechanical-property, material-distribution, and manufacturing constraints. Li et al. [59] proposed a 3D product generative design model that utilizes GANs and target-embedding variational autoencoders. Liang et al. [60] integrated large-scale vision–language models with topology optimization, proposing LMTO to decompose structural semantics and support automatic generation and efficient editing of structural concepts. Zang et al. [61] introduced text2shape based on a “requirement-function-behavior” structure, feeding engineering-semantic requirement texts into a conditional Wasserstein generative adversarial network (CWGAN) to produce corresponding 3D models. Park et al. [62] combined a β-variational autoencoder (β-VAE) generator with a deep neural network (DNN) agent to explore and optimize geometric shapes for prefabricated parts. Ajay et al. [63] combined neural rendering with multimodal images and text to generate 3D shape models and colors. Zhou and Camba [64] integrated LLMs into parametric CAD systems (e.g., CadVLM, CAD-Assistant, CADgpt) to infer design intent from multimodal inputs and assist generation, refinement, and geometric constraint completion. Jiang et al. [65] developed AutoTRIZ, an LLM-based tool that generates technical solutions automatically from user problem descriptions, following the systematic reasoning process of TRIZ.
Based on the above analysis, Figure 1 summarizes the stage-wise roles of GenAI in supporting core ID activities. Table 1 outlines representative GenAI technologies at each stage and their respective advantages. Overall, existing research shows steady progress in GenAI’s stage-specific support, including evidence-based requirement structuring, rapid concept visualization, data-supported concept evaluation, and early 3D model generation. However, most studies still optimize outputs within individual stages, while the handoffs between adjacent stages remain insufficiently specified. As a result, requirement semantics and constraint intent are often reinterpreted during transitions, undermining continuity. This motivates organizing the core stages of ID around the sustained, explicit representation of constraints and establishing a traceable cross-stage artifact flow—via structured stage outputs—to reduce semantic drift and constraint loss.

2.3. Human–GenAI Co-Creation

As an emerging interdisciplinary topic, human–GenAI co-creation investigates how human creativity and judgment can be coupled with GenAI’s generative and computational capabilities to produce outcomes that neither party can achieve alone [66]. Recent studies have examined this phenomenon from multiple perspectives, including conceptual framing, task allocation, collaboration strategies, application contexts, and trust calibration. Conceptually, co-creation is often described as a complementary coupling: humans contribute creativity, contextual understanding, and ethical judgment, while GenAI contributes fast synthesis, pattern discovery, and scalable generation, together supporting information exchange, decision-making, and task execution toward shared goals [67,68].
Task allocation studies consistently report that humans are better suited to handle creative tasks and decision-making, while GenAI excels in labor-intensive tasks such as color matching, composition, and layout [14,15,26]. Shi et al. [8] proposed a collaborative enhancement framework in which designers focus on discovery, visualization, creation, and testing, while AI contributes understanding and adaptability. Vaccaro et al. [69] found that while combining humans and AI in decision-making tasks can lead to performance losses, it produces synergistic effects in creative tasks. Hao et al. [70] noted that GenAI can reduce cognitive bias and support analytical decision-making, but may also foster over-reliance on generated insights. Wang et al. [5] explored LLMs in customized generative design, proposing three collaboration schemes—passive, auxiliary, and active—to help designers select appropriate LLM performance-enhancement strategies.
In terms of application contexts, human–GenAI co-creation is most prominent in design fields such as product, graphic, architectural, fashion, and game design [19,21,28,71]. GenAI aids creative divergence and concept generation, aligns user needs with design models via natural language processing, and supports the development of augmented reality-based collaborative design platforms. Trust has also been recognized as a key condition for effective collaboration. Ding et al. [72] developed a Bayesian model to predict human trust under varying AI capability levels, providing a basis for trust calibration and risk analysis in human–AI systems. Figure 2 summarizes the core advantages of human–GenAI collaboration in co-creation.
As illustrated in Figure 2, humans remain stronger in problem framing, value judgments, and creative leaps, whereas GenAI is efficient at large-scale synthesis, variation generation, and evidence processing. Their roles are therefore complementary: GenAI can expand visual communication and reduce the burden of information filtering, which supports faster ideation and the co-evolution of design problems and solutions; meanwhile, human guidance can steer generation toward feasible, responsible outcomes by injecting contextual and ethical constraints. However, existing research has paid limited attention to the evaluability and verifiability of human–GenAI co-created outputs across design stages. In particular, co-created concepts are seldom assessed using standardized criteria and well-defined methodologies, making it difficult to determine whether experts with different backgrounds reach a shared level of acceptance. Moreover, there is insufficient evidence on whether co-created outputs can stably converge as the process advances. These gaps motivate the integration of multi-criteria decision-making methods and preference–consistency diagnostics into human–GenAI co-creation workflows, so as to support reliable concept convergence and improve decision traceability in co-creation settings.
To address three recurring gaps in prior work—unclear stage-wise role boundaries between humans and GenAI, the lack of traceable carriers for cross-stage constraint transfer, and the absence of stage-specific evaluation for co-created outputs—this study proposes a GID-HGCC framework spanning four stages: requirements confirmation, concept generation, concept evaluation, and 3D modeling. The framework specifies stage-wise inputs and outputs while clarifying human/GenAI responsibilities, and operationalizes cross-stage traceability through an artifact flow of structured prompts, candidate concepts, evaluation outputs, and 3D engineering-issue lists. In addition, it integrates IVIFNs–CRITIC–TOPSIS with preference–consistency diagnostics to provide decision support for reliable concept convergence.

3. Materials and Methods

3.1. Research Design

This study addresses the decay and drift of requirement semantics and constraint intentions during cross-stage transitions in human–GenAI co-creation across key stages of industrial design. Building on an analysis of prior work in GenAI-assisted design, industrial design practice, and human–AI co-creation, we propose a GID-HGCC framework (Figure 3). The framework is organized into four stages: requirements confirmation, concept generation, concept evaluation, and 3D modeling. First, it specifies stage-wise responsibilities and task boundaries between human decision-makers and GenAI, making the co-creation process describable and executable. Second, it introduces a cross-stage mechanism for constraint continuity by treating structured prompts, candidate concepts, evaluation criteria and outputs, and a 3D modeling issue list as explicit carriers of constraint expressions, thereby forming a traceable artifact flow. Table 2 summarizes the human–GenAI role allocation, the input–output artifacts of each stage, and how constraint information is carried and traced across the workflow. Section 4 presents a case study to operationalize the GID-HGCC framework. In the concept evaluation phase, the framework combines fuzzy multi-criteria evaluation with preference–consistency testing to examine whether criterion-based selection of candidate concepts yields a stable expert preference consensus. This examination is conducted under the condition that constraint-carrying artifacts are explicitly propagated across stages.

3.2. Requirements Confirmation

In this stage, multiple data sources are utilized to address the limitations of relying on a single form of data, such as insufficient precision and comprehensiveness. The proposed framework integrates various input data, including design parameter documents, user reviews, reference images of market products, hand-drawn sketches, 3D design files, design standards, industry norms, and patents. Specifically, design parameter documents provide detailed information on the product’s structure, material processes, human-product interaction methods, performance parameters, aesthetic requirements, and usage scenarios. They also enable symmetry/near-symmetry and justified local asymmetry to be specified as interpretable structural constraints. User reviews reflect user satisfaction and emotional needs regarding related products. Reference images from the market showcase the stylistic language, color schemes, and compositional trends of similar products. Hand-drawn sketches capture the designer’s initial concepts and intentions, adding a level of human input and stylized expression to the requirement definition. 3D design files illustrate the product’s basic structure and assembly relationships. Design standards define mandatory safety, stability, and human-product interaction requirements specified by regulations. Industry norms and patents highlight existing technologies and potential innovation gaps.
By feeding design positioning (I1) together with manually curated and verified multi-source data (I2) into an LLM, it can produce a constraint-aware structured prompt text (O1) for subsequent generation and evaluation. In this step, human decision-makers set objectives and oversee quality control, while GenAI organizes dispersed materials into a prompt-ready specification. Crucially, key morphology-related constraints—such as symmetry/near-symmetry cues and function-driven intentional asymmetry—are made explicit in O1 rather than remaining implicit in narrative descriptions. During co-creation, designers review O1 and add targeted constraint patches when omissions or conflicts are detected (e.g., functional priorities, structural cues, performance bounds, or prohibitions). For instance, in ergonomic chair design, a text-only prompt (e.g., “a high-performance ergonomic chair inspired by dragonfly-wing venation”) is often too abstract to convey the venation’s structural logic and performance implications, yielding outputs that look plausible yet deviate from user and engineering requirements. By contrast, when relevant standards, posture/load requirements, material constraints, and mechanism references are pre-integrated and quality-checked, O1 becomes more standardized, actionable, and traceable—reducing speculative drift and avoiding the mistaken inference that a generated result is inherently well-justified.
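As a purely illustrative sketch, the constraint-aware structured prompt O1 can be thought of as a machine-readable specification that is flattened into prompt-ready text, keeping symmetry and asymmetry constraints explicit rather than buried in narrative. All field names and values below are hypothetical and not prescribed by the framework:

```python
# Hypothetical structure for a constraint-aware prompt O1; field names and
# values are illustrative assumptions, not part of the GID-HGCC specification.
o1_prompt = {
    "design_positioning": "portable passive cervical spine rehabilitation trainer",
    "functional_priorities": ["adjustable neck support", "passive traction"],
    "form_constraints": {
        "global_symmetry": "bilateral near-symmetry about the sagittal plane",
        "intentional_asymmetry": "one-sided adjustment dial for right-hand reach",
    },
    "performance_bounds": {"mass_kg_max": 1.5},
    "prohibitions": ["sharp edges near chin rest"],
    "evidence_sources": ["design standards", "user reviews", "patents"],
}

def to_prompt_text(spec):
    """Flatten the structured specification into prompt-ready text."""
    lines = [f"Design positioning: {spec['design_positioning']}"]
    lines += [f"Must: {p}" for p in spec["functional_priorities"]]
    lines += [f"Form: {k} = {v}" for k, v in spec["form_constraints"].items()]
    lines += [f"Limit: {k} <= {v}" for k, v in spec["performance_bounds"].items()]
    lines += [f"Avoid: {p}" for p in spec["prohibitions"]]
    return "\n".join(lines)
```

Keeping O1 in a structured form like this makes constraint patches reviewable line by line and preserves the artifact for downstream traceability.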

3.3. Concept Generation

In the concept generation stage, the structured prompt text (I3, i.e., O1) produced in the requirements confirmation stage is used as the primary driver for generation, optionally supplemented with visual references (I4, part of I2). Candidate concepts (O2) are then produced using text-to-image and image-to-image models, while humans remain responsible for review, patching, and regeneration to keep outputs aligned with the intended semantics and constraints. Rather than pursuing unconstrained diversity, this stage operationalizes a controlled iteration loop: when the generated concepts show minor deviations from O1, designers apply targeted patches and regenerate; when major deviations occur, the process returns to revise O1 so that the constraint specification itself is corrected. Through this constraint-based processing, overall symmetry/near-symmetry cues and function-driven intentional asymmetry are incorporated into the iterative generation process until candidate design concepts that are visually coherent and structurally proportionate are generated. These concepts serve as input for the concept evaluation phase and as traceable artifacts for reverse modification guided by the concept evaluation results.
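The controlled iteration loop described above can be sketched as follows. Here `generate`, `review_deviation`, `apply_patch`, and `revise_o1` are hypothetical stand-ins for the text-to-image call and the designer's review actions; the `history` list mirrors the traceable artifact trail the framework requires:

```python
# Minimal sketch of the generate-review-patch loop; all callables are
# hypothetical stand-ins, not the authors' implementation.
def co_create_concepts(o1, generate, review_deviation, apply_patch,
                       revise_o1, max_rounds=5):
    prompt = o1
    history = []  # traceable artifact trail: (prompt, concepts, verdict)
    for _ in range(max_rounds):
        concepts = generate(prompt)
        verdict = review_deviation(concepts, prompt)  # 'ok' | 'minor' | 'major'
        history.append((prompt, concepts, verdict))
        if verdict == "ok":
            return concepts, history
        if verdict == "minor":
            prompt = apply_patch(prompt)   # targeted constraint patch, keep O1
        else:
            prompt = revise_o1(prompt)     # major deviation: revise O1 itself
    return concepts, history
```

The design choice worth noting is that every round appends to `history`, so even rejected candidates remain linked to the exact prompt state that produced them.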

3.4. Concept Evaluation

This stage aims to select the best concept from multiple GenAI-assisted alternatives, which requires a robust evaluation scheme. Given the high visual diversity of GenAI-generated concepts, together with residual uncertainty in user fit and ethical compliance [73], this stage establishes a six-dimensional evaluation criteria system spanning design, technology, society, economy, environment, and ethics (Table 3). The criteria are defined by the research team, and GPT–5.1 Thinking is used to harmonize the wording and interpretation of each criterion to ensure consistent understanding among experts. This unified reference supports systematic screening of candidate concepts with respect to visual plausibility, structural proportions, and overall feasibility. In these dimensions, design, technology, society, and ethics are considered positive criteria, while economy and environment are treated as negative criteria. In terms of economic performance, both the complexity of components and the extent of surface decoration are considered negative criteria, as they contribute to increased production costs and may reduce cost-effectiveness.
Within the proposed framework, the concept evaluation process is demonstrated using a fuzzy multi-criteria decision-making (IVIFNs–CRITIC–TOPSIS) method, leveraging the strengths of human judgement. Suppose $A=\{A_1,A_2,\ldots,A_m\}$ is the set of candidate design concepts, $C=\{C_1,C_2,\ldots,C_n\}$ is the set of evaluation criteria, and $L=\{L_1,L_2,\ldots,L_s\}$ is the set of decision-makers. During the initial evaluation, each decision-maker provides ratings on a nine-level linguistic scale, which are subsequently converted into fuzzy information. Since interval-valued intuitionistic fuzzy numbers (IVIFNs) offer a richer representation of uncertainty than conventional fuzzy numbers and help preserve the information contained in linguistic judgments, IVIFNs are chosen as the conversion format. Specifically, the linguistic-to-IVIFN mapping is carried out using Table 4, proposed by Liu et al. [74]. After the individual evaluations are obtained, predefined decision-maker importance weights $\omega=(\omega_1,\omega_2,\ldots,\omega_s)^{T}$ are directly assigned to reflect each expert's relative relevance and experience. Then the IVIFN weighted geometric aggregation operator (Equation (1)) is applied to aggregate the evaluation values for all candidate design concepts under each criterion, yielding the initial group decision matrix shown in Equation (2).
$$\mathrm{IVIFWG}\left(\tilde{\alpha}_{1},\tilde{\alpha}_{2},\ldots,\tilde{\alpha}_{j},\ldots,\tilde{\alpha}_{n}\right)=\bigotimes_{j=1}^{n}\tilde{\alpha}_{j}^{\omega_{j}}=\left(\left[\prod_{j=1}^{n}\underline{\mu}_{j}^{\,\omega_{j}},\;\prod_{j=1}^{n}\overline{\mu}_{j}^{\,\omega_{j}}\right],\left[1-\prod_{j=1}^{n}\left(1-\underline{\nu}_{j}\right)^{\omega_{j}},\;1-\prod_{j=1}^{n}\left(1-\overline{\nu}_{j}\right)^{\omega_{j}}\right]\right)$$
$$\tilde{P}=\left(\tilde{p}_{ij}\right)_{m\times n}=\begin{bmatrix}\tilde{p}_{11}&\tilde{p}_{12}&\cdots&\tilde{p}_{1n}\\\tilde{p}_{21}&\tilde{p}_{22}&\cdots&\tilde{p}_{2n}\\\vdots&\vdots&\ddots&\vdots\\\tilde{p}_{m1}&\tilde{p}_{m2}&\cdots&\tilde{p}_{mn}\end{bmatrix}$$
where $\tilde{p}_{ij}=\left(\left[\underline{\mu}_{ij},\overline{\mu}_{ij}\right],\left[\underline{\nu}_{ij},\overline{\nu}_{ij}\right]\right)$ denotes the evaluation value of the i-th (i = 1, 2, …, m) design concept under the j-th (j = 1, 2, …, n) criterion.
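As a minimal illustration of Equation (1), the aggregation of one (concept, criterion) cell across decision-makers can be sketched in Python. The function name `ivifwg` and the example ratings are hypothetical; the sketch only assumes the IVIFN format $([\underline{\mu},\overline{\mu}],[\underline{\nu},\overline{\nu}])$ defined above, not the paper's actual data.

```python
# Hypothetical sketch of the IVIFN weighted geometric (IVIFWG) aggregation
# in Equation (1). Each rating is ((mu_lo, mu_hi), (nu_lo, nu_hi)).

def ivifwg(ratings, weights):
    """Aggregate one (concept, criterion) cell across decision-makers.

    ratings: list of ((mu_lo, mu_hi), (nu_lo, nu_hi)) IVIFNs, one per expert.
    weights: expert importance weights summing to 1.
    """
    mu_lo = mu_hi = 1.0
    one_minus_nu_lo = one_minus_nu_hi = 1.0
    for ((ml, mh), (nl, nh)), w in zip(ratings, weights):
        mu_lo *= ml ** w                    # product of lower memberships
        mu_hi *= mh ** w                    # product of upper memberships
        one_minus_nu_lo *= (1.0 - nl) ** w  # product of (1 - lower non-membership)
        one_minus_nu_hi *= (1.0 - nh) ** w
    return ((mu_lo, mu_hi), (1.0 - one_minus_nu_lo, 1.0 - one_minus_nu_hi))
```

A useful sanity check is idempotence: aggregating identical ratings with weights that sum to 1 returns the same IVIFN.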
Next, CRITIC is employed to determine objective criterion weights, and TOPSIS is used to rank the candidate design concepts. CRITIC computes weights from the evaluation data by jointly considering each criterion's variability (standard deviation) and its redundancy with other criteria (inter-criterion correlations), thereby reflecting the information contribution of each criterion and reducing reliance on subjective weighting [75]. TOPSIS then ranks concepts according to their relative closeness to the positive and negative ideal solutions, offering an intuitive and efficient distance-based decision rule that remains discriminative even when concept performance is similar [76]. Specifically, after obtaining the initial group decision matrix, CRITIC is used to calculate the criterion weights (Equation (3)), which are applied to form the weighted normalized matrix. TOPSIS closeness coefficients are then computed (Equation (4)), and concepts are ranked in descending order of $C_i$; the top-ranked concept is selected for subsequent 3D modeling.
$$\omega_{j}=\frac{E_{j}}{\sum_{j=1}^{n}E_{j}}$$
where $E_j$ represents the information content embedded in the j-th criterion. The larger the value of $E_j$, the greater the amount of information contained in the criterion, indicating its higher importance in the evaluation process and, consequently, a larger weight.
$$C_{i}=\frac{D_{i}^{-}}{D_{i}^{-}+D_{i}^{+}}$$
where $D_i^-$ represents the distance between the i-th design concept and the negative ideal solution, while $D_i^+$ represents the distance between the i-th design concept and the positive ideal solution. A larger value of $C_i$ indicates that the i-th design concept performs better overall.
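The CRITIC weighting (Equation (3)) and TOPSIS closeness (Equation (4)) steps follow a standard computational pattern, sketched below on a crisp score matrix. The paper applies them to IVIFN-valued data, so this is a simplified stand-in with hypothetical scores, not the exact interval-valued variant.

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights (Equation (3)): information content E_j
    combines each criterion's variability with its non-redundancy."""
    rng = X.max(axis=0) - X.min(axis=0)
    Z = (X - X.min(axis=0)) / (rng + 1e-12)   # min-max normalization
    sigma = Z.std(axis=0, ddof=1)             # variability per criterion
    R = np.corrcoef(Z, rowvar=False)          # inter-criterion correlation
    E = sigma * (1.0 - R).sum(axis=0)         # information content E_j
    return E / E.sum()

def topsis_closeness(X, w, benefit):
    """TOPSIS closeness coefficients (Equation (4)); `benefit` marks
    positive criteria, while negative criteria use the reversed ideal."""
    V = (X / np.linalg.norm(X, axis=0)) * w   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to positive ideal
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to negative ideal
    return d_neg / (d_neg + d_pos)
```

Sorting the rows of `X` in descending order of the returned closeness values reproduces the selection rule used in this stage.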
Furthermore, to test the robustness of the co-creation concept ranking, this paper derives each expert's individual ranking of the candidate concepts using the same computational logic and conducts a correlation analysis of consistency across pairwise expert rankings and expert–group rankings. This is supplemented with a Top-k overlap indicator for cross-validation, providing evidence for the evaluability and verifiability of the stage-wise artifacts under the GID-HGCC framework.
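These consistency diagnostics can be sketched as follows. The rank vectors are hypothetical placeholders rather than the paper's results, and `kendall_tau` is the plain tie-free form of Kendall's tau.

```python
# Sketch of the preference-consistency diagnostics: pairwise Kendall's tau
# between rank vectors plus a Top-k overlap rate. Rank vectors give each
# concept's position (1 = best) and are purely illustrative.

def kendall_tau(a, b):
    """Kendall's tau for two rank vectors without ties."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in both rankings
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

def topk_overlap(rank_a, rank_b, k=5):
    """Fraction of concepts shared by the two rankings' top-k sets."""
    top_a = {i for i, r in enumerate(rank_a) if r <= k}
    top_b = {i for i, r in enumerate(rank_b) if r <= k}
    return len(top_a & top_b) / k

# Hypothetical ranks of concepts A1..A10 for one expert vs. the group.
expert = [9, 7, 8, 6, 2, 4, 10, 5, 1, 3]
group = [8, 7, 9, 6, 2, 4, 10, 5, 1, 3]
tau = kendall_tau(expert, group)       # near 1 for near-identical rankings
overlap = topk_overlap(expert, group)  # 1.0 here: identical top-5 sets
```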

3.5. 3D Modeling

In the 3D modeling stage, the best design concept (I9), originating from the concept evaluation stage, is imported into an image-to-3D generator to obtain an initial mesh-based 3D proxy model. The model is first checked against the concept's key visual constraints (e.g., overall symmetry/near-symmetry, function-driven local intentional asymmetry, and continuity of major contours). If deviations are observed, regeneration is performed with targeted descriptive adjustments; otherwise, the model is confirmed and a suitable mesh style is selected for downstream processing. After an acceptable proxy is obtained, designers conduct lightweight refinement and verification using conventional 3D tools, focusing on geometry quality and manufacturability-related issues that are typically implicit in generative meshes. The output is not treated as a finished engineering model but as an explicit, traceable intermediate deliverable: a 3D proxy model accompanied by a structured engineering-semantics issue list (O4). This checklist provides actionable modification targets and prioritization cues for subsequent engineering refinement, helping preserve constraint intent and reduce semantic drift in the transition from concept to engineering implementation.

4. Experimental Results and Analysis

This section demonstrates the application of the proposed GID-HGCC framework through a case study.

4.1. Problem Description

In today’s fast-paced lifestyle, prolonged desk work and extensive use of electronic devices have led to a significant rise in cervical spine disorders, with a noticeable trend toward younger age groups. These issues not only cause pain and restricted mobility but can also result in dizziness, upper limb numbness, and other symptoms, severely affecting both quality of life and work efficiency. In response, cervical spine rehabilitation devices have emerged, offering continuous support for rehabilitation, strengthening neck muscles, improving cervical curvature, alleviating symptoms, and promoting recovery. Among these, the portable passive cervical spine rehabilitation training device has gained popularity due to its ability to overcome time and space limitations, making it particularly suitable for modern, fast-paced living. However, as the market for these products grows and categories rapidly evolve, designers face increasing cognitive load and decision-making complexity during the reference and adaptation processes. Therefore, this section focuses on the portable passive cervical spine rehabilitation training device as the design object, applying the proposed GID-HGCC framework throughout the product development process. The application process will be illustrated, providing practical guidance for applying this framework in design practice.

4.2. Implementation Process

4.2.1. Requirements Confirmation in the Case Study

After determining the design objectives, this study draws on process materials archived from the research team’s prior graduation-design teaching (market reference images, 3D design documents, and process sketches). It further supplements these sources with evidence collected from publicly accessible channels—namely e-commerce user reviews, design standards, industry norms, and patents—via public web scraping and manual retrieval. On this basis, the research team compiled a design parameters document. Together with the other materials, it serves as input to GPT–5.1 Thinking for requirement summarization and constraint specification.
To ensure compliance and reduce potential infringement risks, we adopted tiered governance and process control based on data sensitivity and rights attributes. (1) The design-parameter document was synthesized by the research team through systematic analysis of publicly available information. (2) User reviews were collected from publicly accessible pages (e.g., JD.com and Taobao) in accordance with platform rules; only usage-experience content relevant to this study was extracted, and no personally identifiable information was collected or processed. Any potentially traceable cues in review text were de-identified while preserving the original meaning. (3) Competitor images and 3D files originated from archived process materials under the graduation-design teaching management framework and were used solely for design-feature comparison and constraint refinement. (4) Process sketches were interim artifacts produced by students based on reference materials and conceptual derivations. (5) Design standards, industry norms, and patents were obtained through legitimate channels and cited in a standardized manner; they were used only to construct and check feature-level constraints. All original and teaching/research materials are not released externally as reusable data, and their use is restricted to the research team’s teaching and research activities.
Next, the design positioning and verified multi-source data were consolidated into a single PDF and fed into GPT–5.1 Thinking. The design parameters document summarizes the rehabilitation device’s structure, materials and manufacturing processes, human–product interaction, performance parameters, aesthetic requirements, and usage scenarios (Table 5), where paired features (e.g., curved wrapping components and a curvature-matched mandible support) indicate overall symmetry or near-symmetry intent, whereas side-placement requirements (e.g., a side-integrated air pump and hidden ports) indicate intentional local asymmetry driven by usability and interface layout. The user review section compiles cross-brand, real-world feedback on cervical spine rehabilitation devices (Table 6). Table 7 provides reference images, process sketches capturing style and technical cues, and 3D models of selected products. Table 8 lists relevant design standards, and Table 9 summarizes industry specifications and patent documents. Figure 4 shows how the PDF input is transformed into structured prompt text by GPT–5.1 Thinking. In practice, we made only minor revisions to the generated prompt text using simple presentation rules (e.g., no environment, dual-view display, and no text in the image), and iteratively produced a revised structured prompt for subsequent candidate generation and basic constraint alignment in the concept generation stage.

4.2.2. Concept Generation in the Case Study

This section adopts Midjourney as the text-to-image generation tool. First, the constraint-aware structured prompt derived from the requirements confirmation stage is entered into Midjourney to generate an initial concept for the portable passive cervical spine rehabilitation device (Figure 5). Note that the screenshot in Figure 5 was captured from a Chinese-language software interface; any occasional non-English UI text is incidental and does not affect scientific understanding or the reported results. The generated concept reflects the intended material and color cues, human–product interaction, and wearing scenarios under the specified visual constraints. The initial output is then refined through a lightweight human–GenAI iteration to obtain the finalized rendering of concept A1 (Figure 6; see the interface note in Figure 5). By repeating the same prompt–review–patch–regenerate cycle, ten candidate concepts that are largely aligned with the predefined requirements were produced for subsequent concept evaluation (Table 10).

4.2.3. Concept Evaluation in the Case Study

Based on the concept evaluation process established in Section 3.4, this study quantitatively evaluated 10 candidate design concepts to identify the optimal solution for subsequent 3D modeling and engineering refinement. A decision-making panel of five experts from industry and academia was formed, representing five domains: industrial design, product design, medical device development, ergonomics, and generative design. All experts had more than five years of relevant research or project experience and hands-on experience in using GenAI tools in practical settings.
During the evaluation, each expert independently assessed the 10 concepts according to the proposed criteria system, using the linguistic terms defined in Table 4. The linguistic ratings were then converted into IVIFNs, following the predefined linguistic-to-interval mapping rules, to explicitly capture evaluation uncertainty. Taking Decision Maker L1 as an example, Table 11 presents the IVIFNs evaluation matrix derived from their original linguistic assessments, which served as the input for subsequent group information aggregation and concept ranking.
Considering each expert’s professional relevance and practical experience in medical equipment industrial design, this study assigns differentiated importance weights to the five decision-makers, represented by $\omega_L = (0.3, 0.2, 0.25, 0.15, 0.1)^{T}$. Using these weights, the five individual initial IVIFN decision matrices are aggregated via the weighted geometric aggregation operator (Equation (1)), yielding the experts’ initial group decision matrix reported in Table 12. This matrix is subsequently used as the input for calculating the decision-criterion weights.
After obtaining the initial decision matrix, the IVIFNs–CRITIC method is applied to determine the weights of each criterion, represented as $\omega_j = (0.173, 0.127, 0.202, 0.133, 0.179, 0.186)^{T}$. Additionally, based on the IVIFNs–TOPSIS method, the ranking of the ten alternative design concepts is obtained, as shown in Table 13. From the table, it is evident that design concept A9 is the optimal design concept.
To further assess the robustness of the co-creation concept rankings, an expert preference consistency analysis was conducted for the 10 candidate concepts. First, individual rankings were derived for each expert using TOPSIS, based on the experts’ initial IVIFN decision matrices and the criterion weights (see Table 14). Next, Kendall’s correlation coefficients were computed in SPSSAU (Version 26.0) for expert–expert pairs and for each expert relative to the aggregated group ranking, and the results were visualized as a correlation heatmap (Figure 7), where LG denotes the expert-group ranking. The pairwise correlations among experts are uniformly positive and largely fall within 0.51–0.82, suggesting a moderate-to-high level of agreement across experts with different disciplinary backgrounds. Correlations between the group ranking and individual experts are higher, ranging from 0.73 to 0.91, indicating that the aggregated ranking aligns closely with individual judgments and is broadly representative of the panel. In addition, the Top-5 check in Table 14 shows a stable group Top-5 set (A9, A5, A10, A6, and A8). The Top-5 overlap between the group and each expert ranges from 0.80 to 1.00, with L1, L3, and L5 matching the group set exactly. Moreover, A9, A5, and A10 appear in the Top-5 for all experts as well as the group ranking. Taken together, these results indicate stable consistency in expert preferences, providing a credible basis for selecting the optimal concept and supporting subsequent engineering refinement.

4.2.4. 3D Modeling in the Case Study

After determining the optimal design concept, this study employed Rodin3D’s model generation workflow to convert the selected concept into an editable 3D digital model. Specifically, the optimal concept (A9) was imported into the HYPER3D platform and processed using the Generate command to produce an initial mesh. The generated output was then inspected for visual consistency with the intended design constraints, including (1) overall symmetry or near-symmetry, (2) localized intentional asymmetry driven by functional requirements, and (3) continuity and curvature consistency of key contour lines. If the model satisfied these requirements, Confirm was selected and an appropriate mesh style—Smart Low-Poly, Triangular Mesh, or Quad Mesh—was chosen to match subsequent editing and export needs (Figure 8). Note that the screenshot in Figure 8 was captured from a Chinese-language software interface; any occasional non-English UI text is incidental and does not affect scientific understanding or the reported results. Otherwise, additional descriptive terms were entered into the text box, relevant tags and a mesh style were adjusted, and Regenerate was iterated until the overall form and visual semantics aligned with A9.
Once a satisfactory initial model was obtained, refinement was performed in the editor as shown in Figure 9 (see the interface note in Figure 8). First, the Brush tool was applied with controlled radius and intensity to optimize the main posterior neck support surface, lateral support regions, and edge transition zones, improving surface continuity and contour fidelity. Next, the Crease tool was used to introduce ordered anti-slip textures and other functional micro-details on the inner contact surface to enhance fit and frictional stability. The core posterior-neck curve was then globally shaped using Smoothing/Flatten, with repeated cross-checks between key viewpoints (e.g., side and top views) to ensure smooth 3D surface transitions. After local modifications, Smooth was applied for global mesh integration and surface polishing, reducing minor discontinuities and visual artifacts introduced during detail modeling. Finally, global alignment was verified using tools such as Move, by comparing the model against reference renderings across six standard orthographic views and selected perspective views to calibrate overall proportions, component relationships, and spatial posture. The above tools are highlighted in Figure 8 with red frames.
After finalizing the geometry, the material module was used to generate textures by entering the prompt “white neck support brace.” If the generated texture did not meet expectations, the redo function was used to iterate until an acceptable result was obtained and confirmed. The model was then exported by setting the output parameters (Mass = 0.50 kg, Height = 9.5 cm) and downloading the model in OBJ or STL format, completing the 3D model generation and output for this study (see Figure 10; see the interface note in Figure 8).
Finally, the OBJ mesh exported from Rodin was imported into Rhino 7 (see Figure 11), and a manual audit against geometric quality and manufacturability criteria was performed, resulting in the issue list summarized in Table 15. Note that the screenshot in Figure 11 was captured from a Chinese-language software interface; any occasional non-English UI text is incidental and does not affect scientific understanding or the reported results. This list serves as an explicit intermediate product in the concept-to-engineering transition phase, making engineering-semantics information that is usually implicit or missing in generative meshes explicit, structured, and traceable. It provides actionable modification targets and prioritization criteria for subsequent engineering refinement, helps verify the feasibility of the conceptual design under manufacturing constraints, and improves the reproducibility and practical engineering value of the process.

4.3. Result Analysis

This study validated the proposed GID-HGCC framework through a design case of a portable passive cervical spine rehabilitation training device. The results indicate that GID-HGCC operationalizes human–GenAI co-creation as a constraint-driven and auditable workflow, in which structured prompts encode requirement-derived constraints for controlled generation, candidate concepts provide a comparable design space, evaluation outputs support transparent selection, and the checklist derived from the selected concept’s 3D model functions as an engineering quality gate. Together, these artifacts make decisions explicit, traceable, and transferable, rather than leaving them implicit in designers’ tacit judgment. During the requirements confirmation phase, the LLM consolidated multi-source evidence (Table 5, Table 6, Table 7, Table 8 and Table 9) into an explicit requirement set and a structured prompt package (Figure 4), covering ergonomic factors, usage scenarios, and key visual features. This step made the requirement basis auditable and reusable for downstream generation and evaluation. In the concept generation phase, Midjourney was used to generate and iteratively refine concepts against the confirmed requirements, yielding ten candidates for evaluation (Table 10). Analysis reveals that concepts A1–A4 exhibit stylistic similarities and high resemblance to existing products; A5–A6 offer outstanding visual appeal but possess complex structures; A7–A8 feature simple structures and provide good posterior support, though wearing comfort may be inadequate; and A9–A10 achieve a better balance among structural simplicity, cervical support, and wearing comfort. During the concept evaluation phase, the ten concepts were assessed using the established criteria system and fuzzy evaluation procedure (Table 11, Table 12 and Table 13), and A9 was ultimately selected as the preferred concept.
A9 integrates a knob-based adjustment mechanism that supports controlled changes in angle and tightness, consistent with the intended “dynamic correction + static support” use logic. Preference consistency diagnostics (Kendall correlation heatmap and Top-5 overlap; Figure 7 and Table 14) indicate stable agreement among experts and strong alignment between individual and group rankings, supporting the robustness of the selected concept. In the 3D modeling phase, Rodin3D converted A9 into an editable 3D mesh (Figure 8, Figure 9 and Figure 10). The exported OBJ was then reviewed in Rhino, and a structured issue log (Table 15) was compiled to externalize manufacturability-relevant gaps (e.g., part boundaries, thickness, interfaces, surface continuity), providing prioritized targets for subsequent engineering refinement.
Overall, the case demonstrates that GID-HGCC does not merely generate outputs; it turns the co-creation process into a controlled sequence of artifacts and checks, improving the transparency and reliability of concept selection and facilitating a disciplined handoff from concept intent to engineering detailing.

5. Discussion

5.1. Framework Comparison

Table 16 presents a comparative analysis of the proposed framework and the traditional ID framework, focusing on four key stages of ID.
As shown in Table 16, the GID-HGCC framework proposed in this paper, built on the traditional four-stage industrial design process, uses a phased human–GenAI co-creation approach to solidify the key inputs and outputs of the “research–generation–evaluation–modeling” stages into a traceable workflow. This mechanism mitigates the constraint decay and semantic drift caused by the implicit expression of constraints in traditional processes. Specifically, in the requirements confirmation stage, traditional methods often embed constraints in dispersed evidence materials and experiential judgments; this paper uses ChatGPT (GPT-5.1 Thinking) to structure multi-source information after human review, forming a semantic requirements text that can be directly used for subsequent generation and evaluation and achieving constraint pre-positioning. In the concept generation stage, traditional sketching is easily limited by expression accuracy and solution coverage; this paper, driven by structured semantics, combines text-to-image/image-to-image generation with iterative screening of concept renderings, improving the breadth of solution exploration and reducing information loss in the “requirements–concept” conversion. In the concept evaluation stage, both methods rely primarily on expert multi-criteria decision-making, but this framework uses concept renderings as a unified review input, improving visual communication consistency and rating comparability. In the 3D modeling stage, traditional methods proceed directly from sketches to engineering modeling, with engineering semantics often added in later stages, easily leading to rework; this paper converts the best concept into a 3D proxy model and compiles a list of engineering-semantics issues, providing clear guidance for subsequent targeted refinement.
In summary, GID-HGCC improves the controllability and traceability of the concept-to-engineering connection through a chain mechanism of “requirements semantic structuring–generation-driven constraint application–evaluation object consistency–explicit engineering problem identification.”

5.2. Comparison of Design Concepts Ranking Methods

To verify the feasibility and effectiveness of the concept ranking strategy in the GID-HGCC framework, this study mapped the nine-level linguistic variables to 1–9 according to preset rules. Four methods (TOPSIS, CRITIC + TOPSIS, IVIFNs + TOPSIS, and IVIFNs + CRITIC + TOPSIS) were used to rank the 10 candidate concepts, with the results shown in Figure 12. The rankings of TOPSIS and CRITIC + TOPSIS are completely consistent, indicating that the introduction of CRITIC objective weights did not change the overall ranking under this case’s data structure and demonstrating the robustness of the ranking strategy to weight settings. At the same time, after introducing IVIFNs (IVIFNs + TOPSIS and IVIFNs + CRITIC + TOPSIS), the rankings remained largely consistent, with differences mainly concentrated in the top four ranked concepts. For example, A10 changed from 1st to 3rd, while A9 changed from 2nd to 1st. This indicates that introducing IVIFNs allows the fuzziness and hesitation in linguistic evaluation to be explicitly expressed as interval-valued intuitionistic fuzzy information, thus alleviating the problem of “difficulty in distinguishing similar scores” and improving the sensitivity and interpretability of the ranking results with respect to expert uncertainty. In summary, the comparison results in Figure 12 verify, from the two aspects of ranking stability and uncertainty handling, the feasibility and effectiveness of the concept ranking strategy in co-creation concept evaluation.

6. Conclusions

Generative AI (GenAI) is increasingly reshaping traditional industrial design, accelerating early-stage ideation and supporting rapid multimodal prototype development. Its key impact is not “replacing” designers but rather driving systemic changes in how needs are expressed, solutions are explored, and decisions are justified. This necessitates a human-centered approach to the design process, establishing controllable and traceable collaborative mechanisms for AI outputs. To this end, this paper proposes the GID-HGCC framework, which organizes and links the four core stages of industrial design—requirements confirmation, concept generation, concept evaluation, and 3D modeling—defining the roles and responsibilities of humans and GenAI, as well as their input and output relationships at each stage. Within this framework, a traceable artifact flow is formed throughout the entire process: structured prompts built on requirements evidence, candidate concepts generated by prompt-driven generation, structured evaluation outputs, and a list of engineering semantic issues obtained from reviewing the selected concept’s 3D proxy model. By explicitly defining and transmitting key constraints across stages, this artifact flow provides actionable modification targets and prioritization criteria for subsequent engineering refinement. A case study on a portable passive cervical spine rehabilitation training device demonstrates that the workflow can be executed end-to-end and supports a transparent concept selection and concept-to-engineering handover. Overall, GID-HGCC offers an operational, process-oriented way to organize human–GenAI co-creation and provides a practical reference for similar product contexts.
This study still has the following limitations. First, it primarily validates feasibility and does not include controlled comparisons against a purely human-led traditional workflow, as key factors (e.g., input evidence, iteration cycles, and team conditions) are difficult to hold constant in practice. Second, the human–GenAI iteration strategy remains coarse-grained; the respective contributions of humans and GenAI are not yet quantified, and a reusable, parameterizable iteration protocol has not been established. Third, evaluation relies mainly on internal expert multi-criteria judgments, without broader evidence from target users or downstream engineering/manufacturing stakeholders, limiting conclusions about user experience and manufacturing performance. Future research will address these issues through multi-case studies with controlled baselines, a structured iteration strategy library (including standardized prompt/constraint update rules and ablation-based quantification of intervention effects), and expanded evaluations involving users and engineering/manufacturing metrics to clarify practical benefits and boundary conditions.

Author Contributions

Conceptualization, formal analysis, investigation, methodology, funding acquisition, supervision, writing—original draft, writing—review and editing, C.C.; methodology, funding acquisition, data curation, validation, writing—original draft, writing—review and editing, F.C.; conceptualization, writing—review and editing, supervision, resources, validation, B.Z.; methodology, visualization, data curation, writing—original draft, R.J.; conceptualization, methodology, writing—review and editing, C.D.; formal analysis, validation, investigation, writing—review and editing, Z.S.; formal analysis, investigation, writing—review and editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Shaanxi Provincial Social Science Foundation Project (No. 2024J053), the Special Research Project of Shaanxi Provincial Department of Education (No. 24JK0123), the Internship Forest Farm of Xinjiang Agricultural University (No. H202506176), and the Shaanxi Provincial Natural Science Foundation Project (No. 2025JC-YBQN-787).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nault, E.; Waibel, C.; Carmeliet, J.; Andersen, M. Development and test application of the UrbanSOLve decision-support prototype for early-stage neighborhood design. Build. Environ. 2018, 137, 58–72. [Google Scholar] [CrossRef]
  2. Storey, V.C.; Pastor, O.; Guizzardi, G.; Liddle, S.W.; Maaß, W.; Parsons, J.; Ralyté, J.; Santos, M.Y. Large language models for conceptual modeling: Assessment and application potential. Data Knowl. Eng. 2025, 160, 102480. [Google Scholar] [CrossRef]
  3. Mustapha, K.B. A survey of emerging applications of large language models for problems in mechanics, product design, and manufacturing. Adv. Eng. Inform. 2025, 64, 103066. [Google Scholar] [CrossRef]
  4. Abrusci, L.; Dabaghi, K.; D’Urso, S.; Sciarrone, F. AI4Design: A generative AI-based system to improve creativity in design-A field evaluation. Comput. Educ. Artif. Intell. 2025, 8, 100401. [Google Scholar] [CrossRef]
  5. Wang, X.Z.; Jiang, Z.M.J.; Xiong, Y.; Liu, A. Human-LLM collaboration in generative design for customization. J. Manuf. Syst. 2025, 80, 425–435. [Google Scholar] [CrossRef]
  6. Kanervisto, A.; Bignell, D.; Wen, L.Y.; Grayson, M.; Georgescu, R.; Macua, S.V.; Tan, S.Z.; Rashid, T.; Pearce, T.; Cao, Y.H.; et al. World and Human Action Models towards gameplay ideation. Nature 2025, 638, 656–663. [Google Scholar] [CrossRef]
  7. Wen, W.; Huang, Y.B.; Zhao, X.X.; Zhang, P.Y.; Liu, K.; Shi, G.W. EdgeAIGC: Model caching and resource allocation for Edge Artificial Intelligence Generated Content. Digit. Commun. Netw. 2025, 11, 1941–1950. [Google Scholar] [CrossRef]
  8. Shi, Y.; Gao, T.; Jiao, X.H.; Cao, N. Understanding design collaboration between designers and artificial intelligence: A systematic literature review. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–35. [Google Scholar] [CrossRef]
  9. Blandino, G.; Montagna, F.; Cantamessa, M.; Colombo, S. A comparative review on the role of stimuli in idea generation. Artif. Intell. Eng. Des. Anal. Manuf. 2023, 37, e19. [Google Scholar] [CrossRef]
  10. Boers, J.; Etty, T.; Baars, M.; Broekhoven, K.V. Exploring cognitive strategies in human-AI interaction: ChatGPT’s role in creative tasks. J. Creat. 2025, 35, 100095. [Google Scholar] [CrossRef]
  11. Wang, Y.; Zhang, J.S.; Shen, C.Y.; Yu, H.L.; Luo, S.J. Generative AI aids personalized product aesthetic generation and evaluation based on style themes. Adv. Eng. Inform. 2025, 68, 103756. [Google Scholar] [CrossRef]
  12. Wu, F.; Hsiao, S.W.; Lu, P. An AIGC-empowered methodology to product color matching design. Displays 2024, 81, 102623. [Google Scholar] [CrossRef]
  13. Wang, B.H.; Han, J.; Zhao, X.Y.; Yin, Y.; Chen, L.Q.; Childs, P. Creative combinational design through generative AI in different dimensional representations: An exploration. Des. Artif. Intell. 2025, 1, 100006. [Google Scholar] [CrossRef]
  14. Yu, W. AI as a co-creator and a design material: Transforming the design process. Des. Stud. 2025, 97, 101303. [Google Scholar] [CrossRef]
  15. Sreenivasan, A.; Suresh, M. Design thinking and artificial intelligence: A systematic literature review exploring synergies. Int. J. Innov. Stud. 2024, 8, 297–312. [Google Scholar] [CrossRef]
  16. Borg, K.; Sahadevan, V.; Singh, V.; Kotnik, T. Leveraging Generative Design for Industrial Layout Planning: SWOT Analysis Insights from a Practical Case of Papermill Layout Design. Adv. Eng. Inform. 2024, 60, 102375. [Google Scholar] [CrossRef]
  17. Deloitte Center for Integrated Research. Four Futures of Generative AI in the Enterprise: Scenario Planning for Strategic Resilience and Adaptability. 2024. Available online: https://www.deloitte.com/us/en/insights/topics/digital-transformation/generative-ai-and-the-future-enterprise.html (accessed on 1 October 2025).
  18. Markets and Markets. Generative AI Market by Software (Foundation Models, Model Enablement & Orchestration Tools, Gen AI SaaS), Modality (Text, Code, Video, Image, Multimodal), Application (Content Management, BI & Visualization, Search & Discovery)- Global Forecast to 2032. 2025. Available online: https://www.marketsandmarkets.com/Market-Reports/generative-ai-market-142870584.html (accessed on 10 October 2025).
  19. Li, H.; Xue, T.; Zhang, A.J.; Luo, X.X.; Kong, L.Q.; Huang, G.H. The application and impact of artificial intelligence technology in graphic design: A critical interpretive synthesis. Heliyon 2024, 10, e40037. [Google Scholar] [CrossRef] [PubMed]
  20. Shi, Y.; Shang, M.Y.; Qi, Z.Q. Intelligent layout generation based on deep generative models: A comprehensive survey. Inf. Fusion 2023, 100, 101940. [Google Scholar] [CrossRef]
  21. Oksanen, A.; Cvetkovic, A.; Akin, N.; Latikka, R.; Bergdahl, J.; Chen, Y.; Savela, N. Artificial intelligence in fine arts: A systematic review of empirical research. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100004. [Google Scholar] [CrossRef]
  22. Saadi, J.I.; Yang, M.C. Generative design: Reframing the role of the designer in early-stage design process. J. Mech. Des. 2023, 145, 041411. [Google Scholar] [CrossRef]
  23. Jiang, Z.M.J.; Wen, H.; Han, F.; Tang, Y.L.; Xiong, Y. Data-driven generative design for mass customization: A case study. Adv. Eng. Inform. 2022, 54, 101786. [Google Scholar] [CrossRef]
  24. Yang, H.; Li, R.; He, X.; Zhang, H.Z.; Wu, F.W.; Liu, J.J.; Guan, Z.Y. An intelligent customized design method for complex products under the influence of dynamic uncertainty. Adv. Eng. Inform. 2025, 66, 103480. [Google Scholar] [CrossRef]
  25. Pan, X.Y.; Zhuang, W.B.; Wen, S.J.; Yu, W.G.; Bao, J.S.; Li, X.Y. A context-aware KG-LLM collaborated conceptual design approach for personalized products: A case in lower limbs rehabilitation assistive devices. Adv. Eng. Inform. 2025, 66, 103422. [Google Scholar] [CrossRef]
  26. Fang, C.; Zhu, Y.J.; Fang, L.; Long, Y.H.; Lin, H.; Cong, Y.F.; Wang, S.J. Generative AI-enhanced human-AI collaborative conceptual design: A systematic literature review. Des. Stud. 2025, 97, 101300. [Google Scholar] [CrossRef]
  27. Zhou, Y.H.; Chen, C.H. Examining the Impact of Large Language Models on Design: Functions, Strengths, Limitations, and Roles. Des. Artif. Intell. 2025, 1, 100017. [Google Scholar] [CrossRef]
  28. Wang, Z.; Li, J.S.; Pan, H.R.; Wu, J.Y.; Yan, W.A. Research on multimodal generative design of product appearance based on emotional and functional constraints. Adv. Eng. Inform. 2025, 65, 103106. [Google Scholar] [CrossRef]
  29. Kim, J.; Maher, M.L. The effect of AI-based inspiration on human design ideation. Int. J. Des. Creat. Innov. 2023, 11, 81–98. [Google Scholar] [CrossRef]
  30. Li, Z.N.; Liu, Z.Y.; Sa, G.D.; Sun, J.C.; Hou, M.J.; Tan, J.R.; Sun, L.; Wei, J. Knowledge-enhanced large language models for ideation to implementation: A new paradigm in product design. Appl. Soft Comput. 2025, 176, 113147. [Google Scholar] [CrossRef]
  31. Lu, P.; Hsiao, S.W.; Tang, J.; Wu, F. A generative-AI-based design methodology for car frontal forms design. Adv. Eng. Inform. 2024, 62, 102835. [Google Scholar] [CrossRef]
  32. Liu, Y.H.; Yang, M.L.; Jiang, P.Y. CGAN-driven intelligent generative design of vehicle exterior shape. Expert Syst. Appl. 2025, 274, 127066. [Google Scholar] [CrossRef]
  33. Kretzschmar, M.; Dammann, M.P.; Schwoch, S.; Braun, F.; Saske, B.; Paetzold-Byhain, K. Evaluating the Current Role of Generative AI in Engineering Development and Design-A Systematic Review. In DS 130: Proceedings of NordDesign 2024, Reykjavik, Iceland, 12–14 August 2024; Design Society: Glasgow, UK, 2024; pp. 21–30. [Google Scholar] [CrossRef]
  34. Akhtar, P.; Ghouri, A.M.; Ashraf, A.; Lim, J.J.; Khan, N.R.; Ma, S. Smart product platforming powered by AI and generative AI: Personalization for the circular economy. Int. J. Prod. Econ. 2024, 273, 109283. [Google Scholar] [CrossRef]
  35. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023, 613, 612. [CrossRef]
  36. Mariani, M.; Dwivedi, Y.K. Generative artificial intelligence in innovation management: A preview of future research developments. J. Bus. Res. 2024, 175, 114542. [Google Scholar] [CrossRef]
  37. Ghali, M.K.; Farrag, A.; Won, D.; Jin, Y. Enhancing knowledge retrieval with in-context learning and semantic search through generative AI. Knowl.-Based Syst. 2025, 311, 113047. [Google Scholar] [CrossRef]
  38. Zhang, B.; Ma, H.; Ding, J.; Wang, J.; Xu, B.; Lin, H.F. Distilling implicit multimodal knowledge into large language models for zero-resource dialogue generation. Inf. Fusion 2025, 118, 102985. [Google Scholar] [CrossRef]
  39. Karagoz, A.T.; Alqusair, O.; Liu, C.; Li, J. Advances in conceptual process design: From conventional strategies to AI-assisted methods. Chin. J. Chem. Eng. 2025, 84, 60–76. [Google Scholar] [CrossRef]
  40. Bouschery, S.G.; Blazevic, V.; Piller, F.T. Augmenting human innovation teams with artificial intelligence: Exploring transformer-based language models. J. Prod. Innov. Manag. 2023, 40, 139–153. [Google Scholar] [CrossRef]
  41. Karadag, D.; Ozar, B. A new frontier in design studio: AI and human collaboration in conceptual design. Front. Archit. Res. 2025, 14, 1536–1550. [Google Scholar] [CrossRef]
  42. Alcaide-Marzal, J.; Diego-Mas, J.A. Computers as co-creative assistants. A comparative study on the use of text-to-image AI models for computer aided conceptual design. Comput. Ind. 2025, 164, 104168. [Google Scholar] [CrossRef]
  43. Wang, B.H.; Zhao, X.Y.; Zuo, H.Y.; Song, Y.X.; Han, J.; Childs, P.; Chen, L.Q. From analogy to innovation: A creative conceptual design approach leveraging large language models. Adv. Eng. Inform. 2025, 67, 103427. [Google Scholar] [CrossRef]
  44. Chen, L.Q.; Cai, Z.B.; Jiang, Z.J.; Luo, J.X.; Sun, L.Y.; Childs, P.; Zuo, H.Y. AskNatureNet: A divergent thinking tool based on bio-inspired design knowledge. Adv. Eng. Inform. 2024, 62, 102593. [Google Scholar] [CrossRef]
  45. Wu, Y.; Ma, L.S.; Yuan, X.F.; Li, Q.N. Human-machine hybrid intelligence for the generation of car frontal forms. Adv. Eng. Inform. 2023, 55, 101906. [Google Scholar] [CrossRef]
  46. Lee, C.K.M.; Liang, J.Y.; Yung, K.L.; Keung, K.L. Generating TRIZ-inspired guidelines for eco-design using Generative Artificial Intelligence. Adv. Eng. Inform. 2024, 62, 102846. [Google Scholar] [CrossRef]
  47. Liu, H.; Xu, Y.; Chen, F. Sketch2Photo: Synthesizing photo-realistic images from sketches via global contexts. Eng. Appl. Artif. Intell. 2023, 117, 105608. [Google Scholar] [CrossRef]
  48. Edwards, K.M.; Man, B.; Ahmed, F. Sketch2Prototype: Rapid conceptual design exploration and prototyping with generative AI. Proc. Des. Soc. 2024, 4, 1989–1998. [Google Scholar] [CrossRef]
  49. Cai, A.; Rick, S.R.; Heyman, J.L.; Zhang, Y.X.; Filipowicz, A.; Hong, M.; Klenk, M.; Malone, T. Designaid: Using generative ai and semantic diversity for design inspiration. In Proceedings of the ACM Collective Intelligence Conference CI’23; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–11. [Google Scholar] [CrossRef]
  50. Dai, Y.; Li, Y.; Liu, L.J. New product design with automatic scheme generation. Sens. Imaging 2019, 20, 29. [Google Scholar] [CrossRef]
  51. Zhang, L.; Li, Z.Q.; Zheng, Y. An interactive generative design technology for appearance diversity-Taking mouse design as an example. Adv. Eng. Inform. 2024, 59, 102263. [Google Scholar] [CrossRef]
  52. Yuan, C.X.; Marion, T.; Moghaddam, M. Leveraging end-user data for enhanced design concept evaluation: A multimodal deep regression model. J. Mech. Des. 2022, 144, 021403. [Google Scholar] [CrossRef]
  53. Corvello, V. Generative AI and the future of innovation management: A human centered perspective and an agenda for future research. J. Open Innov. Technol. Mark. Complex. 2025, 11, 100456. [Google Scholar] [CrossRef]
  54. Park, K.; Park, S.; Joung, J. Contextual Meaning-based Approach to Fine-grained Online Product Review Analysis for Product Design. IEEE Access 2023, 12, 4225–4238. [Google Scholar] [CrossRef]
  55. Tsumoto, R.; Yaji, K.; Nomaguchi, Y.; Fujita, K. Deep concept identification for generative design. Adv. Eng. Inform. 2025, 65, 103354. [Google Scholar] [CrossRef]
  56. Marzi, G.; Balzano, M. Artificial intelligence and the reconfiguration of NPD Teams: Adaptability and skill differentiation in sustainable product innovation. Technovation 2025, 145, 103254. [Google Scholar] [CrossRef]
  57. Dorri, M.; Hoseinpour, S.; Maghrebi, M. AI-driven enhancement of customer-centric design for improved satisfaction and decision-making. Autom. Constr. 2025, 175, 106220. [Google Scholar] [CrossRef]
  58. Fan, F.D.; Luo, C.J.; Gao, W.L.; Zhan, J.F. AIGCBench: Comprehensive evaluation of image-to-video content generated by AI. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100152. [Google Scholar] [CrossRef]
  59. Li, X.; Xie, C.; Sha, Z.H. A predictive and generative design approach for three-dimensional mesh shapes using target-embedding variational autoencoder. J. Mech. Des. 2022, 144, 114501. [Google Scholar] [CrossRef]
  60. Liang, Z.L.; Zhang, Y.F.; Wang, Y.J.; Li, W.H. Integrating large models with topology optimization for conceptual design realization. Adv. Eng. Inform. 2025, 67, 103524. [Google Scholar] [CrossRef]
  61. Zang, T.S.; Yang, M.L.; Liu, Y.H.; Jiang, P.Y. Text2shape: Intelligent computational design of car outer contour shapes based on improved conditional Wasserstein generative adversarial network. Adv. Eng. Inform. 2024, 62, 102892. [Google Scholar] [CrossRef]
  62. Park, D.; Park, J.; Kim, N. A 3D preform design method based on a generative artificial intelligence algorithm. J. Manuf. Process. 2025, 144, 190–208. [Google Scholar] [CrossRef]
  63. Jain, A.; Mildenhall, B.; Barron, J.T.; Abbeel, P.; Poole, B. Zero-shot text-guided object generation with dream fields. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Piscataway, NJ, USA, 2022; pp. 857–866. [Google Scholar] [CrossRef]
  64. Zhou, J.W.; Camba, J.D. The status, evolution, and future challenges of multimodal large language models (LLMs) in parametric CAD. Expert Syst. Appl. 2025, 282, 127520. [Google Scholar] [CrossRef]
  65. Jiang, S.; Li, W.F.; Qian, Y.P.; Zhang, Y.J.; Luo, J.X. AutoTRIZ: Automating engineering innovation with TRIZ and large language models. Adv. Eng. Inform. 2025, 65, 103312. [Google Scholar] [CrossRef]
  66. Gao, M.Y.; Li, C.; Petzold, F.; Tiong, R.L.K.; Yang, Y.W. Lifecycle framework for AI-driven parametric generative design in industrialized construction. Autom. Constr. 2025, 174, 106146. [Google Scholar] [CrossRef]
  67. Grabe, I.; González-Duque, M.; Risi, S.; Zhu, J.C. Towards a framework for human-AI interaction patterns in co-creative GAN applications. In Joint Proceedings of the ACM IUI Workshops; Semantic Scholar: Seattle, WA, USA, 2022; Available online: https://api.semanticscholar.org/CorpusID:248302085 (accessed on 4 August 2025).
  68. Jiang, T.T.; Sun, Z.M.; Fu, S.T.; Lv, Y. Human-AI interaction research agenda: A user-centered perspective. Data Inf. Manag. 2024, 8, 100078. [Google Scholar] [CrossRef]
  69. Vaccaro, M.; Almaatouq, A.; Malone, T. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nat. Hum. Behav. 2024, 8, 2293–2302. [Google Scholar] [CrossRef] [PubMed]
  70. Hao, X.Y.; Demir, E.; Eyers, D. Exploring collaborative decision-making: A quasi-experimental study of human and generative AI interaction. Technol. Soc. 2024, 78, 102662. [Google Scholar] [CrossRef]
  71. Davis, R.L.; Wambsganss, T.; Jiang, W.; Kim, K.G.; Käser, T.; Dillenbourg, P. Fashioning creative expertise with generative AI: Graphical interfaces for design space exploration better support ideation than text prompts. In Proceedings of the CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–26. [Google Scholar] [CrossRef]
  72. Ding, S.; Pan, X.; Hu, L.H.; Liu, L.Z. A new model for calculating human trust behavior during human-AI collaboration in multiple decision-making tasks: A Bayesian approach. Comput. Ind. Eng. 2025, 200, 110872. [Google Scholar] [CrossRef]
  73. Tiribelli, S.; Giovanola, B.; Pietrini, R.; Frontoni, E.; Paolanti, M. Embedding AI ethics into the design and use of computer vision technology for consumer′s behaviour understanding. Comput. Vis. Image Underst. 2024, 248, 104142. [Google Scholar] [CrossRef]
  74. Liu, H.C.; You, J.X.; Duan, C.Y. An integrated approach for failure mode and effect analysis under interval-valued intuitionistic fuzzy environment. Int. J. Prod. Econ. 2019, 207, 163–172. [Google Scholar] [CrossRef]
  75. Verma, R.; Álvarez-Miranda, E. Multiple-attribute group decision-making approach using power aggregation operators with CRITIC-WASPAS method under 2-dimensional linguistic intuitionistic fuzzy framework. Appl. Soft Comput. 2024, 157, 111466. [Google Scholar] [CrossRef]
  76. Li, K.; Chen, C.Y.; Zhang, Z.L. Mining online reviews for ranking products: A novel method based on multiple classifiers and interval-valued intuitionistic fuzzy TOPSIS. Appl. Soft Comput. 2023, 139, 110237. [Google Scholar] [CrossRef]
  77. YY/T 0726-2020; Instrumentation for Use in Association with Non-Active Surgical Implants—General Requirements. China Standard Press: Beijing, China, 2020.
  78. GB/T 16886.11-2011; Biological Evaluation of Medical Devices—Part 11: Tests for Systemic Toxicity. China Standard Press: Beijing, China, 2011.
  79. T/QGCML 1538-2023; Airbag Cervical Traction Fixator. China Association for Standardization (Group Standard): Beijing, China, 2023.
  80. YY/T 0697-2016; Electric Cervical and Lumbar Traction Therapy Device. China Standard Press: Beijing, China, 2016.
  81. DB50/T 1427-2023; Standard for the Provision of Assistive Product in Health Care and Medical Institution. Local Standard of Chongqing Municipality: Chongqing, China, 2023.
  82. CN202510483167.6; Bone Conduction Cervical Massage U-Shaped Pillow Based on Audio Vibration Analysis. China National Intellectual Property Administration (CNIPA): Beijing, China, 2025.
  83. CN202310933532.X; An Adjustable Cervical Massager. China National Intellectual Property Administration (CNIPA): Beijing, China, 2023.
  84. CN202310724004.3; A Cervical Massager and a Posture Correction Method Based on the Same. China National Intellectual Property Administration (CNIPA): Beijing, China, 2025.
  85. CN202010568014.9; A Novel Cervical Massager. China National Intellectual Property Administration (CNIPA): Beijing, China, 2021.
Figure 1. Application of GenAI in ID core stages.
Figure 2. The core benefits of human and GenAI. The orange- and blue-framed items indicate the human- and GenAI-related capabilities, respectively; the dotted circle is used only to visually highlight their co-creation relationship and carries no additional meaning.
Figure 3. The overall flowchart of the proposed GID-HGCC framework.
Figure 4. Key information generation for design requirements based on multi-source information: (a) Initial generation process; (b) Iterative process.
Figure 5. Preliminary design concept generation.
Figure 6. The rendering of design concept A1.
Figure 7. Kendall’s correlation heatmap.
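Figure 7’s heatmap holds pairwise Kendall rank correlations between experts. As a minimal illustration (the two expert rankings below are hypothetical, not the study’s data), Kendall’s τ without tie correction can be computed in pure Python:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau-a (no tie correction) between two rankings of the
    same items; rank_a[i] and rank_b[i] are the ranks of item i."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical ranks assigned by two experts to concepts A1..A5;
# expert2 swaps one adjacent pair relative to expert1.
expert1 = [1, 2, 3, 4, 5]
expert2 = [1, 3, 2, 4, 5]
print(kendall_tau(expert1, expert2))  # 0.8
```

Computing this value for every expert pair yields the symmetric matrix that a heatmap such as Figure 7 visualizes.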
Figure 8. Initial generation of 3D models. Note: Numeric formatting follows the software default (Chinese interface), so thousands separators may be omitted (e.g., 10000 rather than 10,000).
Figure 9. Fine-tuning of the 3D model.
Figure 10. 3D model parameter configuration and export.
Figure 11. Mesh model of the design concept in Rhino. Note: Negative values are displayed using a standard hyphen (-) by the software interface (e.g., -1), rather than the typographic minus sign (−); this formatting does not affect interpretation.
Figure 12. Ranking results using different methods.
Table 1. Representative GenAI technologies for ID core stages.

Application Stage | Representative Technologies/Approaches | Advantages
Requirements confirmation | ChatGPT, Claude, LLaMA, Gemini, DeepSeek, Kimi, etc. | These technologies support evidence aggregation and requirement mining from heterogeneous sources (e.g., reviews, standards, patents), enabling faster requirement structuring and prioritization.
Concept generation | Stable Diffusion, DALL·E, Imagen, Midjourney, Nano Banana Pro, Pix2Pix, Sketch2Photo, Sketch2Prototype, etc. | These technologies enable rapid visualization and iteration of concept alternatives, expanding exploration breadth and accelerating early-stage refinement.
Concept evaluation | LLMs (data processing and standardization of evaluation criteria interpretation), multi-criteria evaluation, etc. | These approaches support concept screening and ranking by organizing large candidate sets, summarizing evaluation evidence, and enabling multi-criteria decision support (often within virtual/interactive settings).
3D modeling | Sketch2CAD, DreamLens, DreamFusion, Latent-NeRF, 3DFY Prompt, Magic3D, Shap-E, Rodin, Hunyuan3D, TripoSR, Masterpiece Studio, 3DFY, Sloyd, Meshy, Vectary, BlenderGPT addon, etc. | These technologies generate early 3D surrogates or assist CAD modeling from multimodal inputs, helping identify geometry/structure issues earlier and supporting subsequent engineering refinement.
Table 2. Stage-wise Human–GenAI Responsibilities and I/O Boundaries.

Stage | Human Decision-Makers | GenAI | Input | Output | Constraints Handling
Requirements confirmation | Design positioning; data quality control; prompt text refinement | Generate constraint-aware structured prompts | I1 + I2 | O1 | Inject symmetry/near-symmetry/intentional asymmetry requirements into O1
Concept generation | Review, patching, and regeneration | Generate candidate design concepts | I3 + I4 (optional) | O2 | If minor deviation from O1: patch and regenerate; if major deviation from O1: return to and revise O1
Concept evaluation | Define evaluation criteria and method; execute evaluation; finalize results | Consistency calibration for criterion definition and interpretation | I5 + I6 + I7 + I8 | O3 | Treat symmetry/near-symmetry/intentional asymmetry as items to be checked and validated within the criteria-based verification process
3D modeling | Compile issue list | Generate a 3D proxy model | I9 | O4 | Fix constraint-mismatch points in an explicit issue list to support subsequent targeted refinement
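Table 2’s I/O chain (I1 + I2 → O1 → O2 → O3 → O4) can be sketched as a traceable artifact record. The Python below is an illustrative data structure only (all class and field names are ours, not part of the framework’s specification); it shows how a constraint injected at O1 remains recoverable at the O4 issue list via parent links:

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    stage: str                                        # producing stage
    payload: str                                      # artifact content (prompt text, concept id, ...)
    constraints: list = field(default_factory=list)   # carried-over constraint tags
    parent: "Artifact | None" = None                  # upstream artifact (cross-stage traceability)

def trace(artifact: Artifact) -> list:
    """Walk back through parents to recover the full provenance chain."""
    chain = []
    node = artifact
    while node is not None:
        chain.append(node.stage)
        node = node.parent
    return chain[::-1]

# A symmetry constraint injected at O1 and preserved downstream.
o1 = Artifact("requirements", "structured prompt", ["near-symmetry of airbag layout"])
o2 = Artifact("concept generation", "candidate A1", o1.constraints, parent=o1)
o3 = Artifact("concept evaluation", "ranking output", o2.constraints, parent=o2)
o4 = Artifact("3D modeling", "issue list", o3.constraints, parent=o3)

print(trace(o4))
# ['requirements', 'concept generation', 'concept evaluation', '3D modeling']
```

In a real implementation each stage would copy and possibly annotate the constraint list rather than share it, so that constraint deviations detected at O2 or O4 can be attributed to a specific stage.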
Table 3. Evaluation dimensions and specifics of design concepts.

Evaluation Dimension | Specific Evaluation Elements
Design | Comprehensive evaluation of product design performance based on its form language, CMF (Color, Material, Finish) properties, and interface design, including visual balance and justified local asymmetry.
Technology | Comprehensive evaluation of product technical performance based on its structural characteristics, seam treatment, and material transmittance, considering structural regularity and part correspondence.
Society | Comprehensive evaluation of product social performance based on its human fit, application scenarios, and detailing, including bilateral usability/comfort and scenario-driven asymmetry when needed.
Economy | Comprehensive evaluation of product economic performance based on the complexity of its components and surface decoration, considering part reuse via symmetric modules and added complexity from asymmetric layouts.
Environment | Comprehensive evaluation of product environmental performance based on its material texture, grain structure, and connection method, considering material efficiency from regular/symmetric construction where applicable.
Ethics | Comprehensive evaluation of product ethical performance based on its adjustment range and sensory stimulation, ensuring symmetry cues are not misleading.
Table 4. Linguistic terms and corresponding IVIFNs.

Linguistic Terms (Abbreviation) | IVIFNs
Extremely high (EH) | ([0.90, 0.90], [0.10, 0.10])
Very high (VH) | ([0.75, 0.85], [0.05, 0.15])
High (H) | ([0.60, 0.75], [0.10, 0.20])
Medium high (MH) | ([0.45, 0.60], [0.15, 0.25])
Medium (M) | ([0.50, 0.50], [0.50, 0.50])
Medium low (ML) | ([0.35, 0.45], [0.40, 0.55])
Low (L) | ([0.25, 0.35], [0.50, 0.60])
Very low (VL) | ([0.15, 0.20], [0.60, 0.75])
Extremely low (EL) | ([0.10, 0.10], [0.90, 0.90])
Table 5. Design parameters document.

Design Dimension | Key Parameters | Technical Indicators | Additional Details
Infrastructure | Overall shape (unfolded) | 28 × 16 × 9.5 cm (L × W × H) | Streamlined profile, radius of curvature R ≥ 15 mm
Infrastructure | Folding pattern | ≤14 × 8 × 5 cm (parallel overlap) | Hidden joints, no exposed parts
Infrastructure | Total weight | ≤500 g (framework ≤ 300 g) | Skeleton percentage ≥ 30%
Materials and processes | Outer frame | Aluminum alloy (thickness ≤ 1.2 mm)/ABS, matte finish | Chamfer C0.5 mm, roughness Ra ≤ 0.8 μm
Materials and processes | Middle airbag | Compartmentalised TPU (individual inflation of 3 air chambers) | Air pump integrated on the side, ports hidden
Materials and processes | Inner liner | 3D gel (permeability ≥ 60%) + ice-cream fabric | Microporous density ≥ 120 holes/cm²
Materials and processes | Support point | Skin-friendly foam, contact surface ≥ 15 cm² | Curved wraps with curvature-matched mandible contact
Human–product interaction | Wearable system | Snap-button adjustment (32–42 cm), operation ≤ 10 s | Schematic of the “one-pull-one-buckle” dynamic
Human–product interaction | Inflation control | Push stroke ≤ 2 cm, 1 press = 5 kPa | Air pump bump height 3 mm
Human–product interaction | Emergency pressure relief | Zero air pressure 3 s after unplugging the pump | Separate nozzle structure
Performance parameters | Corrective function | Individual inflation of three air chambers (0–30 kPa) | Schematic diagram of C3–C7 segmental gradient support
Performance parameters | Durability | Folding ≥ 5000 times, airbag cycles ≥ 10,000 | Hinge-reinforced construction
Performance parameters | Environmental adaptation | Noise ≤ 30 dB, moisture absorption and quick drying ≥ 0.8 g/s | Visualisation of permeable layer profiles
Aesthetic requirements | Colour scheme | Luminous white (#F5F5F5) + light grey (#E0E0E0) | Two-tone fade, colour difference ΔE ≤ 1.5
Aesthetic requirements | Concealment | Wearing thickness ≤ 3.5 cm | Fits the natural curve of the neck (curvature 1:1.2)
Usage scenarios | Treatment mode | Gradient inflation with three chambers | Airbag expansion dynamics (0 → 30 kPa)
Usage scenarios | Daily mode | Physical limit at neutral position (0–15° adjustable) | Non-inflatable bracket structure
Table 6. Online user reviews (partial).

User | Product | User Review
User1 | Symmetry 18 00352 i001 (Oxy cervical spine massager) | This massager is really great. I originally bought it just to try it out, but it won me over immediately. As someone who looks down at a phone all day and works long overtime hours, my shoulders and neck were as stiff as stone; after 15 min with it, I felt completely revived. The force is strong enough without hurting, and with the heating function on, the warm, tingling relief is wonderful.
User2 | Symmetry 18 00352 i002 (Kangjia cervical spine massager) | Multi-contact design with red-light assistance; it massages the neck accurately and relaxes the muscles. It simply hangs on the neck, feels skin-friendly, and is very lightweight and easy to carry. There are many intensity levels to choose from, so the strength can be flexibly adjusted to different massage needs. The massage effect is very good for daily relaxation, and the overall experience is good. The neck-mounted design fits the neck, the multi-contact heads with red light deliver a strong massage, the level can be adjusted to one’s own tolerance, and my neck is much less stiff after use. Good for relieving fatigue at home or in the office.
User3 | Symmetry 18 00352 i003 (SKG shoulder and neck massager) | Overall okay; it should work well for neck problems, but I sit in front of a computer for long hours and feel my shoulders need massage more, and for the shoulders its effect is not very noticeable. Apply some water or gel before massaging, otherwise the stimulation feels weak; for good results the contacts need to sit properly. It should be more comfortable in winter, since it has heating.
User4 | Symmetry 18 00352 i004 (Oxy cervical spine massager) | Not bad; the three intensity levels are not very distinct, the strength is okay, and the strap at the back cannot be adjusted.
User5 | Symmetry 18 00352 i005 (Xiaomi Mijia cervical massager) | Bought two at once, one for the neck and one for the shoulders, taking advantage of the 618 sale. The force is definitely sufficient, and the heated shoulder kneading is very comfortable.
User6 | Symmetry 18 00352 i006 (SKG cervical spine massager G1) | The SKG G1’s massage force is comfortable, and the heat feels like a hot towel. Small and easy to use, a must-have for the office; bought during a promotion, so very cost-effective.
User7 | Symmetry 18 00352 i007 (USK cervical spine massager) | The massage is good and it was a very good deal; my fatigue has been greatly relieved. The mode and intensity can be adjusted to one’s own needs. After several days of use the experience is good, and it offers good value for money.
User8 | Symmetry 18 00352 i008 (Haier cervical spine massager) | After trying it for a few days, this massager is truly a must-have for office workers. It frees my hands so I can enjoy a massage while working. There are 5 modes to choose from; I like the vitality mode, massaging 10 min each time, and my neck stiffness has eased a lot. Colleagues who saw me using it all want to buy one.
Table 7. Selected product styles, structures, technical reference images, hand-drawn sketches, and 3D models.

Selected product styles, structures, technical reference images
Image 1 | Image 2 | Image 3 | Image 4 | Image 5 | Image 6 | Image 7 | Image 8 | Image 9 | Image 10
Symmetry 18 00352 i009 | Symmetry 18 00352 i010 | Symmetry 18 00352 i011 | Symmetry 18 00352 i012 | Symmetry 18 00352 i013 | Symmetry 18 00352 i014 | Symmetry 18 00352 i015 | Symmetry 18 00352 i016 | Symmetry 18 00352 i017 | Symmetry 18 00352 i018
Image 11 | Image 12 | Image 13 | Image 14 | Image 15 | Image 16 | Image 17 | Image 18 | Image 19 | Image 20
Symmetry 18 00352 i019 | Symmetry 18 00352 i020 | Symmetry 18 00352 i021 | Symmetry 18 00352 i022 | Symmetry 18 00352 i023 | Symmetry 18 00352 i024 | Symmetry 18 00352 i025 | Symmetry 18 00352 i026 | Symmetry 18 00352 i027 | Symmetry 18 00352 i028
Image 21 | Image 22 | Image 23 | Image 24 | Image 25 | Image 26 | Image 27 | Image 28 | Image 29 | Image 30
Symmetry 18 00352 i029 | Symmetry 18 00352 i030 | Symmetry 18 00352 i031 | Symmetry 18 00352 i032 | Symmetry 18 00352 i033 | Symmetry 18 00352 i034 | Symmetry 18 00352 i035 | Symmetry 18 00352 i036 | Symmetry 18 00352 i037 | Symmetry 18 00352 i038

Selected hand-drawn sketches
Sketch 1 | Sketch 2 | Sketch 3 | Sketch 4 | Sketch 5 | Sketch 6 | Sketch 7 | Sketch 8 | Sketch 9 | Sketch 10
Symmetry 18 00352 i039 | Symmetry 18 00352 i040 | Symmetry 18 00352 i041 | Symmetry 18 00352 i042 | Symmetry 18 00352 i043 | Symmetry 18 00352 i044 | Symmetry 18 00352 i045 | Symmetry 18 00352 i046 | Symmetry 18 00352 i047 | Symmetry 18 00352 i048

Selected 3D models
Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7
Symmetry 18 00352 i049 | Symmetry 18 00352 i050 | Symmetry 18 00352 i051 | Symmetry 18 00352 i052 | Symmetry 18 00352 i053 | Symmetry 18 00352 i054 | Symmetry 18 00352 i055
Table 8. Design standards (partial).

Standard No. | Standard Title | Reference Information
YY/T 0726-2020 [77] | Instruments for use in association with non-active surgical implants—General requirements | General requirements for non-active medical devices regarding design attributes and material selection.
GB/T 16886.11-2011 [78] | Biological evaluation of medical devices—Part 11: Tests for systemic toxicity | Requirements for medical devices in selecting skin-contact materials.
T/QGCML 1538-2023 [79] | Airbag cervical traction fixator | Classification of inflatable cervical spine products.
YY/T 0697-2016 [80] | Electric cervical and lumbar traction therapy device | Specific requirements for electric cervical/lumbar traction therapy equipment.
DB50/T 1427-2023 [81] | Standard for the Provision of Assistive Product in Health Care and Medical Institution | Configuration principles for rehabilitation assistive devices (safety, suitability, and effectiveness).
Table 9. Industry norms and patents (partial).

Patent No. | Patent Title | Reference Information
CN202510483167.6 [82] | Bone conduction cervical massage U-shaped pillow based on audio vibration analysis | Overview and technical specifications of the cervical massage U-shaped pillow.
CN202310933532.X [83] | An adjustable cervical massager | Overview and mechanical structure of the adjustable cervical massager.
CN202310724004.3 [84] | A cervical massager and a posture correction method based on the same | Overview of the cervical massager.
CN202010568014.9 [85] | A novel cervical massager | Main components and ergonomic design of a cervical massager with guasha function.
Table 10. Ten candidate design concepts after the preliminary selection.

A1 | A2 | A3 | A4 | A5 [images omitted]
A6 | A7 | A8 | A9 | A10 [images omitted]
Table 11. Evaluation data for L1.

Alternative | C1 | C2 | C3 | C4 | C5 | C6
A1 | ([0.50, 0.50], [0.50, 0.50]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.35, 0.45], [0.40, 0.55])
A2 | ([0.50, 0.50], [0.50, 0.50]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.45, 0.60], [0.15, 0.25]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.35, 0.45], [0.40, 0.55])
A3 | ([0.50, 0.50], [0.50, 0.50]) | ([0.45, 0.60], [0.15, 0.25]) | ([0.45, 0.60], [0.15, 0.25]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.35, 0.45], [0.40, 0.55])
A4 | ([0.45, 0.60], [0.15, 0.25]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.50, 0.50], [0.50, 0.50])
A5 | ([0.60, 0.75], [0.10, 0.20]) | ([0.75, 0.85], [0.05, 0.15]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.50, 0.50], [0.50, 0.50])
A6 | ([0.45, 0.60], [0.15, 0.25]) | ([0.75, 0.85], [0.05, 0.15]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.50, 0.50], [0.50, 0.50])
A7 | ([0.50, 0.50], [0.50, 0.50]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.50, 0.50], [0.50, 0.50]) | ([0.25, 0.35], [0.50, 0.60]) | ([0.45, 0.60], [0.15, 0.25])
A8 | ([0.45, 0.60], [0.15, 0.25]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.60, 0.75], [0.10, 0.20]) | ([0.35, 0.45], [0.40, 0.55]) | ([0.25, 0.35], [0.50, 0.60]) | ([0.45, 0.60], [0.15, 0.25])
A9 | ([0.75, 0.85], [0.05, 0.15]) | ([0.75, 0.85], [0.05, 0.15]) | ([0.90, 0.90], [0.10, 0.10]) | ([0.25, 0.35], [0.50, 0.60]) | ([0.15, 0.20], [0.60, 0.75]) | ([0.75, 0.85], [0.05, 0.15])
A10 | ([0.75, 0.85], [0.05, 0.15]) | ([0.90, 0.90], [0.10, 0.10]) | ([0.75, 0.85], [0.05, 0.15]) | ([0.15, 0.20], [0.60, 0.75]) | ([0.10, 0.10], [0.90, 0.90]) | ([0.90, 0.90], [0.10, 0.10])
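Each cell in Table 11 is an interval-valued intuitionistic fuzzy number (IVIFN) of the form ([μ⁻, μ⁺], [ν⁻, ν⁺]); by the standard IVIFN definition the membership and non-membership intervals must lie in [0, 1], be ordered, and satisfy μ⁺ + ν⁺ ≤ 1. The following is a minimal validity check of that constraint over a few cells from the table; the linguistic labels in the comments are illustrative assumptions, not the paper's scale:

```python
def is_valid_ivifn(mu_l, mu_u, nu_l, nu_u):
    """Standard IVIFN constraints: intervals in [0, 1], ordered, mu+ + nu+ <= 1."""
    in_unit = all(0.0 <= x <= 1.0 for x in (mu_l, mu_u, nu_l, nu_u))
    return in_unit and mu_l <= mu_u and nu_l <= nu_u and mu_u + nu_u <= 1.0

# A few cells from Table 11 (L1's judgments); labels are illustrative only.
cells = [
    (0.50, 0.50, 0.50, 0.50),  # fully hesitant / neutral judgment
    (0.60, 0.75, 0.10, 0.20),  # a favorable judgment
    (0.90, 0.90, 0.10, 0.10),  # a strongly favorable judgment (0.90 + 0.10 = 1)
]
print(all(is_valid_ivifn(*c) for c in cells))  # -> True
```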
Table 12. Initial group decision matrix.

Alternative | C1 | C2 | C3 | C4 | C5 | C6
A1 | ([0.477, 0.543], [0.365, 0.400]) | ([0.491, 0.642], [0.135, 0.235]) | ([0.495, 0.509], [0.473, 0.479]) | ([0.449, 0.484], [0.472, 0.516]) | ([0.384, 0.449], [0.467, 0.539]) | ([0.405, 0.459], [0.472, 0.532])
A2 | ([0.469, 0.558], [0.313, 0.362]) | ([0.556, 0.689], [0.161, 0.247]) | ([0.464, 0.568], [0.275, 0.336]) | ([0.350, 0.450], [0.400, 0.550]) | ([0.390, 0.464], [0.432, 0.536]) | ([0.426, 0.477], [0.457, 0.523])
A3 | ([0.472, 0.553], [0.331, 0.375]) | ([0.477, 0.627], [0.140, 0.240]) | ([0.479, 0.538], [0.382, 0.412]) | ([0.369, 0.457], [0.416, 0.543]) | ([0.466, 0.490], [0.481, 0.510]) | ([0.391, 0.454], [0.462, 0.536])
A4 | ([0.520, 0.645], [0.195, 0.273]) | ([0.566, 0.717], [0.110, 0.210]) | ([0.484, 0.528], [0.414, 0.435]) | ([0.377, 0.453], [0.442, 0.541]) | ([0.411, 0.472], [0.447, 0.528]) | ([0.490, 0.519], [0.444, 0.458])
A5 | ([0.610, 0.752], [0.095, 0.195]) | ([0.725, 0.834], [0.058, 0.158]) | ([0.649, 0.784], [0.083, 0.183]) | ([0.449, 0.484], [0.472, 0.516]) | ([0.434, 0.479], [0.462, 0.521]) | ([0.464, 0.568], [0.275, 0.400])
A6 | ([0.559, 0.703], [0.111, 0.211]) | ([0.759, 0.846], [0.070, 0.146]) | ([0.634, 0.774], [0.088, 0.188]) | ([0.449, 0.484], [0.472, 0.516]) | ([0.369, 0.457], [0.416, 0.543]) | ([0.484, 0.528], [0.414, 0.435])
A7 | ([0.474, 0.548], [0.348, 0.388]) | ([0.491, 0.642], [0.135, 0.235]) | ([0.505, 0.656], [0.130, 0.230]) | ([0.342, 0.411], [0.500, 0.558]) | ([0.312, 0.403], [0.467, 0.569]) | ([0.477, 0.627], [0.140, 0.240])
A8 | ([0.473, 0.592], [0.231, 0.304]) | ([0.627, 0.769], [0.090, 0.190]) | ([0.566, 0.717], [0.110, 0.210]) | ([0.311, 0.412], [0.437, 0.568]) | ([0.316, 0.417], [0.432, 0.566]) | ([0.484, 0.610], [0.206, 0.285])
A9 | ([0.778, 0.860], [0.060, 0.140]) | ([0.733, 0.839], [0.055, 0.155]) | ([0.829, 0.877], [0.078, 0.123]) | ([0.205, 0.279], [0.539, 0.672]) | ([0.158, 0.199], [0.698, 0.771]) | ([0.750, 0.850], [0.050, 0.150])
A10 | ([0.771, 0.857], [0.058, 0.143]) | ([0.792, 0.865], [0.065, 0.135]) | ([0.785, 0.862], [0.063, 0.138]) | ([0.150, 0.200], [0.600, 0.750]) | ([0.113, 0.123], [0.848, 0.868]) | ([0.844, 0.882], [0.083, 0.118])
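The group matrix in Table 12 is obtained by aggregating the five experts' individual matrices (e.g., Table 11 for L1). As a hedged sketch only, the standard interval-valued intuitionistic fuzzy weighted averaging (IVIFWA) operator is shown below with equal expert weights assumed; the study's actual expert weights and operator variant may differ:

```python
import math

def ivifwa(cells, weights=None):
    """Aggregate IVIFNs (mu-, mu+, nu-, nu+) with the standard IVIFWA operator.

    Equal weights are an assumption here, not the paper's reported weights.
    """
    k = len(cells)
    weights = weights or [1.0 / k] * k
    mu_l = 1 - math.prod((1 - c[0]) ** w for c, w in zip(cells, weights))
    mu_u = 1 - math.prod((1 - c[1]) ** w for c, w in zip(cells, weights))
    nu_l = math.prod(c[2] ** w for c, w in zip(cells, weights))
    nu_u = math.prod(c[3] ** w for c, w in zip(cells, weights))
    return (mu_l, mu_u, nu_l, nu_u)

# Idempotence check: aggregating five identical judgments returns the same IVIFN.
same = [(0.60, 0.75, 0.10, 0.20)] * 5
print(tuple(round(x, 6) for x in ivifwa(same)))  # -> (0.6, 0.75, 0.1, 0.2)
```

Because the operator is idempotent and monotone, cells on which all experts agree pass through unchanged, while disagreement widens the spread between membership and non-membership bounds.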
Table 13. Candidate design concept ranking.

Alternative | D_i^+ | D_i^- | C_i | Ranking
A1 | 0.244 | 0.099 | 0.289 | 10
A2 | 0.217 | 0.129 | 0.373 | 8
A3 | 0.232 | 0.112 | 0.326 | 9
A4 | 0.200 | 0.142 | 0.415 | 7
A5 | 0.111 | 0.240 | 0.684 | 2
A6 | 0.130 | 0.224 | 0.633 | 4
A7 | 0.189 | 0.172 | 0.476 | 6
A8 | 0.160 | 0.199 | 0.554 | 5
A9 | 0.092 | 0.249 | 0.730 | 1
A10 | 0.111 | 0.229 | 0.674 | 3
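The closeness coefficients in Table 13 follow the standard TOPSIS relation C_i = D_i^- / (D_i^+ + D_i^-). Recomputing them from the listed separation distances reproduces the reported values and ranking; this is a quick arithmetic check, not the full weighted-distance computation:

```python
# Recompute TOPSIS closeness coefficients from the distances in Table 13.
d_plus  = [0.244, 0.217, 0.232, 0.200, 0.111, 0.130, 0.189, 0.160, 0.092, 0.111]
d_minus = [0.099, 0.129, 0.112, 0.142, 0.240, 0.224, 0.172, 0.199, 0.249, 0.229]

closeness = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
print([round(c, 3) for c in closeness])
# -> [0.289, 0.373, 0.326, 0.415, 0.684, 0.633, 0.476, 0.554, 0.73, 0.674]

# Order alternatives by descending closeness: A9 first, A1 last, matching Table 13.
ranking = sorted(range(1, 11), key=lambda i: -closeness[i - 1])
print(ranking)  # -> [9, 5, 10, 6, 8, 7, 4, 2, 3, 1]
```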
Table 14. Experts' rankings of the candidate design concepts.

Expert | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10
L1 | 10 | 8 | 9 | 7 | 3 | 5 | 6 | 4 | 1 | 2
L2 | 10 | 9 | 8 | 5 | 2 | 3 | 7 | 6 | 1 | 4
L3 | 9 | 7 | 10 | 8 | 1 | 2 | 6 | 5 | 3 | 4
L4 | 9 | 7 | 10 | 8 | 3 | 6 | 5 | 4 | 2 | 1
L5 | 9 | 10 | 8 | 7 | 4 | 5 | 6 | 3 | 2 | 1
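The Top-5 overlap of 0.80–1.00 reported in the abstract can be reproduced directly from Tables 13 and 14: for each expert, take the five concepts they ranked highest and measure the fraction shared with the group's Top-5 from Table 13 (A9, A5, A10, A6, A8). A minimal check:

```python
# Group Top-5 per Table 13: A9 > A5 > A10 > A6 > A8.
group_top5 = {"A9", "A5", "A10", "A6", "A8"}

# Each expert's rank for A1..A10, transcribed from Table 14.
expert_ranks = {
    "L1": [10, 8, 9, 7, 3, 5, 6, 4, 1, 2],
    "L2": [10, 9, 8, 5, 2, 3, 7, 6, 1, 4],
    "L3": [9, 7, 10, 8, 1, 2, 6, 5, 3, 4],
    "L4": [9, 7, 10, 8, 3, 6, 5, 4, 2, 1],
    "L5": [9, 10, 8, 7, 4, 5, 6, 3, 2, 1],
}

overlaps = {}
for expert, ranks in expert_ranks.items():
    top5 = {f"A{i + 1}" for i, r in enumerate(ranks) if r <= 5}
    overlaps[expert] = len(top5 & group_top5) / 5
print(overlaps)
# -> {'L1': 1.0, 'L2': 0.8, 'L3': 1.0, 'L4': 0.8, 'L5': 1.0}
```

The resulting per-expert overlaps span exactly the 0.80–1.00 range stated in the abstract.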
Table 15. Engineering semantics issue list.

Problem Type | Specific Manifestation | Adjustment Strategy
Missing engineering semantics | Monolithic mesh; no part boundaries/parting lines/gaps | Decompose into parts; define parting lines and typical assembly gaps
Missing engineering semantics | Zero wall thickness; non-solid shell | Apply a nominal wall thickness and close shells into watertight solids
Missing engineering semantics | No internal supports (ribs/bosses) | Add ribs/bosses at key load/assembly points
Unstructured interfaces and movable components | Strap interface blended; no locating features | Model explicit strap interfaces with locating geometry
Unstructured interfaces and movable components | Buttons/knobs fused to housing | Separate as parts; introduce small functional clearances
Functional openings reduced to surface effects | Vents shown as indent/relief, not through-holes | Replace relief with true perforations (pattern + cut-through)
Texture/branding not embodied in geometry | Grip/logo exists only as texture mapping | Convert textures/logos into physical relief where needed
Limited ergonomic grounding | Contact surfaces visually plausible but not data-driven | Fit key contact surfaces to anthropometric references
Geometric consistency and surface quality | Left–right asymmetry in nominally symmetric parts | Mirror and enforce symmetry during edits
Geometric consistency and surface quality | Wrinkles/poor highlight flow; low surface fairness | Rebuild/fair surfaces and improve continuity (as needed)
Table 16. GID-HGCC framework vs. traditional ID framework.

Dimension | Traditional ID Framework | GID-HGCC Framework
Requirements confirmation | Research evidence is summarized into a requirements document written primarily in natural language, with key constraints not explicitly expressed. | LLMs transform manually reviewed multi-source data into structured semantic text of requirements under constraints.
Concept generation | Candidate concepts are derived from sketches. | Concept renderings are generated from structured semantic text.
Concept evaluation | Expert multi-criteria evaluation is conducted on sketches. | Expert multi-criteria evaluation is conducted on concept renderings.
3D modeling | Engineering semantic modeling and refinement are performed, with rework frequently occurring in later stages. | The best concept rendering is converted into a 3D proxy model, and a list of engineering semantic issues is compiled.
Share and Cite

MDPI and ACS Style

Chen, C.; Cheng, F.; Zhang, B.; Jin, R.; Dong, C.; Sun, Z.; Zhou, Y. A Generative AI-Driven Industrial Design Framework for Human–GenAI Co-Creation. Symmetry 2026, 18, 352. https://doi.org/10.3390/sym18020352
