Bridging Cognitive and Expression Spaces in Creative AI by Integrating DIKWP-TRIZ and Semantic Mathematics
Abstract
1. Introduction
- Integration of DIKWP-TRIZ with Semantic Mathematics: We formulate a unified framework where TRIZ’s inventive problem-solving principles are applied within and between DIKWP elements, under formal semantic constraints. This bridges cognitive modeling and expression generation, enabling AI to handle complex tasks with both creativity and semantic rigor.
- Modeling 3-No Problems in Input/Output: We formally characterize how incompleteness, inconsistency, and imprecision manifest in LLM inputs and outputs, and extend the DIKWP-TRIZ model to address each issue. By mapping the 3-No problems onto DIKWP’s Cognitive, Semantic, and Expression spaces, the framework offers robust strategies for resolving ambiguity and contradictions in queries and responses.
- DIKWP × DIKWP Cognitive Transformations: We define the 25 possible transformations, one for each ordered pair of DIKWP elements, each guided by TRIZ principles. This complete enumeration over L × L provides a structural index of DIKWP-typed transformation categories; for each category, we list candidate TRIZ principles as design-time guidance.
- Semantic Axioms for Consistency and Completeness: We introduce formal axioms within the semantic mathematics framework. We derive how these axioms enforce expression-space consistency and completeness in the LLM’s output.
- Purpose-Driven Inference and Ethical Alignment: By including the “Purpose” dimension from DIKWP, the framework is designed to steer AI reasoning with goal-oriented and ethical considerations. We propose an architecture where the Purpose component evaluates and guides inference steps, ensuring the AI’s solutions are not only technically sound but also aligned with intended goals and human values.
2. Related Work
2.1. TRIZ and Its Limitations in Cognitive Problems
2.2. Cognitive Modeling in AI and DIKWP Advancements
3. Theoretical Framework for Integrating DIKWP-TRIZ with Semantic Mathematics
3.1. DIKWP × DIKWP Transformation Flow Mechanisms for Creative AI
- (M1) Define the transformation intent for each cell T_{X→Y}, i.e., what must change and what must be preserved (e.g., D → I emphasizes cleaning/segmentation; P → I emphasizes translating goals into information requirements).
- (M2) Abstract each TRIZ principle into one or more operator tags (e.g., segmentation, extraction, feedback, parameter adaptation, and hybridization). This abstraction is standard when transferring TRIZ from physical engineering to software/IT contexts.
- (M3) Assign a TRIZ principle to a cell T_{X→Y} if its operator tag(s) can plausibly realize the transformation intent in the adopted representation. Multiple principles may apply; the set is intentionally permissive.
- (M4) Record the assignment as an edge in the trace graph when the principle is actually invoked at run time, enabling later auditing and empirical ablation once a prototype exists.
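The M1–M4 procedure can be sketched as a tag-based lookup from transformation cells to candidate TRIZ principles. The operator-tag vocabulary, the two example cell intents, and the trace-record format below are illustrative assumptions, not the paper's full catalog:

```python
# Sketch of M1-M4: tag-based assignment of TRIZ principles to DIKWP
# transformation cells. Tags and intents are illustrative assumptions.

# M2: abstract selected TRIZ principles into operator tags.
PRINCIPLE_TAGS = {
    1:  {"segmentation"},          # Segmentation
    2:  {"extraction"},            # Extraction (Taking out)
    23: {"feedback"},              # Feedback
    35: {"parameter_adaptation"},  # Transformation of Properties
    40: {"hybridization"},         # Composite Materials
}

# M1: transformation intents for two example cells, expressed as the
# operator tags that could realize them (hypothetical).
CELL_INTENTS = {
    ("D", "I"): {"segmentation", "extraction"},             # cleaning/segmentation
    ("P", "I"): {"parameter_adaptation", "hybridization"},  # goals -> info needs
}

def candidate_principles(cell):
    """M3: principles whose operator tags overlap the cell's intent tags."""
    intent = CELL_INTENTS.get(cell, set())
    return sorted(p for p, tags in PRINCIPLE_TAGS.items() if tags & intent)

def record_invocation(trace, cell, principle, context):
    """M4: append an edge to the trace graph when a principle is invoked."""
    trace.append({"cell": cell, "principle": principle, "context": context})
    return trace
```

The permissive M3 step deliberately over-generates candidates; the trace edges recorded in M4 are what later empirical ablation would prune.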
3.2. Integrating Semantic Mathematics into DIKWP-TRIZ
- Cognitive Space: Contains the agent’s internal mental representations (perceptions, thoughts, and intermediate reasoning states). DIKWP elements in this space correspond to the agent’s internal state (e.g., raw sensory data or personal goals).
- Semantic Space: Contains abstracted, language-agnostic meanings of concepts and their relations. DIKWP elements here are represented in a formal semantic model (e.g., ontologies or logical formulas capturing the content of data, information, knowledge, etc.).
- Expression (Conceptual) Space: Interfaces with language and symbols. DIKWP elements in this space correspond to external language and symbol expressions (words, sentences, symbols) that convey the content to the outside world.
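One DIKWP element carried across the three spaces can be sketched as a single record with one representation per space; the field names and the medical example are assumptions for exposition:

```python
from dataclasses import dataclass

# Illustrative sketch: one DIKWP element viewed in the Cognitive,
# Semantic, and Expression spaces. Field contents are assumptions.
@dataclass
class TriSpaceItem:
    cognitive: dict    # internal state, e.g. a raw sensory reading
    semantic: dict     # language-agnostic meaning, e.g. an ontology-style record
    expression: str    # external wording that conveys the content

item = TriSpaceItem(
    cognitive={"sensor": "thermometer", "raw": 38.5},
    semantic={"concept": "BodyTemperature", "value_c": 38.5,
              "relation": "above_normal"},
    expression="The patient's temperature is 38.5 °C, which is above normal.",
)
```

The point of the separation is that transformations and consistency checks operate on the semantic field, while only the expression field is ever shown to the user.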
3.3. Purpose-Driven Inference Mechanism
4. Formalizing 3-No Problems in LLM Inputs and Outputs
4.1. Definition and Formalization of 3-No Problems
- For incompleteness: It checks if all required DIKWP elements to answer the query are present. If certain data or information is missing, a placeholder or query is created. For example, if the question is “How do I improve system performance?” and it lacks context (what system? what metrics?), the system notes incomplete Data and Information.
- For inconsistency: It checks the input for logical contradictions or mutual exclusivity. This uses a Knowledge base of facts; any input statements that conflict (or conflict with known facts) are marked, e.g., if the user says “I need a vegan recipe with chicken,” the system flags a contradiction between vegan and chicken.
- For imprecision: It analyzes linguistic ambiguity (multiple interpretations) by mapping terms in the query to the Semantic Space. If a term has several possible concepts (e.g., “bank” could mean river bank or financial bank), or the request is too broad, the system flags them accordingly.
- Missing data triggers a Data → Information search, perhaps querying a knowledge base or asking the user for clarification (if interactive). This is analogous to how a person would ask follow-up questions. In our framework, an Information innovation step might be to automatically gather context (for example, retrieving relevant background info from stored data [69]).
- Contradictions in input are handled by Wisdom → Knowledge transformation: The system uses higher-order wisdom (e.g., a rule “vegan means no animal products”) to adjust or interpret the query in a consistent way (maybe by assuming the user’s priority or reinterpreting the request as a hypothetical). Alternatively, the system may split the problem into sub-problems to handle each scenario consistently, using TRIZ’s separation principles creatively to accommodate the contradiction.
- Ambiguity in input is addressed by Purpose-driven disambiguation: The system infers the likely user intent (Purpose) from whatever hints are available (user profile, conversation history, etc.) and uses that to choose an interpretation. For instance, if a user asks about “bank account” in a financial advice context, Purpose (seeking financial advice) guides the conceptual disambiguation of “bank” to the financial institution meaning. If uncertainty remains, the system can generate an Information → Data question to the user for clarification (in effect, a follow-up question) [70].
- As the answer is being composed (in the Expression Space), the system monitors completeness: Have all parts of the question been addressed by some Data and Information? The DIKWP model can enforce a kind of coverage integrity, where the Cognitive Space coverage is analyzed to ensure no aspect is overlooked. If an expected component is missing, the generation is not finalized until a required DIKWP transformation step (e.g., Data → Knowledge) fills the gap.
- Consistency of the output is ensured by a final Knowledge-dimension validation. Before rendering the answer into text, the system runs a semantic consistency check: all statements intended for output are checked against each other and against a fact database. Any inconsistency triggers a correction cycle (perhaps Knowledge → Data: verifying facts, or Knowledge → Knowledge: removing contradictory statements). This is akin to a proof checker or a truth-verification pass on the draft answer.
- Precision of the output is improved by tailoring the expression in the Expression Space using Purpose and Wisdom. The Purpose dimension will strip extraneous or generic content that does not serve the user’s goal (preventing vague rambling). Wisdom will ensure the phrasing is context-appropriate and unambiguous.
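The three detection checks above can be sketched as simple predicates over a parsed query. The required-slot lists, the mutual-exclusion rules, and the ambiguity lexicon below are toy assumptions standing in for the Knowledge base and Semantic Space:

```python
# Toy sketch of the three "3-No" detectors: incompleteness,
# inconsistency, and imprecision. All lexicons are assumptions.

REQUIRED_SLOTS = {"improve_performance": {"system", "metric"}}
MUTUALLY_EXCLUSIVE = [({"vegan"}, {"chicken", "beef", "pork"})]
AMBIGUOUS = {"bank": ["river bank", "financial institution"]}

def detect_incompleteness(intent, slots):
    """Missing Data/Information: required slots not supplied by the query."""
    return REQUIRED_SLOTS.get(intent, set()) - set(slots)

def detect_inconsistency(terms):
    """Contradiction: term pairs that violate a mutual-exclusion rule."""
    terms = set(terms)
    return [(a & terms, b & terms) for a, b in MUTUALLY_EXCLUSIVE
            if a & terms and b & terms]

def detect_imprecision(terms):
    """Ambiguity: terms mapping to more than one candidate concept."""
    return {t: AMBIGUOUS[t] for t in terms if t in AMBIGUOUS}
```

Each non-empty result would trigger the corresponding repair path from the text: a Data → Information search, a Wisdom → Knowledge reinterpretation, or Purpose-driven disambiguation.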
4.2. Classification of 3-No Problem Scenarios
5. Semantic Axioms and Cognitive Transformations
5.1. Axioms for Semantic Consistency and Completeness
5.1.1. Definitions and Notation
5.1.2. The Existence Axiom
5.1.3. The Uniqueness Axiom
5.1.4. The Transitivity Axiom
| Algorithm 1: DIKWP–TRIZ reasoning loop with semantic verification |
6: best ← None
7: for a in A do
8:   S′ ← Apply(S, a)
9:   ok ← Verify(S′, C(ω) ∪ C(a))
10:  if not ok then continue
11:  score ← Score(S′, P)
12:  best ← Max(best, (a, S′, score))
13: end for
14: S ← best
15: Record(G, best)
16: if Satisfied(S, P) then break
17: end for
18: return Express(S)
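A minimal executable rendering of Algorithm 1's candidate-selection loop, with stub functions standing in for the semantic verifier, the Purpose-driven evaluator, and the transformation operators (all stubs are assumptions; the toy instantiation just increments toward a bounded target):

```python
# Minimal executable sketch of Algorithm 1's selection loop.
# `verify`, `score`, `apply_action`, and `goal` are placeholder stubs
# for the semantic verifier, Purpose evaluator, DIKWP operator, and
# termination test, respectively.

def reasoning_loop(state, actions, verify, score, apply_action, goal,
                   max_steps=10):
    trace = []
    for _ in range(max_steps):
        best = None                      # best <- None
        for a in actions:                # for a in A do
            candidate = apply_action(state, a)
            if not verify(candidate):    # if not ok then continue
                continue
            s = score(candidate)
            if best is None or s > best[0]:
                best = (s, a, candidate)
        if best is None:
            break                        # no admissible action remains
        state = best[2]                  # commit the best verified state
        trace.append(best[1])            # record the step in the trace
        if goal(state):                  # Purpose-level termination test
            break
    return state, trace

# Toy instantiation: increment toward a target without exceeding a bound.
final, trace = reasoning_loop(
    state=0,
    actions=[1, 2, 3],
    verify=lambda s: s <= 7,
    score=lambda s: s,
    apply_action=lambda s, a: s + a,
    goal=lambda s: s >= 7,
)
```

The key structural property carried over from the algorithm is that only candidates passing verification are ever scored or committed, so the committed state sequence is constraint-satisfying by construction.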
5.2. Application to DIKWP-TRIZ Transformations
- When performing a DIKWP transformation (say Information → Knowledge), the Existence axiom demands that any new piece of knowledge derived must be expressible in the Semantic and Expression Space. This means our transformation algorithms cannot introduce truly ineffable concepts—if they derive something, it must be backed by data or information that can be pointed to. For instance, if the system “has a hunch” (Wisdom) that a certain solution might work, Existence forces it to either articulate that hunch (perhaps as a hypothesis) or seek data to support it, rather than silently use it. This ties into explainability: each step’s result exists in a shareable form. Conversely, if some input information has no impact on the solution, the system should explicitly recognize that (so it does not violate Existence by leaving an input unaccounted for). The result is that every input and every intermediate is tracked and can be output if needed (no hidden state that is not representable).
- The Uniqueness axiom heavily influences the knowledge representation in the DIKWP model. For each transformation, especially those that merge or abstract ideas (like Data → Information or Knowledge → Wisdom), the system must check that it is not accidentally duplicating concepts. For example, during Information → Knowledge, if two pieces of information imply the same knowledge, the framework should combine them into one knowledge concept rather than keep two separate parallel concepts (which could later diverge and cause inconsistency). Uniqueness also requires that if a transformation creates a concept that already exists, the two must be unified. In TRIZ terms, this is akin to trimming redundancies: TRIZ itself aims at reducing overlapping functions, which aligns with our axiom. On the expression side, Uniqueness ensures that when generating the final answer, a concept arrived at via two different paths is described once. Practically, this eliminates contradictory answers such as an LLM first saying "Solution is X" and later saying "Solution might be not X" because it picked different words for the same concept; our system would recognize that both refer to one concept X and reconcile them.
- The Transitivity axiom ensures that cognitive chains are coherent. In DIKWP-TRIZ, one might go through a chain like Data → Information → Knowledge → Wisdom. Transitivity implies that the essential logical thread is preserved: if the original data indicated a certain outcome, the wisdom-dimension conclusion should reflect that unless intentionally overridden. If a transformation overrides something (say Wisdom rejects a Knowledge piece for ethical reasons), the system notes a break in transitivity (with justification, like “this path pruned due to ethical conflict”). Most of the time, though, transformations build on each other. Transitivity allows the framework to do multi-hop reasoning reliably: consider a case where, to solve a problem, the AI does Data → Information (find relevant facts), Information → Knowledge (derive a principle), and Knowledge → Information (apply the principle to get a specific insight). Thanks to transitivity, it should reach the same specific insight if it had, for example, directly looked up data or gone another route. This property reduces order-dependence—the solution does not arbitrarily differ if the sequence of transformations changes, as long as, logically, it covers the same ground. This is important for a robust AI; it means the reasoning graph is somewhat redundant or cross-checked.
- It will refuse to accept a new piece of knowledge that it cannot attach to some expression (flagging a potential violation of Existence).
- It will run a unification algorithm to enforce Uniqueness whenever the Knowledge graph is updated (merging nodes that are essentially the same concept, preventing duplicates).
- It will apply forward-chaining and backward-chaining inferences to respect Transitivity (similar to how a Prolog engine or rule-based system would derive all consequences, to avoid missing a logical link).
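The Uniqueness-enforcing unification step can be sketched as a canonical-signature merge over a small knowledge store. The signature function (lower-cased, sorted content terms) and the toy knowledge base are illustrative assumptions:

```python
# Sketch of the Uniqueness-enforcing unification pass: nodes with the
# same canonical signature are merged into one concept, pooling their
# provenance. The signature scheme is an illustrative assumption.

def canonical_signature(node):
    """Canonical form: the sorted, lower-cased bag of content terms."""
    return tuple(sorted(w.lower() for w in node["content"].split()))

def unify(nodes):
    merged = {}
    for node in nodes:
        sig = canonical_signature(node)
        if sig in merged:
            # Merge provenance instead of keeping a duplicate concept.
            merged[sig]["provenance"] |= set(node["provenance"])
        else:
            merged[sig] = {"content": node["content"],
                           "provenance": set(node["provenance"])}
    return list(merged.values())

kb = [
    {"content": "caffeine raises alertness", "provenance": ["doc1"]},
    {"content": "Caffeine raises alertness", "provenance": ["doc2"]},  # duplicate
    {"content": "sleep restores alertness", "provenance": ["doc3"]},
]
```

A production system would use a semantically richer equivalence relation (e.g., ontology identifiers) rather than surface-form signatures, but the merge-on-update discipline is the same.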
5.3. Sketch Proofs and Invariant Preservation
5.3.1. Minimal Contracts (Implementation Assumptions Checked by the Verifier)
- (C1) Total interpretation of in-scope expressions: for every in-scope expression, a semantic interpretation is defined (possibly mapping to an explicit "unknown/placeholder" concept with provenance).
- (C2) Total lexicalization of active concepts: for every active concept, a lexicalization is defined (using controlled fallback labels when needed).
- (C3) Contextual disambiguation and canonical naming: under a fixed context ω, interpretation is single-valued after disambiguation, and canonical naming is injective over active concept identifiers (using explicit sense tags if necessary).
- (C4) Canonicalization and unification: canonicalization yields a stable normal form for the chosen representation, and a unification step merges duplicated concepts that are equivalent under the adopted equivalence relation (e.g., same id or same canonical signature).
- (C5) Operator admissibility and closure: each executed operator is well-typed and returns a well-formed output. For any invoked two-step chain of operators, either (i) a corresponding direct operator is available, or (ii) the system defines a direct transformation instance as the canonicalized composition, ensuring Equations (8) and (9) for that instance.
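A minimal checker for contracts C1–C3 over toy interpretation and lexicalization maps can make the verifier's job concrete; the map contents are illustrative stand-ins for the semantic model:

```python
# Toy verifier for contracts C1-C3: total interpretation, total
# lexicalization, and injective canonical naming. The example maps
# below are assumptions, not the paper's semantic model.

def check_contracts(expressions, concepts, interpret, lexicalize):
    violations = []
    # C1: every in-scope expression has an interpretation.
    for e in expressions:
        if e not in interpret:
            violations.append(("C1", e))
    # C2: every active concept has a lexicalization.
    for c in concepts:
        if c not in lexicalize:
            violations.append(("C2", c))
    # C3: canonical naming is injective over active concepts.
    names = [lexicalize[c] for c in concepts if c in lexicalize]
    if len(names) != len(set(names)):
        violations.append(("C3", "duplicate canonical name"))
    return violations

interpret = {"bank": "FinancialInstitution"}
lexicalize = {"FinancialInstitution": "bank (finance)",
              "RiverBank": "bank (river)"}
```

Note how C3 is satisfied here only because the lexicalizer uses explicit sense tags, exactly the escape hatch the contract allows.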
5.3.2. Lemmas (Axioms Hold for Any Verified State)
5.3.3. Proposition (Invariant Preservation in Algorithm 1)
6. Reference Architecture and Implementation Blueprint for a DIKWP-TRIZ and Semantic Mathematics Enhanced LLM
6.1. Reference Architecture Overview
6.1.1. Intermediate Representations and Data Structures
- Context ω: A context record capturing domain, dialog state, time, and user constraints that index disambiguation and validity checks.
- DIKWP state: A typed frame in which each of the five elements (D, I, K, W, P) holds a set (or list) of items with provenance.
- Item schema: Each item is represented as a tuple (id, content, element, confidence, provenance, constraints), where provenance links to source evidence or prior transformations.
- Trace graph: A directed graph storing transformation applications as edges annotated with (transform type, TRIZ principle id, context, timestamp). This supports auditability and explainability.
- Constraint set: A set of context-indexed consistency constraints derived from Section 5 (Existence/Expressibility, Contextual Uniqueness, and Transitivity (Compositional Trace Consistency)) that are checkable on the DIKWP state and the trace graph.
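The intermediate representations above can be sketched as typed records. The field names follow the item schema in the text; the example contents and the edge layout are assumptions:

```python
from dataclasses import dataclass, field

# Sketch of the Section 6.1.1 data structures. Field names follow the
# stated item schema; example values are assumptions.

@dataclass
class Item:
    id: str
    content: str
    element: str            # one of "D", "I", "K", "W", "P"
    confidence: float
    provenance: list        # links to source evidence or prior transforms
    constraints: list = field(default_factory=list)

@dataclass
class TraceEdge:
    transform: tuple        # e.g. ("I", "K") for Information -> Knowledge
    triz_principle: int     # TRIZ principle id, e.g. 1 = Segmentation
    context: dict           # context record omega
    timestamp: float

# DIKWP state: a typed frame, one item list per element.
state = {e: [] for e in ("D", "I", "K", "W", "P")}
state["D"].append(
    Item("d1", "user reports latency spike", "D", 0.9, ["user_ticket"]))
edge = TraceEdge(("D", "I"), 1, {"domain": "ops"}, 0.0)
```

Keeping provenance on every item is what makes the trace graph checkable against the Section 5 constraint set.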
6.1.2. Module Interfaces
6.1.3. Reference Pipeline Algorithm
- A Planner (controlled by Purpose) that decides which transformation (expert) to invoke next.
- Multiple Expert Modules (one per DIKWP transform or group of transforms) that carry out specific cognitive tasks and report back results and confidence.
- A Global Knowledge Base (including semantic rules, ontology, and facts) accessible to all modules for consistency checks.
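The Planner/Experts arrangement can be sketched as a dispatch loop in which the planner (driven by Purpose) picks the next DIKWP transform and the matching expert module executes it. The expert registry, the fixed transform order, and the string-valued state are toy assumptions:

```python
# Sketch of the Planner / Expert-Modules arrangement. The two experts
# and the planner's fixed candidate order are toy assumptions.
# Requires Python 3.9+ (dict union operator).

EXPERTS = {
    ("D", "I"): lambda s: s | {"I": "structured facts from " + s["D"]},
    ("I", "K"): lambda s: s | {"K": "rule generalized from " + s["I"]},
}

def planner(state, purpose):
    """Pick the next transform: fill the first element the purpose still needs."""
    for cell in [("D", "I"), ("I", "K")]:
        if cell[1] in purpose["needs"] and cell[1] not in state:
            return cell
    return None

def run(state, purpose):
    while (cell := planner(state, purpose)) is not None:
        state = EXPERTS[cell](state)   # invoke the matching expert module
    return state

out = run({"D": "raw logs"}, {"needs": {"I", "K"}})
```

In the full architecture each expert would also report a confidence and consult the Global Knowledge Base for consistency checks before its result is committed.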
6.2. Illustrative Comparison and Evaluation Protocol
6.2.1. Knowledge Representation
- Standard LLM: Knowledge is implicit and distributed across model parameters, without an explicit, inspectable knowledge base [69]. As a result, the model may rely on pattern completion when facing missing or uncertain facts, which can lead to factual inaccuracies or unsupported assertions [5]. The absence of transparent provenance also makes it difficult to verify where a claim originates or to systematically correct errors.
- DIKWP–TRIZ and Semantic Mathematics Enhanced LLM: Knowledge is represented explicitly in a multi-element form aligned with DIKWP semantics, and can be linked to structured repositories such as knowledge graphs, ontologies, or curated databases. Factual content can be retrieved on demand and validated against semantic constraints. This separation between stored knowledge and language generation improves traceability and enables systematic verification, thereby reducing unsupported generations and improving factual robustness.
6.2.2. Reasoning Process
- Standard LLM: Reasoning is typically realized as an end-to-end generation process that maps an input prompt directly to an output response. Intermediate reasoning states are not inherently structured or externally verifiable. For multi-step tasks, errors introduced early may propagate into the final answer, and the lack of explicit checkpoints makes it difficult to detect contradictions or missing steps before the response is produced [74,75].
- DIKWP-TRIZ- and Semantic-Mathematics-Enhanced LLM: Reasoning is structured and iterative, progressing through DIKWP stages with explicit intermediate artifacts (e.g., extracted data items, inferred relations, generalized knowledge statements, value-aware judgments, and purpose-aligned decisions). A semantic mathematics component performs constraint-based validation (e.g., consistency checking and compatibility with predefined axioms) at each stage. When contradictions or trade-offs are encountered, TRIZ principles provide transformation operators that support creative reframing and resolution (e.g., decomposing conflicts, separating constraints, or restructuring the problem). This staged workflow functions as a semantic supervision loop, reducing error accumulation and improving coherence in complex reasoning tasks.
6.2.3. Context and Purpose
- Standard LLM: The model primarily responds to the immediate prompt and local context, with no explicit internal representation of a persistent long-term objective. In extended interactions, this can result in topic drift, inconsistent decisions, or weakened adherence to constraints [76], especially when the user’s intent must be maintained across multiple turns [77].
- DIKWP-TRIZ and Semantic-Mathematics-Enhanced LLM: The Purpose dimension provides a persistent objective that guides the selection of relevant information, prioritizes memory and context, and filters candidate reasoning paths. Purpose-driven control enables planning-like behavior: intermediate conclusions can be evaluated against the goal and constraints, and revised when misalignment is detected. This mechanism improves long-horizon coherence in multi-turn dialog by reducing drift and enforcing goal-consistent reasoning across turns.
6.2.4. Handling “3-No” Inputs
- Standard LLM: A standard LLM typically lacks explicit mechanisms for detecting and resolving incomplete, inconsistent, or imprecise inputs (the “3-No” issues) [70]. When a user prompt omits key details, contains internal contradictions, or is underspecified, the model often relies on implicit priors learned during training to fill gaps. In practice, it may gloss over ambiguity or introduce unstated assumptions without explicitly signaling uncertainty [78]. This behavior can yield vague or internally conflicting outputs, particularly when the prompt includes hidden inconsistencies that remain unrecognized [79]. As a result, ambiguity may be propagated rather than resolved, increasing the likelihood of logical errors or conflicts with earlier dialog turns.
- DIKWP–TRIZ and Semantic-Mathematics-Enhanced LLM: The proposed approach explicitly manages “3-No” issues through structured reasoning and constraint-aware verification. At the Data → Information stage, the system can identify missing entities, underspecified constraints, or unclear references and respond by initiating targeted clarification, retrieving supporting facts, or constructing multiple plausible interpretations under explicit constraints. During Information → Knowledge conversion, the system performs consistency checks to detect conflicts among facts, assumptions, and constraints; when contradictions emerge, it can invoke TRIZ-guided transformations to restructure the problem or separate competing requirements. Imprecision is further addressed at the Wisdom and Purpose dimensions by refining intent, prioritizing constraints, and enforcing goal-consistent interpretations. Throughout this pipeline, semantic mathematics provides constraint-based validation to reduce uncontrolled assumption propagation. Consequently, the system either resolves uncertainty via structured transformations or makes uncertainty explicit, improving robustness under underspecified or contradictory queries.
6.2.5. Alignment and Values Integration
- Standard LLM: In many standard settings, alignment with human values and safety requirements is introduced primarily through post hoc techniques such as preference-based fine-tuning and output filtering [80,81]. While these methods can reduce undesirable responses, ethical and value-related constraints are not necessarily represented as first-class elements within the model’s internal reasoning process. As a result, responses may be technically plausible yet contextually inappropriate or insufficiently sensitive to user intent and constraints, especially when the prompt does not explicitly specify value-based requirements.
- DIKWP–TRIZ and Semantic-Mathematics-Enhanced LLM: The proposed model integrates value-sensitive reasoning within its cognitive pipeline. The Wisdom dimension explicitly evaluates candidate conclusions and actions under normative constraints (e.g., safety, responsibility, and fairness) alongside task objectives. Instead of applying alignment only as a final filter, the model checks whether intermediate outcomes are not only correct with respect to knowledge, but also appropriate under value constraints and purpose requirements. Semantic mathematics supports this process by formalizing policy-like constraints as verifiable conditions, while TRIZ-guided transformations help search for alternatives that satisfy both technical goals and normative boundaries. This design encourages responses that are goal-aligned, context-sensitive, and responsibly constrained by construction.
6.2.6. Explainability and Transparency
- Standard LLM: Standard LLMs provide limited transparency because the internal decision process is distributed across high-dimensional parameters and does not naturally yield an inspectable reasoning trace [82]. When asked to justify an answer, the model may generate a plausible post hoc explanation, but it is not guaranteed to correspond to verifiable intermediate reasoning states [83]. This opacity complicates debugging, auditing, and error attribution, particularly in settings that require traceability and governance [84].
- DIKWP-TRIZ and Semantic-Mathematics-Enhanced LLM: The proposed system is designed to improve explainability by producing explicit intermediate artifacts at each DIKWP element. For example, it can expose: (i) Data elements (entities, constraints, and evidence candidates), (ii) Information (structured interpretations and relations), (iii) Knowledge (generalizations, rules, or causal links), (iv) Wisdom (value-aware assessments and trade-offs), and (v) Purpose (goal definitions and acceptance criteria). These intermediate results form an auditable reasoning trace that can be inspected to determine where errors or misalignments originate. In addition, semantic mathematics enables explicit constraint checks whose outcomes can be logged, supporting systematic verification and targeted correction at the stage where a violation is detected. This structured trace improves interpretability and facilitates debugging, governance, and trust calibration.
- Factual Reliability: Explicit knowledge representations and semantic validation are intended to help reduce unsupported claims and to improve verifiability, compared with purely implicit parameter-based recall.
- Deep Multi-Step Reasoning: Structured reasoning with intermediate checks is intended to help limit error accumulation and to improve coherence on multi-step tasks.
- Context and Goal Persistence: Purpose-driven control improves long-horizon consistency in multi-turn interactions by prioritizing goal-relevant information and filtering candidate reasoning paths.
- Robustness to Ambiguity (“3-No”): Structured detection and resolution of incompleteness, inconsistency, and imprecision reduces uncontrolled assumption propagation and encourages explicit clarification when needed.
- Value-Aware Reasoning: Integrating value constraints within the Wisdom and Purpose stages enables responsible decision-making during inference rather than relying solely on post hoc filtering.
- Transparency: DIKWP element intermediate artifacts provide an auditable reasoning trace that supports inspection, debugging, and governance.
7. Discussion
- Human-Like Creativity and Innovation.
- Cognitive Completeness and Explainability.
- Purposeful and Ethical AI Behavior.
- Limitations and Future Work.
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
| Principle No. | TRIZ Inventive Principle |
|---|---|
| 1 | Segmentation |
| 2 | Extraction (Taking out) |
| 3 | Local Quality |
| 4 | Asymmetry |
| 5 | Consolidation (Merging) |
| 6 | Universality |
| 7 | Nesting (Matrioshka) |
| 8 | Counterweight (Anti-weight) |
| 9 | Prior Counteraction (Preliminary anti-action) |
| 10 | Prior Action (Preliminary action) |
| 11 | Cushion in Advance (Beforehand cushioning) |
| 12 | Equipotentiality |
| 13 | Do It in Reverse (The other way round) |
| 14 | Spheroidality (Curvature) |
| 15 | Dynamicity (Dynamics) |
| 16 | Partial or Excessive Action |
| 17 | Transition Into a New Dimension (Another dimension) |
| 18 | Mechanical Vibration |
| 19 | Periodic Action |
| 20 | Continuity of Useful Action |
| 21 | Rushing Through (Skipping) |
| 22 | Convert Harm Into Benefit (Blessing in disguise) |
| 23 | Feedback |
| 24 | Mediator (Intermediary) |
| 25 | Self-service |
| 26 | Copying |
| 27 | Dispose (Cheap short-lived objects) |
| 28 | Replacement of Mechanical System (Mechanics substitution) |
| 29 | Pneumatic or Hydraulic Constructions (Pneumatics and hydraulics) |
| 30 | Flexible Membranes or Thin Films (Flexible shells and thin films) |
| 31 | Porous Material (Porous materials) |
| 32 | Changing the Color (Color changes) |
| 33 | Homogeneity |
| 34 | Rejecting and Regenerating Parts (Discarding and recovering) |
| 35 | Transformation of Properties (Parameter changes) |
| 36 | Phase Transition (Phase transitions) |
| 37 | Thermal Expansion |
| 38 | Accelerated Oxidation (Strong oxidants) |
| 39 | Inert Environment (Inert atmosphere) |
| 40 | Composite Materials |
| TRIZ Principle (No., Name) | Operator-Level Interpretation in DIKWP/LLM Setting | Example Instantiation (Cell-Level) |
|---|---|---|
| 40, Composite Materials | Hybridization/composition: Combine heterogeneous resources (multi-source evidence, multi-view models, symbolic + neural components) into a single representation. | D → W: fuse heterogeneous data streams into a higher-level insight; K → W: merge multiple knowledge fragments into a trade-off judgment. |
| 37, Thermal Expansion | Constraint-range expansion/contraction: Adapt the feasible region or search space by relaxing/strengthening soft constraints while preserving hard constraints. | P → P: rescale goals/constraints under changing context; P → I: expand information requirements when purpose broadens. |
| 38, Accelerated Oxidation | Accelerated deprecation: Intentionally phase out obsolete assumptions/sub-goals when new evidence arrives; increase the update rate of a stale state. | P → P: retire outdated sub-goals; K → K: rapidly revise a rule base when contradictions are detected. |
| 15, Dynamicity | Adaptive structure: Allow representations and rules to change with context; support dynamic reconfiguration instead of fixed pipelines. | K → W: adapt value-aware judgment as constraints change; I → K: update generalizations when new cases appear. |
| 6, Universality | General-purpose representation: Translate a goal into a reusable information requirement or shared intermediate schema. | P → I: derive a unified information schema from a purpose specification to avoid ad hoc interpretation. |
| 1, Segmentation | Decomposition: Break a problem/state into separable subparts for localized repair, retrieval, or reasoning. | D → D/I → I: segment inputs; K → K: split conflicting rules into cases; P → P: decompose a global objective into sub-goals. |
References
- Peykani, P.; Ramezanlou, F.; Tanasescu, C.; Ghanidel, S. Large language models: A structured taxonomy and review of challenges, limitations, solutions, and future directions. Appl. Sci. 2025, 15, 8103. [Google Scholar] [CrossRef]
- Shen, S.; Logeswaran, L.; Lee, M.; Lee, H.; Poria, S.; Mihalcea, R. Understanding the capabilities and limitations of large language models for cultural commonsense. arXiv 2024, arXiv:2405.04655. [Google Scholar] [CrossRef]
- Suzuki, Y.; Banaei-Kashani, F. Universe of Thoughts: Enabling Creative Reasoning with Large Language Models. arXiv 2025, arXiv:2511.20471. [Google Scholar] [CrossRef]
- Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA, 2021; pp. 610–623. [Google Scholar]
- Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; Ishii, E.; Bang, Y.J.; Madotto, A.; Fung, P. Survey of hallucination in natural language generation. ACM Comput. Surv. 2023, 55, 1–38. [Google Scholar] [CrossRef]
- Maatouk, A.; Piovesan, N.; Ayed, F.; De Domenico, A.; Debbah, M. Large language models for telecom: Forthcoming impact on the industry. IEEE Commun. Mag. 2024, 63, 62–68. [Google Scholar] [CrossRef]
- Shanahan, M. Talking about large language models. Commun. ACM 2024, 67, 68–79. [Google Scholar] [CrossRef]
- Urlana, A.; Kumar, C.V.; Singh, A.K.; Garlapati, B.M.; Chalamala, S.R.; Mishra, R. LLMs with Industrial Lens: Deciphering the Challenges and Prospects--A Survey. arXiv 2024, arXiv:2402.14558. [Google Scholar]


| Aspect | Traditional TRIZ | DIKWP-TRIZ |
|---|---|---|
| Framework | Hierarchical; focuses on technical and physical contradictions | Networked DIKWP structure; interactions spanning all five elements, Data through Purpose |
| Problem Scope | Clear, well-defined technical problems (complete data) | Uncertain cognitive problems (incomplete and inconsistent data) |
| Innovation Method | 40 fixed principles applied deterministically | 25 DIKWP × DIKWP transformations; adaptive and context-driven |
| Handling Uncertainty | Assumes data is complete and consistent | Explicitly addresses “3-No” problems via semantic transformations and cross-dimension fixes |
| Outcome Alignment | No built-in ethical or purpose check | Integrates Wisdom and Purpose for ethical, goal-aligned solutions |
| Application Domain | Engineering design, manufacturing, etc. | AI reasoning, LLM outputs, decision-making with semantics |

Candidate TRIZ inventive principles (by number) for each DIKWP × DIKWP transformation:

| From\To | D | I | K | W | P |
|---|---|---|---|---|---|
| D | 1, 2, 5, 10, 12, 18, 26, 30 | 3, 5, 9, 17, 28, 35 | 6, 24 | 40 | 4, 11, 15, 29 |
| I | 10, 22 | 13, 17 | 15, 24 | 23, 32 | 16, 32 |
| K | 8, 9, 25, 27 | 3, 13 | 22, 34 | 15, 40 | 25, 31, 35 |
| W | 6, 24, 25, 35 | 16, 22 | 3, 23 | 15, 34 | 10, 15, 20, 33, 39 |
| P | 10, 19, 21, 23 | 6 | 2, 15 | 36, 40 | 1, 14, 37, 38 |
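As a design-time index, the matrix above can be encoded directly as a lookup table. The following sketch is illustrative only; the dictionary and function names are ours, not part of the DIKWP-TRIZ framework itself.

```python
# Hypothetical encoding of the DIKWP x DIKWP matrix above as a lookup table.
# Keys are (from_element, to_element) pairs over D, I, K, W, P; values are the
# candidate TRIZ principle numbers listed in the corresponding table cell.
TRIZ_CANDIDATES = {
    ("D", "D"): [1, 2, 5, 10, 12, 18, 26, 30],
    ("D", "I"): [3, 5, 9, 17, 28, 35],
    ("D", "K"): [6, 24],
    ("D", "W"): [40],
    ("D", "P"): [4, 11, 15, 29],
    ("I", "D"): [10, 22],
    ("I", "I"): [13, 17],
    ("I", "K"): [15, 24],
    ("I", "W"): [23, 32],
    ("I", "P"): [16, 32],
    ("K", "D"): [8, 9, 25, 27],
    ("K", "I"): [3, 13],
    ("K", "K"): [22, 34],
    ("K", "W"): [15, 40],
    ("K", "P"): [25, 31, 35],
    ("W", "D"): [6, 24, 25, 35],
    ("W", "I"): [16, 22],
    ("W", "K"): [3, 23],
    ("W", "W"): [15, 34],
    ("W", "P"): [10, 15, 20, 33, 39],
    ("P", "D"): [10, 19, 21, 23],
    ("P", "I"): [6],
    ("P", "K"): [2, 15],
    ("P", "W"): [36, 40],
    ("P", "P"): [1, 14, 37, 38],
}

def candidate_principles(src: str, dst: str) -> list:
    """Return the candidate TRIZ principles for one DIKWP-typed transformation."""
    return TRIZ_CANDIDATES[(src, dst)]
```

The 25 keys enumerate the complete L × L space, so a designer can query, say, `candidate_principles("D", "W")` to retrieve the principles suggested for a Data-to-Wisdom transformation.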

| 3-No | Input Challenge | Output Challenge | DIKWP-TRIZ Strategy |
|---|---|---|---|
| Incompleteness | Missing or insufficient data and context in user queries | Partial or under-specified answers (gaps in reasoning) | Cross-dimension enrichment: Use Data → Information conversion to supplement missing details; integrate external knowledge bases to fill gaps. The system detects missing pieces and iteratively gathers or infers data until a complete answer can be formed. |
| Inconsistency | Conflicting or contradictory statements in input | Self-contradictory or factually incorrect responses | Consistency checks: Cross-validate facts and claims via Knowledge and Wisdom dimensions; apply Wisdom to resolve contradictions and enforce coherence. The framework uses semantic logic to detect internal conflicts, prompting transformations (e.g., revise Knowledge using Data evidence) to eliminate inconsistencies. |
| Imprecision | Vague or ambiguous problem descriptions | Overly general or ambiguous answers that lack clarity | Semantic clarification: Refine and disambiguate concepts via Semantic Space transformations; use Purpose to focus on the specific user intent. The model rephrases or asks clarifying questions (using DIKWP interactions) to narrow down meaning and produce a precise, context-appropriate response. |
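The routing implied by the table above can be sketched as a simple dispatch: detect which 3-No issue is present, then invoke the corresponding strategy. This is a minimal sketch under our own assumptions; the handler names and their placeholder behavior are hypothetical, not an implementation from the paper.

```python
# Hypothetical dispatch of a detected 3-No issue to the strategy named in the
# table. The three handlers are placeholders standing in for the real
# cross-dimension enrichment, consistency-check, and clarification pipelines.
def enrich(query):
    # Incompleteness: Data -> Information conversion, fill gaps from knowledge bases
    return f"enriched({query})"

def cross_validate(query):
    # Inconsistency: Knowledge/Wisdom consistency checks, contradiction resolution
    return f"validated({query})"

def clarify_intent(query):
    # Imprecision: Purpose-guided semantic clarification and disambiguation
    return f"clarified({query})"

STRATEGY = {
    "incompleteness": enrich,
    "inconsistency": cross_validate,
    "imprecision": clarify_intent,
}

def resolve(issue: str, query: str) -> str:
    """Apply the DIKWP-TRIZ strategy matching the detected 3-No issue."""
    return STRATEGY[issue](query)
```

For example, `resolve("imprecision", q)` routes a vague query into the clarification path rather than the enrichment path.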

| Input/Output | Incomplete Output | Inconsistent Output | Imprecise Output |
|---|---|---|---|
| Incomplete Input | Type 1: Gaps propagate → missing steps/details; D → I → K enrichment (retrieve/ask). | Type 2: Gap-filling assumptions → contradictions; W-check + clarify/uncertainty (I → D). | Type 3: Under-specified → generic answer; P-guided scoping (P → I) + concretize. |
| Inconsistent Input | Type 4: Unresolved conflict → partial coverage; K → D evidence triage + consistent subset. | Type 5: Contradictions preserved/echoed; I → K consolidation + reconcile (separate/merge). | Type 6: Conflict → hedging/vagueness; W/P arbitration + case split. |
| Imprecise Input | Type 7: Ambiguous intent → off-target/partial; P-guided disambiguation (ask or cover branches). | Type 8: Mixed senses → internal inconsistency; I → D concretize with examples + branch separately. | Type 9: Vagueness mirrored; K → W refinement + canonical terms. |
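The 3 × 3 typology above assigns each (input issue, output issue) pair a type number from 1 to 9, reading row by row. A small helper makes that indexing explicit; the function name and issue labels are ours, chosen for illustration.

```python
# Hypothetical index into the 3 x 3 failure typology above: rows are input
# issues, columns are output issues, and types are numbered 1-9 row by row.
INPUT_ISSUES = ["incomplete", "inconsistent", "imprecise"]
OUTPUT_ISSUES = ["incomplete", "inconsistent", "imprecise"]

def failure_type(input_issue: str, output_issue: str) -> int:
    """Map an (input issue, output issue) pair to Types 1-9 from the table."""
    row = INPUT_ISSUES.index(input_issue)
    col = OUTPUT_ISSUES.index(output_issue)
    return row * 3 + col + 1
```

For instance, an inconsistent input producing an imprecise output falls in row 2, column 3, i.e., Type 6 (conflict leading to hedging or vagueness).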
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Guo, Z.; Duan, Y. Bridging Cognitive and Expression Spaces in Creative AI by Integrating DIKWP-TRIZ and Semantic Mathematics. Electronics 2026, 15, 963. https://doi.org/10.3390/electronics15050963
