DIKWP Semantic Judicial Reasoning: A Framework for Semantic Justice in AI and Law
Abstract
1. Introduction
2. Background and Related Work
2.1. Traditional Conceptual Representations in Legal AI
2.2. DIKWP Model: Semantic Mathematics and AI Applications
3. Methodology
3.1. Formal Definitions
- G_D = (V_D, E_D) is a data graph with nodes V_D representing atomic data elements from the case (e.g., a raw fact, a numeric measurement, a date, a text snippet of a law) and edges E_D representing relationships or identity links among data (e.g., linking an evidentiary item to a source or linking duplicate data points).
- G_I = (V_I, E_I) is an information graph with nodes V_I representing information units (each encoding a meaningful distinction or comparison) and edges E_I capturing relations like “difference”, “similarity”, or contextual connections between information units. Each information node is typically derived from one or more data nodes in G_D. We establish a surjective mapping m_DI: V_D → V_I indicating which information nodes arise from which data nodes (for example, a data node “BAC = 0.10” might map to an information node “BAC exceeds legal limit”).
- G_K = (V_K, E_K) is a knowledge graph where V_K includes nodes representing legal concepts or rules (e.g., a specific regulation, a legal term like “LicenseRevocationCriteria”) and E_K includes edges representing logical or ontological relations (such as “is_a”, “has_element”, “leads_to”). Knowledge nodes can also represent instantiated propositions like “Violation X is present” or “Condition Y is met in this case,” which are derived by applying general knowledge to specific information nodes. We define a mapping m_IK: V_I → V_K to indicate which knowledge nodes are activated or informed by which information nodes. For instance, the information “BAC exceeds limit” might map to a knowledge node representing the condition of a drunk-driving law.
- G_W = (V_W, E_W) is a wisdom graph with nodes V_W denoting higher-level constructs such as principles, heuristics, or experiential knowledge. Edges E_W represent influence or dependency relations among these principles. In a legal setting, nodes might include “PublicSafety” (as a principle), “Deterrence”, “Proportionality”, or “PastCasePatternX”. These often do not directly connect to data, but we define a mapping m_KW: V_K → V_W where certain knowledge nodes (like a rule) are linked to wisdom nodes that justify or contextualize them (like the principle behind the rule, or an area of discretion).
- G_P = (V_P, E_P) is a purpose graph with nodes V_P representing goals or intents. In the content context, purpose nodes could represent the objectives of laws or the overarching goal of the proceeding (e.g., “EnsureFoodSafety”, “ResolveDisputeFairly”). Edges E_P capture hierarchies or associations among purposes (for example, “EnsureFoodSafety” is part of the broader “ProtectPublicHealth”). A mapping m_WP: V_W → V_P connects wisdom to purpose, indicating which purposes are served by which principles.
- G'_D = (V'_D, E'_D), where V'_D includes data elements as perceived or provided by the stakeholder. This can include personal data (e.g., “I filed the application on Jan 1”, “I have 5 years of compliance history”) or evidence from their perspective (sometimes overlapping with content data, sometimes additional data only they know).
- G'_I = (V'_I, E'_I), with information nodes capturing the stakeholder’s interpretation or emphasis on differences. For example, the stakeholder might highlight “the difference between my case and typical cases” as an information node. The mapping m'_DI ties their data to information.
- G'_K = (V'_K, E'_K), where V'_K includes the stakeholder’s knowledge and beliefs. This may involve their understanding of the law (which could be correct or mistaken), their knowledge of facts, or even normative beliefs (like “I did nothing wrong” or “the agency must consider X by law”). It can also include knowledge of past experiences (“last time a similar situation happened, only a warning was issued”). This layer is essentially a cognitive model of the stakeholder’s reasoning. The mapping m'_IK maps their information to their knowledge.
- G'_W = (V'_W, E'_W), containing the stakeholder’s principles or values. For an individual, this might include notions of fairness, economic necessity (e.g., “if I lose my license, I lose my livelihood”), or moral considerations. For an agency stakeholder, wisdom might include internal policies or enforcement philosophies (“we prioritize safety over cost”). The mapping m'_KW maps knowledge to wisdom for the stakeholder (e.g., the stakeholder knows a regulation exists but wisdom might say “that regulation is outdated and usually leniently enforced” as a principle they hold).
- G'_P = (V'_P, E'_P), the stakeholder’s goals and purposes. For the appellant, the purpose is likely “get my license back” or more generally “achieve a fair outcome” or “continue operations”. There can be sub-goals like clearing one’s reputation, minimizing financial loss, etc. For the agency, the purpose might be “enforce compliance to ensure health” or similar. The mapping m'_WP links their principles to their ultimate goals (for instance, a fairness principle in G'_W could link to the goal of a fair outcome in G'_P).
- Φ—a function (or procedure) that takes elements of the content DIKWP graph and finds semantically corresponding elements in the stakeholder’s DIKWP graph.
- Ψ—a function that maps elements from the stakeholder’s graph to corresponding elements or structures in the content graph.
- A piece of data in the content graph (like a violation record in G_D) might correspond to a data node in the stakeholder graph (the stakeholder’s acknowledgment of that violation, or perhaps their own evidence contradictory to it).
- An information node “violation is minor” in the content graph might correspond to “this violation is not a big deal” in the stakeholder’s information graph (essentially the same claim in different words).
- A knowledge node that is a legal rule in the content graph might correspond to a knowledge node in the stakeholder graph if the stakeholder is aware of that rule. If the stakeholder is not aware, there may be no corresponding node, which is an important case of mismatch.
- A wisdom node like “Enforcement should be proportional” in the content might correspond to a stakeholder’s wisdom node “I expect to be treated fairly and leniently for minor issues”. They are phrased differently but semantically related by the concept of proportional enforcement.
- A purpose node “Protect public health” in the content might correspond to the stakeholder’s purpose “Keep my restaurant open safely”—these can be aligned as not identical but as compatible purposes in an ideal resolution.
- Φ_D maps data to data (e.g., the agency’s recorded inspection date corresponds to the date the owner remembers the inspection).
- Φ_I maps information to information (e.g., “violation count = 3, which is high” might map to the stakeholder’s “I only had 3 minor issues, which I consider low”—here perhaps a conflict in interpretation that needs resolution).
- Φ_K for knowledge (mapping formal rules to the stakeholder’s understanding or lack thereof).
- Φ_W for wisdom (mapping principles).
- Φ_P for purposes/goals.
- A node v is aligned with a node v' if Φ_X(v) = v' for the appropriate layer X, and the semantic content (values, meaning) of those nodes is equivalent or compatible. For instance, if v is the numeric value 3 and v' is also the numeric 3 for violation count, they are aligned; if v is “severity = high” and v' is “severity = low”, they map but are not compatible—this indicates a conflict in evaluation.
- We call (v, v') a conflict pair if the two nodes refer to the same real-world aspect but have substantially different values or interpretations. “Substantially” here means that differences are large enough to significantly affect stakeholder decisions, interpretations, or outcomes, rather than minor variations. Conflict pairs can occur at the data level (factual dispute), information level (different contextual framing), knowledge level (disagreement on which rule applies or how), wisdom level (differing principles prioritized), or purpose level (goal misalignment). When two nodes from different layers use different types of values, such as numeric values at the data level versus symbolic labels (e.g., “low”, “high”) at the information or knowledge level, we first normalize these values into a standard semantic scale (e.g., numeric values to qualitative labels) to facilitate meaningful comparison. For example, the stakeholder might prioritize “economic survival” whereas the law’s purpose is “public safety”; if a decision can satisfy both, no conflict remains; if not, there is a conflict that must be adjudicated by priority or compromise.
- We will denote the set of all conflict pairs identified by the mapping by C. The goal of reasoning will often be to resolve or minimize C—ideally to the empty set for a fully agreeable outcome, but more realistically to explain why certain conflicts are resolved in favor of one side.
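To make the alignment and conflict definitions concrete, the following sketch (illustrative only: the node names, the Φ mapping, and the normalization threshold are assumptions, not part of the formal model) represents a layer as a dict of labeled node values and computes the conflict set C for that layer:

```python
# Illustrative conflict detection for one DIKWP layer.
# Values are normalized onto a shared qualitative scale before comparison.

def normalize(value):
    """Map raw values (numeric or symbolic) onto a common scale."""
    if isinstance(value, (int, float)):
        return "high" if value >= 3 else "low"  # assumed threshold
    return str(value).lower()

def conflict_set(content_nodes, stakeholder_nodes, phi):
    """Return the conflict pairs: mapped nodes whose normalized values disagree."""
    return {(c, s) for c, s in phi.items()
            if normalize(content_nodes[c]) != normalize(stakeholder_nodes[s])}

# Information-layer fragment of the running example (names assumed):
content_I = {"I4_severity": "Serious", "I3_count": 6}
stakeholder_I = {"I1p_severity": "Minor", "Ip_count": 6}
phi_I = {"I4_severity": "I1p_severity", "I3_count": "Ip_count"}

C = conflict_set(content_I, stakeholder_I, phi_I)
# The severity pair conflicts ("serious" vs "minor"); the counts align.
```

In this sketch an aligned pair (the counts) drops out of C, while the severity pair remains for adjudication, mirroring the distinction between aligned nodes and conflict pairs above.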
- In the knowledge graph, classical logical or ontological inference works (if all condition nodes are present, infer the conclusion node, or use graph search to find applicable laws). Additionally, abductive inference [39] (inference to the best explanation) is particularly valuable in legal contexts, where it helps identify the most plausible interpretations of incomplete or ambiguous factual scenarios.
- In the wisdom graph, analogical or heuristic inference might apply (if a principle is triggered, prefer certain interpretations).
- In the purpose graph, one might propagate a goal backward (means–end reasoning: to achieve purpose P, which W nodes and K nodes could be activated?).
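As a minimal illustration of knowledge-layer inference (rule and node names are hypothetical), classical forward chaining fires a rule whenever all of its condition nodes are active:

```python
# Minimal forward chaining over knowledge-graph rules.
# Each rule: (frozenset of condition nodes) -> conclusion node.

def forward_chain(active, rules):
    """Fire rules until no new node can be derived."""
    active = set(active)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= active and conclusion not in active:
                active.add(conclusion)
                changed = True
    return active

# Hypothetical encoding of a disjunctive rule ("serious OR repeated"):
rules = [
    (frozenset({"serious_violations_present"}), "rule_condition_met"),
    (frozenset({"repeated_violations_present"}), "rule_condition_met"),
    (frozenset({"rule_condition_met"}), "may_revoke_license"),
]

result = forward_chain({"repeated_violations_present"}, rules)
# "may_revoke_license" is derived from the repeated-violations branch alone.
```

Disjunction is encoded here as two rules sharing a conclusion, so either branch of the condition suffices to activate the outcome node.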
3.2. Transformation Pipeline Overview
- Content Ingestion and Graph Construction: Legal content (statutes, regulations, case facts, evidence, prior decisions) is ingested and segmented into DIKWP layers. This might involve NLP to extract data and information (e.g., named entity recognition to identify key facts, comparisons to highlight what is unusual in this case) and using legal knowledge bases to populate the knowledge graph (e.g., linking identified statutes to a network of legal concepts). The outcome is the content DIKWP graph suite (G_D, G_I, G_K, G_W, G_P). For example, in our case study scenario, this stage takes the administrative record (inspection reports, legal provisions cited for revocation, etc.) and produces data nodes for each relevant fact (dates, violation types, etc.), information nodes capturing salient comparisons (e.g., “3 violations within 6 months, which is above average”), knowledge nodes encoding the regulation (“if 3 serious violations then license revocation is authorized”), wisdom nodes (perhaps “public health risk is significant if >2 violations/year” as a principle, drawn from guidelines), and purpose nodes (“protect diners’ health”).
- Stakeholder Input and Graph Construction: In parallel, the stakeholder’s perspective (e.g., the restaurant owner’s) is captured. This could be through direct input (testimony, appeal letter, etc.) or via a cognitive user model. We construct the stakeholder’s DIKWP graphs (G'_D, G'_I, G'_K, G'_W, G'_P) from this, possibly using techniques similar to those used for the content graph (e.g., semantic extraction, node classification), but specifically tuned to handle subjective stakeholder inputs. For instance, the owner might provide data like “cleaning logs” or personal circumstances (“invested $50k in this business”), which become data nodes; information nodes might capture differences the owner emphasizes (“all violations were minor and quickly fixed”); knowledge nodes might include the owner’s references to rules (“the law says I should get a warning first”) or possibly misunderstandings; wisdom nodes could reflect their principles (“I always prioritize cleanliness” or “punishment should fit the harm”); and purpose nodes clearly include “Keep my license” and “Maintain livelihood” along with an implied shared purpose of public health (“I also want safe food because it’s my business reputation”).
- Initial Mapping (Alignment): We then perform an initial pass of Φ and Ψ to align the graphs. This involves matching identical or equivalent items, such as recorded violations or cited regulations. Specifically, we might apply string matching for direct textual correspondences (e.g., matching violation descriptions exactly), ontology alignment to handle controlled vocabularies or structured semantic equivalences, or vector embeddings to identify semantic similarities in more nuanced expressions (e.g., aligning stakeholder phrases like “minor violations” with content descriptions like “low severity”). When aligning nodes across different data types or abstraction levels—such as numeric scores at the data layer versus qualitative labels (“low”, “high”) at higher layers—we first standardize these different representations into a common semantic space, facilitating meaningful comparison and matching. The output of this stage is a set of tentative mappings Φ and Ψ, and conflicts are identified. For example, the stakeholder graph might lack the knowledge node stating “3 violations mandate revocation”, indicating a knowledge conflict, or the stakeholder’s information node “violations minor” directly conflicts with the content node “violations serious.”
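A minimal sketch of this tentative-matching step, assuming a toy synonym table in place of a real ontology or embedding model, and `difflib` string similarity in place of learned semantic similarity:

```python
from difflib import SequenceMatcher

# Toy synonym table standing in for ontology alignment / embeddings (assumed):
SYNONYMS = {"minor": "low_severity", "low severity": "low_severity",
            "serious": "high_severity", "high severity": "high_severity"}

def canonical(phrase):
    """Normalize a phrase into a shared semantic form before matching."""
    return SYNONYMS.get(phrase.lower(), phrase.lower())

def tentative_match(content_items, stakeholder_items, threshold=0.6):
    """Pair each content item with its most similar stakeholder item."""
    mapping = {}
    for c in content_items:
        best, best_score = None, 0.0
        for s in stakeholder_items:
            score = SequenceMatcher(None, canonical(c), canonical(s)).ratio()
            if score > best_score:
                best, best_score = s, score
        if best_score >= threshold:
            mapping[c] = best
    return mapping

m = tentative_match(["3 violations recorded", "violation severity: serious"],
                    ["I had 3 violations", "issues were minor"])
# Only the count statements pair up; the severity phrases need ontology-level
# resolution (e.g., mapping "serious"/"minor" through the synonym table).
```

Items that fall below the threshold remain unmapped, which is itself informative: an unmapped knowledge node on one side flags a potential knowledge-level mismatch.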
- Cognitive Alignment and Reasoning: This is where the actual semantic reasoning happens to reconcile the two perspectives. In Figure 1 (center), we illustrate this as transformations within cognitive, concept, and semantic spaces. In practical terms, the system (or judge) will carry out the following:
- Enter a cognitive space where the identified data and information are processed: this includes interpreting what each piece means, perhaps probabilistically confirming certain data as cognitive objects. Here any discrepancies in raw data might be resolved (e.g., if the owner contests a fact, the adjudicator decides which data to accept).
- Move to a concept space, where the relevant legal concepts and rules (knowledge layer) are organized. In this space, one figures out how the case fits into the legal framework: What rule applies? What definitions matter? This step involves mapping the specific scenario to the abstract rule structure. During this, the stakeholder’s knowledge is integrated: if the stakeholder raises an argument about a rule or brings up a different rule, this is considered in concept space. The mapping Ψ helps here: if the stakeholder’s knowledge node was not in the content, concept space reasoning might bring it in (e.g., “Stakeholder says rule Q should apply; is that relevant? let’s consider it”—possibly adding to the content knowledge graph if valid).
- Then enter the semantic space, where semantic networks (specifically, the interconnected DIKWP graphs that integrate semantic relationships across data, information, knowledge, wisdom, and purpose) are considered. These semantic networks extend traditional semantic networks or knowledge graphs by explicitly modeling higher-level cognitive dimensions such as values and purposes. Within semantic space, nuances of language or context are resolved. For instance, understanding that “minor violation” from the stakeholder and “Grade 3 violation” in the content refer to the same concept with different wording; such resolution is key to ensuring that there is no mere semantic misunderstanding. This might employ ontology mapping or definitions.
- These transformations between spaces are purpose-driven: at each stage, the system is guided by the ultimate purposes (both the law’s and the stakeholder’s) in selecting how to reconcile differences. Purpose acts like a heuristic or weighting factor: if the purpose is public safety, cognitive/conceptual ambiguities are resolved in favor of interpretations that favor safety unless that would unjustly hurt the stakeholder’s purpose without corresponding safety gain (in which case maybe the stakeholder’s purpose influences a different interpretation that still satisfies safety minimally). We formalize this via the purpose graph influencing the reasoning path, e.g., if multiple knowledge rules could apply, the one that better serves the purpose nodes is favored.
- As a result of this reasoning process, certain conflict pairs in C are resolved. For example, the conflict “serious vs minor” might be resolved by clarifying that the violations were of types that are serious under the code (so the stakeholder’s labeling of “minor” is incorrect), or perhaps by concluding that they were minor and the agency over-labeled them—depending on the evidence. “Resolved” means one side’s position is chosen but justified in terms of the semantic framework (with purpose often providing the justification).
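The purpose-driven weighting described above can be sketched as scoring candidate outcomes by how strongly they serve the weighted purpose nodes; all outcome names, weights, and support scores below are invented for illustration:

```python
# Purpose-weighted outcome selection (all names and numbers are illustrative).

PURPOSE_WEIGHTS = {"protect_public_health": 1.0,   # law's purpose, top priority
                   "keep_business_open": 0.6}      # stakeholder's purpose

# Degree (0..1, assumed) to which each candidate outcome serves each purpose:
OUTCOME_SUPPORT = {
    "revoke_license":      {"protect_public_health": 0.9, "keep_business_open": 0.0},
    "fine_plus_probation": {"protect_public_health": 0.8, "keep_business_open": 0.9},
    "warning_only":        {"protect_public_health": 0.4, "keep_business_open": 1.0},
}

def best_outcome(weights, support):
    """Pick the outcome maximizing weighted purpose satisfaction."""
    def score(outcome):
        return sum(weights[p] * v for p, v in support[outcome].items())
    return max(support, key=score)

choice = best_outcome(PURPOSE_WEIGHTS, OUTCOME_SUPPORT)
# A lesser sanction that still serves safety scores highest under these weights.
```

The weighting reflects the text's heuristic: safety dominates, but when a milder outcome satisfies safety nearly as well while also serving the stakeholder's purpose, the combined score favors it.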
- Semantic Fusion and Graph Update: After reasoning, we perform a fusion of the graphs: effectively updating both the content and stakeholder graphs to reflect a common understanding post-reasoning. If the process added a rule the stakeholder pointed out, that becomes part of the content knowledge graph (perhaps with a note that the rule was considered and found applicable or not applicable). If a stakeholder’s misconception was corrected, the stakeholder graph (conceptually, the stakeholder’s understanding) is aligned to the content. Of course, in reality, the stakeholder might not agree, but the model can at least represent what the adjudicator believes the stakeholder ought to understand after explanation. The fusion yields an integrated DIKWP representation of the case where, ideally, all remaining differences are in the purpose layer, if any (like simply differing goals that cannot both be fully achieved—at least that tension is explicit).
- Decision Output and Explanation: Finally, the outcome of the process is a decision or recommendation (e.g., “License revocation is upheld, but with conditions” or “Revocation overturned, replaced by fine”). Because our reasoning occurred in the DIKWP semantic space, we can generate an explanation trace: start from purpose nodes (the decision’s justification in terms of purpose), and follow how that purpose is supported by certain wisdom/principles, which in turn relate to knowledge (specific laws or facts of case), down to the data that are critical. This trace can be presented in natural language as an explanation. For instance, “The decision to [Outcome] was made to fulfill the purpose of [Public Health Protection] (Purpose). In reaching this decision, the adjudicator considered the principle of [Proportional Enforcement] (Wisdom) and concluded that, under Regulation Y (Knowledge), although [three violations occurred] (Information from Data), they were all minor and promptly corrected (Wisdom balancing, as advocated by the appellant’s perspective). Therefore, a lesser sanction achieves compliance without undermining health goals (alignment of Purpose).” This corresponds to an explanation path that can be marked on the DIKWP graph. Indeed, an advantage of our approach is that the explanation is essentially a walk through the graph from data up to purpose, which is inherently interpretable.
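The explanation trace can be sketched as a walk over justification links from a purpose node down to supporting data; the graph below is a toy fragment with assumed node names:

```python
# Toy justification graph: each node lists the nodes that support it,
# running from Purpose down through Wisdom, Knowledge, Information, to Data.

SUPPORTS = {
    "P:protect_public_health": ["W:proportional_enforcement"],
    "W:proportional_enforcement": ["K:regulation_Y"],
    "K:regulation_Y": ["I:three_violations_occurred"],
    "I:three_violations_occurred": ["D:inspection_record"],
}

def explanation_path(start, supports):
    """Walk the first support link at each layer to produce a trace."""
    path = [start]
    while path[-1] in supports:
        path.append(supports[path[-1]][0])
    return path

trace = explanation_path("P:protect_public_health", SUPPORTS)
# The trace is the Purpose -> Wisdom -> Knowledge -> Information -> Data walk
# that a natural-language explanation can then verbalize.
```

Because the trace is just a path in the graph, each sentence of the generated explanation can cite the specific node it came from, which is what makes the explanation inherently inspectable.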
4. Case Study: Administrative Law Scenario
- Data: inspection reports, number and type of violations, compliance history.
- Information: whether “three violations in a year” is abnormal, whether these count as “serious” or “repeated” under the regulation.
- Knowledge: the text of Regulation 5.4; any city guidelines on enforcement; past cases if available.
- Wisdom: principles like public health protection, fairness to small businesses, deterrence vs. education in enforcement.
- Purpose: the purpose of the health code (prevent foodborne illness) and the purpose of the business (to operate safely and profitably), and the judicial purpose of reaching a fair resolution.
4.1. DIKWP (Content) Graph Construction for the Case
- D1: “3 inspections in last 12 months for GoodFood Bistro” (with dates Jan 10, Jun 5, Dec 1).
- D2: “Inspection on Dec 1 found 3 violations” (with details such as Violation A: hot soup at 50 °C (below required 60 °C); Violation B: slicer not fully sanitized; Violation C: some logs incomplete).
- D3: “Previous inspections also had violations” (maybe Jan 10 had 2 minor violations; Jun 5 had 1 moderate violation; data can be each count and type).
- D4: “Notice of revocation issued Dec 5 citing Regulation 5.4.”
- D5: The text of Regulation 5.4 (or relevant excerpt).
- D6: Any known policy memo or guideline (suppose there is a Health Dept Guideline that says, “Enforcement actions: 1st time minor violations = warning, repeated serious violations = suspension/revocation”).
- D7: (If accessible) outcome data from similar cases (e.g., maybe a reference that 5 other restaurants had licenses suspended in the last year for repeated violations).
- D8: “No reported foodborne illness incidents at GoodFood Bistro in last year.”
- D9: “GoodFood Bistro’s owner submitted correction proof within 2 days after each inspection.”
- These are all factual pieces, many of which appear in documents (inspection reports, the notice, possibly internal records).
- Edges in E_D might link, for example, each violation detail to the date of inspection.
- I1: “Three violations were found in the last inspection” (a simple restatement, but important as a summary).
- I2: “This is the third consecutive inspection with violations”—highlights a repeated pattern.
- I3: “Number of violations in last year = 3 + 2 + 1 = 6 total; number of inspections with any violations = 3/3 (100%)”—quantifies repetition rate.
- I4: “All three violations on Dec 1 were categorized as ‘serious’ by inspector”—if the inspector or code classifies them (assuming the code or inspector did mark severity).
- I5: “Violations corrected immediately”—from D9, the fact that corrections were made promptly, meaning issues were resolved.
- I6: “No illnesses occurred”—from D8, implies harm was potential, not actual.
- I7: “Policy says repeated serious violations may justify revocation”—gleaned from D6 possibly.
- I8: “Policy suggests first-time minor issues get warning”—also from D6.
- I9: “GoodFood Bistro has 5-year operation history” (possibly gleaned from context; since it is not mentioned explicitly in the provided data, include it only if it appears in the record).
- D2, D3 → I1, I2, I3: the raw counts yield those info points.
- D2 → I4: the inspector’s categorization is typically part of D2 details.
- D9 → I5 and D8 → I6: prompt correction and absence of harm yield those info points.
- D6 → I7, I8: reading the policy memo yields those info guidelines.
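These data-to-information derivations amount to an explicit m_DI mapping; a brief sketch using the counts from the data nodes above (key names are illustrative):

```python
# Deriving information nodes from data nodes (the m_DI mapping), using the
# violation counts given in the case's data layer.

data = {
    "D2_dec1_violation_count": 3,         # Dec 1 inspection
    "D3_prior_violation_counts": [2, 1],  # Jan 10 and Jun 5 inspections
    "D8_illness_reports": 0,
}

info = {
    "I1_last_inspection_count": data["D2_dec1_violation_count"],
    "I2_consecutive_inspections_with_violations": 3,
    "I3_total_violations": data["D2_dec1_violation_count"]
                           + sum(data["D3_prior_violation_counts"]),
    "I6_no_harm": data["D8_illness_reports"] == 0,
}
# I3 reproduces the section's tally: 3 + 2 + 1 = 6 total violations.
```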
- K1: Regulation 5.4 (Revocation rule): likely structured as “IF (establishment has serious or repeated violations) THEN (agency may suspend/revoke license).”
- K2: Definition of “serious violation” (maybe in code, e.g., any violation that poses immediate health hazard, like improper temperature).
- K3: Definition of “repeated violations” (e.g., violations in 3 consecutive inspections might qualify).
- K4: Agency’s enforcement guideline (if D6 is formal, make it a knowledge node: “Guideline: 1st minor → warning, repeated serious → revoke”).
- K5: Administrative law principle: “Agency has discretion in enforcement actions” (like a general knowledge that revocation is discretionary, not automatic).
- K6: Procedural rule: “Licensee has right to appeal” (for completeness, but less substantive to outcome).
- K7: Precedent cases or past decisions (if any, though maybe not in record).
- K8: The concept of “license revocation” itself as an action/outcome node.
- I4 (serious violations present) triggers knowledge K2 (the definition under which each of the three may qualify as serious). It also triggers part of K1’s condition (“serious violations present”).
- I2/I3 (repeated pattern) triggers knowledge K3 (definition of repeated: clearly yes, repeated).
- So the conditions for Regulation 5.4 (serious or repeated) are satisfied. That would allow the conclusion “may revoke license” to be activated.
- I7, I8 connect to K4 (policy guideline).
- If K4 guideline exists, it might conflict or interplay with K1: K4 says first-time minor → warning (not exactly our case, since not first time), but implies maybe a progressive enforcement concept.
- K5 (discretion principle) is background knowledge connecting to K1’s “may” (not mandatory).
- K8 (license revocation outcome) might be considered a knowledge node or could be considered in wisdom/purpose as well, but we treat it as the specific action knowledge node that is the result of applying K1.
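The triggering structure above reduces to checking Regulation 5.4’s disjunctive condition against the information layer; a sketch (node names mirror the labels used here, and the boolean encoding is an assumption):

```python
# Checking Regulation 5.4's condition ("serious OR repeated") against the
# information layer; booleans encode whether each info node is established.

info = {
    "I2_repeated_pattern": True,  # third consecutive inspection with violations
    "I4_marked_serious": True,    # inspector categorized the violations as serious
}

def regulation_5_4_applies(info):
    serious = info.get("I4_marked_serious", False)     # via definition K2
    repeated = info.get("I2_repeated_pattern", False)  # via definition K3
    return serious or repeated                         # K1's disjunctive condition

may_revoke = regulation_5_4_applies(info)
# Note: satisfying the condition only authorizes revocation ("may", per K5);
# it does not mandate it.
```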
- W1: “Protect public health”—a core principle behind health regulations.
- W2: “Enforcement should ensure compliance”—a principle guiding why to punish (to induce compliance).
- W3: “Proportionality”—enforcement actions should be proportional to the violation severity (perhaps not explicitly stated by the agency, but a general legal principle; some agencies, however, follow “zero tolerance”, which is the opposite of proportionality).
- W4: “Consistency and deterrence”—ensure consistent application to deter others (agency principle possibly).
- W5: Perhaps “Support local business while ensuring safety”—some balance principle if present in policy rhetoric (if not, agency might not consider this).
- W6: “Due process/fairness”—a generic principle in any adjudication (though might be more on judge’s side).
- The agency likely prioritizes W1, W2, and W4. The ALJ (adjudicator) inherently will consider W3 and W6 as well.
- Regulation 5.4 exists to serve W1 (public health) and W2 (compliance).
- We can link K1 → W1, W2.
- The guideline K4 implies a principle of proportional response: link K4 → W3 (since it literally distinguishes actions by severity).
- K4 also reflects W2 (compliance, because warnings escalate to stronger measures if not complied).
- K5 (discretion) links to W3 or W6, as it allows judgment.
- If a zero tolerance stance was present, that would link to W4 (deterrence).
- We include W3 proportionality because the presence of a guideline implies someone considered it, but it might be contested as applied here.
- P1: “Prevent foodborne illness and protect public health” (the statutory purpose of health regulations).
- P2: “Ensure sanitary conditions in food establishments” (more specific version of P1, or part of P1).
- P3: “Uphold rule of law/ regulatory compliance” (a general purpose of having enforcement).
- P4: “Fair and orderly administration” (may be the purpose of the appeals system).
- Possibly P5: “Economic vitality of community” (some cities have this as a general goal, but probably not part of the health department’s mandate, so maybe not explicitly).
- The adjudicative body might also have purpose “deliver a just outcome” (could include under P4 or separate).
- Connect wisdom to purpose:
  – W1 (public health) directly serves P1.
  – W2 (ensure compliance) serves P1 and P3 (compliance as an intermediate to health).
  – W4 (consistency/deterrence) serves P3 (rule of law).
  – W3 (proportionality) serves both P1 (overly lenient enforcement might fail to protect health, while overly harsh enforcement exceeds what health requires and conflicts with justice) and P4 (fair administration).
  – W6 (fairness/due process) serves P4 and, arguably, the broader societal purpose of justice.
4.2. DIKWP (Stakeholder) Graph Construction (Restaurant Owner’s Perspective)
- D1’: (Corresponds to D1) Alice is aware that three inspections occurred during the past year, although she may frame this fact differently (e.g., “My establishment has been regularly inspected”).
- D2’: Alice acknowledges that the inspection on December 1 identified certain issues, though she disputes some details. Nevertheless, she addressed the issues promptly, indicating recognition of their existence.
- Alice may provide additional contextual information: “All compliance violations were rectified on-site” (while this appears as a mere factual statement in the complaint records, for Alice, it serves as documented evidence of her personal corrective actions).
- D9’: Alice may provide/submit the following evidence: receipts for new thermometers purchased after, cleaning logs, etc.
- Additional data she might bring: “No customer ever complained or got sick at my place” (though content had that too).
- Personal data: “I have run this restaurant for 5 years” (if relevant).
- “This is my livelihood; 10 employees work here” (impact data).
- “I passed all prior inspections until this year” (maybe she had good record before).
- Therefore, Alice’s data nodes (V'_D) overlap substantially with the content data but explicitly include information about business impact.
- I1’: “Issues were minor”—Alice categorizes them as minor due to the absence of immediate danger from her perspective.
- I2’: “I corrected everything immediately”—highlighting responsiveness.
- I3’: “No harm resulted (no one sick)”—as a point why it is minor.
- I4’: “I’ve improved practices since”—suggesting enhanced compliance efforts.
- I5’: “Past good record (only this year had violations)”—Alice would likely cite this if factually accurate.
- I6’: “Punishment (revocation) is extreme compared to the violations”—this represents Alice’s interpretation/key point of contention: essentially an assessment of the relative relationship between the violations and the severity of punishment.
- I7’: Alice might argue, “Other restaurants usually just get fines for similar issues.”
- I8’: “I was not given a warning or chance before this action”—process-related info difference.
- These map from her data:
  – Correction and no harm: D9’ → I2’, I3’.
  – Minor vs. serious: Alice may define “minor” based on examples such as “the soup was only slightly below the required temperature but was immediately reheated” or “the slicer was sanitized on the spot”—in her view, these issues are minor because they can be quickly rectified. Under her interpretation, the details of D2 ultimately lead to the I1’ conclusion.
  – Impact on business: D(impact) → I6’ (the punishment is extreme because its effect is closing the business).
  – Comparable cases: Alice’s awareness of comparable enforcement cases may lead to assertion I7’ regarding typical penalties applied to similar violations.
  – Lack of warning: her experience that she never received a formal warning letter prior yields I8’.
- K1’: Alice may not know specific regulation numbers but is familiar with the general concept of “health code violations” and the need for corrective action in certain circumstances. She might have incomplete knowledge of Regulation 5.4.
- K2’: Alice believes, “If violations are corrected and don’t involve critical items, you normally get a chance to rectify them rather than having your license revoked directly.” (This essentially serves as her personal adjudication principle—whether codified or not. This understanding may stem from industry-wide expectations, or perhaps from compliance advice given by inspectors, such as when they have provided guidance on how to make improvements.)
- K3’: Alice may have knowledge of specific regulatory provisions, such as requirements for immediate closure only under conditions of imminent health hazards.
- K4’: Alice is aware of the appeals process (since she is currently filing an appeal).
- K5’: Alice may invoke “small business protection policies” or seek leniency citing pandemic impacts (provided such government policy inclinations actually exist).
- K6’: Alice is likely fully aware of all the factual violations (such as which specific provisions were breached).
- K7’: Alice may have sought advice and been instructed to cite specific precedents or standards (though this remains uncertain in the current context).
- For mapping: K2’ is basically the knowledge that matches content’s guideline K4, or at least similar.
- If K3’ (the imminent-hazard rule) exists in law, it would align with something in the content graph that is perhaps not explicitly mentioned. If not, it is a misunderstanding or a half-truth (some jurisdictions do have such a rule).
- Therefore, V'_K includes: “health code not intended to shutter a business for minor things”, “I should have gotten a warning first”, “I fixed everything, so compliance was achieved”, “I have the right to appeal” (aligning with content’s K6), and “others get fines” (implying a consistency standard).
- K2’ (the “chance to correct” rule) aligns with content’s policy K4 (which indeed prescribes a warning for first-time minor issues, though not for repeated serious violations).
- If Alice explicitly cites a specific regulatory provision, or if she was told that “imminent danger is required to justify immediate closure”, this may correspond to an unstated principle or regulation (unless the cited provision actually contains such stipulation).
- Alignment will be identified during the subsequent mapping stage.
- W1’: “Fairness”—Alice perceives the enforcement action as disproportionate to the violations committed.
- W2’: “Second chance/forgiveness”—this forms the foundational rationale for Alice’s argument that a prior warning should be issued.
- W3’: “My dedication to safety”—Alice may also emphasize her commitment to public safety (value).
- W4’: “Hardship to employees/community”—a moral point, closing hurts innocent parties (the employees, customers losing a beloved place).
- W5’: Potentially includes the perception of being targeted or treated disproportionately, representing a concern regarding inconsistency or unfair treatment.
- These are more emotional/moral, but in formal terms, fairness and proportionality align with content’s wisdom W3, W6.
- The hardship argument appeals to external principles of equity, which certain jurisdictions permit administrative adjudicators to consider.
- W5’ is a hint at inconsistency (if Alice suspects other violators were not subjected to equal treatment, it raises issues of fairness/consistency principles).
- The mapping thus proceeds as follows:
- Fairness (W1’) aligns with content’s W3 (proportionality) and W6 (due process).
- Second chance (W2’) is an element of fairness and also aligns with W3.
- Dedication to safety (W3’) interestingly aligns with W1 (public health); Alice is essentially asserting “I share the purpose, I’m not a bad actor”, thereby resonating with the law’s ultimate purpose.
- Hardship (W4’) might align with a general principle of equity or could remain a stakeholder-only concern (though one could tie it to public interest in economic vitality).
- Consistency/harshness (W5’) aligns with content W4, if Alice implies inconsistent enforcement.
- P1’: “Keep my restaurant open (retain license).”—immediate practical goal.
- P2’: “Maintain my livelihood and my employees’ jobs.”—underlying purpose.
- P3’: “Serve safe food to community.”—Alice genuinely prioritizes consumer protection; when explicitly articulated, this demonstrates alignment with legislative intent (to strengthen her position, she might affirm, “Of course I want safety too, I’ve always complied as best as I can.”).
- P4’: “Be treated fairly and with respect by authorities.”—a more abstract goal, but it is something stakeholders often want (acknowledgment of fairness).
- P5’: “Avoid closure-induced community impact” (like some restaurants say “we contribute to community, closure hurts more than helps”).
- Among Alice’s objectives, P1’ directly conflicts with the agency’s immediate enforcement objective; however, her objective P3’ positively aligns with the overarching public health purpose of the applicable regulations.
- Map:
- P3’ (serve safe food) aligns with content P1 (protect public health).
- P1’ (keep open) does not align with any content purpose, except perhaps indirectly if one posits something like “encourage business compliance without needless closure”; but that is not explicit in the content graph.
- P2’ and P5’ (livelihood, community) are not contemplated among the health code’s purposes, so they likely do not align with any content purpose (they are external interests).
- P4’ (treated fairly) aligns with content P4 (fair administration).
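To make the purpose-level mapping concrete, here is a minimal sketch (illustrative labels only, not the paper's implementation) of how the stakeholder-to-content alignment, and the detection of unmatched external purposes, could be represented:

```python
# Sketch of the purpose-level mapping for the case study; the purpose
# identifiers and alignments are hypothetical stand-ins from the text.
content_purposes = {"P1": "protect public health",
                    "P4": "fair administration"}

stakeholder_purposes = {"P1'": "keep restaurant open",
                        "P2'": "maintain livelihood",
                        "P3'": "serve safe food",
                        "P4'": "be treated fairly",
                        "P5'": "avoid community impact"}

# Alignments identified during the mapping stage.
purpose_map = {"P3'": "P1",   # serve safe food -> protect public health
               "P4'": "P4"}   # treated fairly  -> fair administration

# Stakeholder purposes with no content counterpart are external interests
# that the adjudicator must either address explicitly or flag as outside
# the statute's goals.
unmatched = [p for p in stakeholder_purposes if p not in purpose_map]
print(sorted(unmatched))
```

Unmatched purposes such as P2’ (livelihood) are exactly the external interests that the subsequent integration step must surface rather than silently drop.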
4.3. Bidirectional Mapping and Semantic Integration
- Inspection facts: She acknowledges three violations on the last inspection (so content D2 and stakeholder D2’ align, except perhaps in how the violations are classified).
- The existence of prior violations: The content graph records issues in the January and June inspections; she says she had a good record until this year. Perhaps she admits to those too but considers them minor. Let us assume she does not deny them, she just downplays them. So the number of violations aligns, but their significance differs.
- Data on no illnesses: Both agree none occurred.
- Correction data: The content graph does not explicitly list this, but we gleaned it from context; she states it explicitly. We align these, since it is factual that she did fix the violations (perhaps the inspector’s report even notes “corrected on site”, in which case it was content data too).
- Regulation text: Content has it; stakeholder might not have read the text, but she knows the gist. There might not be a direct node, except her knowledge K2’ implicitly references it.
- She has data on impact (employees, etc.) not in content (agency likely did not consider that because it is legally not relevant to health). That remains a stakeholder-only data point.
- “Violations serious vs minor”: Conflict. Content info I4 says serious; stakeholder I1’ says minor. This is a key conflict. To resolve it, we need to see how “serious” is defined. If the regulation designates certain violations as critical, perhaps at least one of hers (food temperature out of safe range) is indeed a critical violation under the code (which would justify calling it serious). If the others are less severe, nuance is needed. The inspector may have labeled all of them as serious, but perhaps one was critical and the others moderate. So who is right? Both may be partially right. We will likely find that at least some violations were legitimately serious (for instance, improper temperature can cause illness).
- “Repeated violations vs one-time”—Content sees pattern (I2: each inspection had something). Stakeholder might emphasize they were all different issues and minor (maybe she thinks they do not count as “repeated same violation”). There is an interpretation difference: if “repeated” means a repeat of the same issue or just any issues repeatedly. Regulation likely means any recurring issues count. That is a knowledge nuance. She might have thought repeated means “I keep doing same wrong thing, which I didn’t”.
- “Punishment extreme or not”: Stakeholder info I6’ explicitly says the sanction is disproportionate; the content graph has no node explicitly assessing the severity of the punishment, but the content-side expectation is that revocation is justified because of serious threats. This is more evaluative information, which ties into the wisdom-level conflict.
- “Others get fines”: May be true and content might have no info on others. If she is right, then there is an inconsistency. If she is wrong or her case is worse, clarification is needed. Content may not have considered others because each case is separate, but to her, it is inconsistent. This can be touched on in wisdom (consistency).
- “No warning given”: Indeed, content did not mention any prior formal warning, just immediate move to revoke after the third time. Possibly, the policy K4 suggests a warning on first minor, but maybe the agency considered these not minor. Conflict: she expected progressive discipline, agency acted swiftly. We will see that in knowledge/wisdom.
- Regulation 5.4 (K1 vs. K1’): She may not contest it exists but contest interpretation. She probably does not deny the rule “may revoke for serious/repeated”; she might just argue her case did not meet that threshold (contrary to agency view). So knowledge node exists, but triggered condition is disputed. This is tied to “serious/repeated” definitions (K2, K3 vs. her understanding).
- Policy guideline (K4 vs. K2’): Good alignment—her belief “I should have gotten a warning for first issues” is basically what is in policy. So both have a concept of progressive enforcement. Likely she is invoking that guideline exactly. So K4 in content and K2’ in stakeholder align strongly.
- But the conflict is that the agency might argue that guideline does not apply if violations are serious enough, or that she already had multiple chances (since three inspections). She may think she never received an official warning letter (maybe she received inspection reports though).
- Discretion (K5): She might not articulate it explicitly, but her argument implies “they had a choice to not revoke, they should have used it”.
- She might bring in other knowledge, such as local laws about hearing procedures, but that does not change the outcome directly.
- Fairness/Proportionality (W3 content, W1’ stakeholder): Aligned in concept. Both would agree in principle punishment should fit crime. The conflict is whether that principle is being followed here or not. Agency might say “we are proportional because repeated serious issues warrant revocation.” She says “this is not proportional because issues were minor”. So they share the principle but disagree on fact classification under it. So W3 maps to W1’ (and W2’ second chance).
- Public safety (W1 vs. W3’): Aligned—she cares too, at least claims. That is a positive alignment: both ultimately want safe food. This is good for compromise scenario because one can argue ensuring compliance (safety) without closure might achieve both purposes.
- Strict enforcement vs. leniency (W4 vs. W2’): Conflict. Agency might lean to deterrence/strictness, but she wants leniency. This needs reconciliation. Possibly judge will lean that since no actual harm, leniency is okay while still ensuring compliance.
- Hardship principle (W4’): Agency had no node for considering economic impact (not their mandate). The judge might consider it indirectly as part of fairness but health law often does not weigh that explicitly. However, in equitable discretion, a judge could consider it in deciding remedy. It is not a mapped alignment, it is an extra concern. We might see it as connecting to fairness too (it is unfair to destroy a business if not necessary).
- Due process (W6 vs. P4’ fairness goal): They align conceptually. She wants fair treatment; law wants fair process.
- P1 (public health) vs. P3’ (serve safe food): Aligned. Both sides share that.
- P3 (compliance/rule of law) vs. her P1’ (keep license): Direct tension, because compliance from agency view might mean penalizing violators (to uphold rules), whereas her goal is to avoid penalty. However, these can be balanced if compliance can be achieved in another way. Perhaps by imposing strict conditions or monitoring rather than closure, one can satisfy compliance while allowing her to operate.
- P4 (fair process) vs. P4’ (treated fairly): Aligned.
- Her P2’ (livelihood) vs. no corresponding content purpose: This is an external interest, but a judge might consider public interest in not unnecessarily harming livelihoods. Not in health dept’s goals, but the appeal judge might weigh it generally as part of justice.
- In conclusion, there is a potential solution if one can find an outcome that fulfills both sides’ purposes: maintain public health (by ensuring she fixes issues, maybe a probation period) and allow her to continue business (serves her purpose). That compromise outcome could be as follows: instead of revocation, impose a short suspension and fine, require training, with warning that next time it is revocation for sure. That would align with proportionate enforcement principle and still uphold law’s purpose.
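The compromise reasoning above can be viewed as a small multi-objective selection problem. The following sketch is purely illustrative: the purpose weights and satisfaction scores are hypothetical stand-ins, not values taken from the case record:

```python
# Hypothetical purpose-driven outcome selection: candidate outcomes are
# scored by weighted satisfaction of both sides' purposes. The weights
# reflect the assumption that public health (content P1) is primary.
PURPOSE_WEIGHTS = {"public_health": 0.5,       # content P1
                   "fair_process": 0.3,        # content P4
                   "business_survival": 0.2}   # stakeholder P1'/P2'

CANDIDATES = {
    "revoke license":
        {"public_health": 1.0, "fair_process": 0.3, "business_survival": 0.0},
    "suspension + fine + probation":
        {"public_health": 0.9, "fair_process": 0.9, "business_survival": 0.8},
    "warning only":
        {"public_health": 0.4, "fair_process": 0.7, "business_survival": 1.0},
}

def score(outcome: str) -> float:
    """Weighted sum of purpose satisfaction for one candidate outcome."""
    return sum(w * CANDIDATES[outcome][p] for p, w in PURPOSE_WEIGHTS.items())

best = max(CANDIDATES, key=score)
print(best)  # the conditional suspension dominates under these weights
```

Under these (assumed) weights, the conditional sanction outscores both extremes, mirroring the compromise the text proposes; a real system would need principled, auditable weights rather than these illustrative ones.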
- Data conflicts: Not much—facts are mainly agreed, just interpreted differently.
- Info conflicts: Resolved by referencing code definitions. Suppose the code defines “critical violation” as something causing imminent risk (like temperature violation might qualify as critical because it can cause illness if not fixed). The equipment cleanliness might be moderate, record-keeping minor. If inspector labeled all as “serious”, maybe they have categories: critical vs. general violation. It could be that any critical violation at an inspection escalates enforcement.
- The adjudicator might parse one critical (soup temperature), which was corrected immediately, and two lesser ones. So “serious” might technically apply because a critical violation was found; thus, an inspection with a critical violation is considered serious overall.
- She called them minor because in effect nothing bad happened and they were fixed. It is a perspective difference. The judge would likely accept the code’s classification (so yes, a critical violation is serious by definition), but also note that it was swiftly mitigated.
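Assuming, as hypothesized above, that the code distinguishes critical, moderate, and minor violations and treats an inspection as "serious" when it contains at least one critical violation, the reconciliation can be sketched as:

```python
# Illustrative severity classification; the categories and assignments are
# assumptions from the scenario, not an actual health code.
SEVERITY = {"food temperature out of safe range": "critical",
            "equipment cleanliness": "moderate",
            "record-keeping": "minor"}

def inspection_is_serious(violations: list[str]) -> bool:
    """An inspection counts as serious if any violation is critical."""
    return any(SEVERITY.get(v) == "critical" for v in violations)

last_inspection = ["food temperature out of safe range",
                   "equipment cleanliness",
                   "record-keeping"]
# The code's label applies even though two of three items were lesser ones.
print(inspection_is_serious(last_inspection))
```

This makes the conflict resolution explicit: the stakeholder's "minor" reading fails at the definitional level, while her mitigation claim survives as separate information.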
- Knowledge reasoning:
- Regulation 5.4 conditions are met: repeated violations (three inspections in a row had issues). So legally, the agency may revoke.
- The policy guideline, though, prescribes progressive discipline as the usual course. Did the agency skip a step? Possibly; the earlier inspections should have triggered something like a warning letter. If the agency did not formally warn, that might weigh in her favor (it jumped to revocation without a formal intermediate sanction).
- But perhaps they gave verbal warnings in each report. If the formal policy was not followed (e.g., a written warning should have issued after the second inspection but did not), she can argue the procedure was not followed.
- The judge sees that the agency had discretion. The guideline suggests revocation is typically for severe repeated issues posing real risk. In this case, while the violations were repeated, no actual harm occurred and she showed willingness to correct. The principle of proportionality suggests considering a lesser penalty that still ensures compliance (such as a heavy fine, mandated training, and frequent re-inspections).
- Wisdom resolution:
- Public health vs. fairness: Both need to be satisfied. The judge likely thinks public health can be protected if the restaurant fixes issues and is monitored; fairness suggests not destroying the business for first-time (in a year) compliance troubles.
- Strictness vs. leniency: Based on no harm and improvements, lean towards leniency but with caution.
- The judge perhaps also considers deterrence: Imposing no penalty could lead to lax compliance among other practitioners. Thus, a fine or short-term suspension can serve as an effective deterrent without being as disproportionately severe as permanent license revocation.
- Hardship: An unnecessary closure hurts livelihood—that principle, while not in law, could be implicitly considered under fairness.
- Purpose alignment:
- Find an outcome that satisfies P1 (health) and as much of P1’ (keep open) as possible.
- Perhaps a compromise: the license is reinstated conditionally. Outcome: overturn revocation; instead impose a one-week suspension and a USD X fine, require proof of corrective measures, and stipulate that any future serious violation will result in immediate revocation with no further appeal. This holds her accountable (serving compliance and deterrence) but gives her a chance (aligning with second chance and business survival).
- That outcome is not directly listed as an option in the content knowledge except as a matter of discretion; but since “may revoke” implies the agency may also choose a lesser sanction, the judge can decide that a lesser sanction is sufficient.
- Purpose: The judge’s decision is driven by protecting public health (P1) and ensuring fair enforcement (P4). The solution aims to secure safety without an unduly harsh outcome.
- Wisdom: The principle of proportional enforcement (W3) guided the outcome, balancing strict compliance (W4) with fairness/second chance (W2’). The judge recognized that immediate revocation, while legally permissible, was not strictly necessary to achieve compliance given the owner’s demonstrated cooperation. Instead, a conditional approach was taken, aligning with enforcement guidelines that emphasize graduated responses.
- Knowledge: Under Regulation 5.4 (K1), the agency had discretion to revoke for repeated serious violations. However, the agency’s own guideline (K4) indicates lesser measures for initial or minor infractions. The judge noted that GoodFood Bistro’s violations, though technically “serious” under the code (improper temperature being a critical issue), resulted in no harm and were promptly corrected (information from the record). Furthermore, the prior inspections, while not perfect, did not lead to any formal warning or intermediate sanction, which the guideline would have suggested. This context invokes the knowledge that enforcement discretion (K5) should be exercised in line with both the letter and spirit of the law. The judge thus chose to exercise discretion by imposing a penalty short of revocation, which is still within the scope of the regulation (“may suspend or revoke” implicitly allows lesser penalties).
- Information: Key information considered includes that all violations were remedied on-site and no illness occurred (content I5, I6, stakeholder I2’, I3’ align on this) and that the establishment had never before faced such an enforcement action (stakeholder I5’: first revocation threat). This suggested the situation was not an egregious, willful flouting of rules but rather compliance issues that could be corrected. The pattern of repeated violations (content I2, I3) was acknowledged, but the nature of those violations (stakeholder’s perspective that they were minor, content’s evidence that at least one was critical but mitigated) led to an interpretation that the restaurant was not hopelessly negligent, just in need of improvement.
- Data: The factual record supporting the decision included the inspection reports (violations details, D2), the timeline of inspections (D1), and evidence of corrective action (D9, e.g., receipts or proof of training) provided during the appeal. No contrary factual evidence was presented by the agency beyond what was in the reports, and the owner’s facts (like no illnesses, corrections made) were not disputed. Thus, the data points used in reasoning were largely agreed upon, which allowed the dispute to center on their meaning and implications rather than what happened.
5. Evaluation
5.1. Conceptual Coverage and Rigor
- Data nodes provide evidence;
- This leads to certain information nodes (findings of fact);
- These trigger knowledge nodes (applicable rules);
- These, under the influence of wisdom nodes (principles), produce certain conclusions (like an outcome node or decision);
- This is justified by purpose nodes (goals achieved by that conclusion).
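The D→I→K→W→P chain in the bullets above can be sketched as a simple graph traversal; the node labels below are hypothetical stand-ins for the case-study graphs:

```python
# Minimal sketch of the layered inference chain: each edge carries a
# conclusion one layer upward, from raw data to the justifying purpose.
edges = {
    "D: inspection report entry": ["I: critical violation found"],
    "I: critical violation found": ["K: Regulation 5.4 condition met"],
    "K: Regulation 5.4 condition met": ["W: proportionality principle"],
    "W: proportionality principle": ["P: fair, health-protecting outcome"],
}

def trace(node: str, chain=None) -> list[str]:
    """Follow the D -> I -> K -> W -> P chain from a starting node."""
    chain = (chain or []) + [node]
    for nxt in edges.get(node, []):
        return trace(nxt, chain)
    return chain  # reached a purpose node (no outgoing edge)

chain = trace("D: inspection report entry")
print(" -> ".join(chain))
```

A decision justified this way is auditable layer by layer: removing any edge breaks the chain, which is exactly the property the evaluation relies on.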
- Substantive law (through knowledge of regulations);
- Evidence/facts (data and information);
- Procedural or discretionary aspects (knowledge of guidelines, principles in wisdom);
- Normative goals (purpose).
5.2. Explainability
- Understanding: She hears that the judge recognized her efforts and situation (embedding her perspective in the explanation), and also made clear what she must do going forward and why (embedding law’s perspective). This kind of explanation has been argued to improve perceived fairness and trust. Psychological research on XAI (e.g., Derek Leben 2023 [41]) suggests people accept decisions better when explanations connect to fairness evidence, which in our explanation, they do (we gave reasons tied to principles of fairness and public good, not just rule citations).
- Transparency: Every factor that influenced the outcome is visible: the rule, the guidelines, the corrections, etc. If something was incorrect or disputed, it could be challenged. For example, if the agency disagrees and appeals, they can see exactly that the judge’s reasoning hinged on an interpretation of policy and fairness; they could counter-argue if needed in those terms (maybe “the judge gave too much weight to economic hardship, which is not a legally valid factor under the statute”—at least the debate is now on clear terms).
- Completeness: No obvious aspect of the case is left unaddressed. Often parties feel frustrated if a judgment does not mention something they raised. Our model by mapping stakeholder input tries to ensure all major points are either aligned or explicitly resolved. In the scenario, the owner’s points (we fixed it, others received warnings, it is too harsh) were all touched in the explanation. This thoroughness improves the quality of legal justification.
5.3. Fairness Implications
- In cases where strict rule application yields an intuitively unjust outcome (like extremely harsh for minor fault), our system is likely to soften it via purpose-driven reasoning (because the mismatch between outcome and purpose would be detected).
- Conversely, in cases where leniency would undermine the law’s purpose (like a stakeholder asks for no penalty but there is significant risk), the system will identify that stakeholder purpose conflicts irreconcilably with the law’s purpose, and thus justify a strict outcome, but crucially, it will explain it (which contributes to fairness by at least acknowledging the loser’s position before rejecting it).
5.4. Generality and Scalability
- Criminal law: DIKWP could model evidence (data), legal elements of crimes (knowledge), mitigating/aggravating factors (wisdom), and the purposes of sentencing (purpose). The bidirectional mapping would ensure a defendant’s story is considered alongside the legal requirements. This could yield similar benefits in fairness (e.g., accounting for personal circumstances in sentencing while ensuring public safety).
- Civil litigation: Issues of liability and damages similarly involve facts, legal standards, and principles (like reasonableness, equity, deterrence vs. compensation goals). DIKWP can capture the policy purpose of tort law (compensation and deterrence) and ensure, say, that an award is in line with those and with the plaintiff’s harm (and not punishing beyond purpose).
- Contract or commercial cases: Here purpose might be party intentions or business norms, which DIKWP could incorporate to interpret the contract beyond literal text, aligning with doctrines like good faith (wisdom principle) and purpose of the contract.
- A rule-based expert system (1980s-style): It would have given a single recommendation (revoke) because conditions matched a rule, and it would not incorporate the additional semantic nuance we did. Thus, it would have failed to identify the fair solution. It also would explain minimally (“rule triggered”).
- A modern machine learning model (like a black box that learned from data): It might give some prediction (maybe it sees the pattern of not many revocations for such cases and predicts “no revoke”). But it would not explain why, nor ensure alignment with legal principles—which is dangerous (it might be right for the wrong reasons). Our approach provides an explicit rationale and can be audited, which a black box cannot.
- An argumentation model (like reason with pro/con arguments): This is closest in spirit; it would list arguments: “For revocation: repeated violations, risk to public. Against: corrections made, no harm, harsh outcome.” That is good, but argumentation frameworks often leave choosing the outcome to a meta-level preference ordering (like which argument wins). DIKWP provides ordering via purpose—it tells us which concerns are primary. So it is like argumentation with a built-in value analysis. In this case, purpose nodes (public health vs. fairness/economic survival) had to be balanced; our model achieved that balancing explicitly by finding a solution that satisfied the primary purpose without sacrificing the secondary more than necessary. An argumentation model would require the designer to input that one value outweighs another or find a compromise outside the formalism.
5.5. Limitations
- Knowledge Engineering Effort: Our method requires constructing and maintaining DIKWP graphs. This is knowledge-intensive. If performed manually for each case, it could be time-consuming. However, we envision partial automation using NLP for fact extraction and established legal ontologies (such as LKIF or LegalRuleML) as the knowledge base for laws. Such ontologies standardize the representation of legal rules, concepts, and relationships, which can be integrated with DIKWP graphs through semantic mapping. Over time, domain-specific DIKWP graphs could reuse these ontology-based representations, significantly reducing the manual effort for modeling new scenarios.
- Quality of Mapping: The benefits rely on a correct and thorough mapping between content and stakeholder semantics. If a stakeholder raises an issue that the system fails to map (perhaps because it is not in its ontology or it is an implicit cultural concern), the risk is the system might still ignore something. Our framework mitigates by design (looking for any unmatched nodes), but it is only as good as the ability to identify those nodes from inputs. Advanced NLP or structured input forms might be needed to capture stakeholder perspectives fully. In a practical setting, one might have the user explicitly input their points in a structured way to help the system map them.
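The "no unmatched node" safeguard described here can be sketched as a simple scan over the stakeholder graphs; the node identifiers below are illustrative:

```python
# Sketch of the unmatched-node check: any stakeholder node that fails to
# map to a content node is flagged so it is either explicitly resolved or
# recorded as an external concern, rather than silently ignored.
stakeholder_nodes = {"I": ["I1'", "I6'"],   # information layer
                     "K": ["K2'", "K3'"],   # knowledge layer
                     "W": ["W4'"]}          # wisdom layer

# None marks a stakeholder node with no content counterpart.
mapping = {"I1'": "I4", "I6'": None, "K2'": "K4", "K3'": None, "W4'": None}

flagged = [n for layer in stakeholder_nodes.values()
           for n in layer if mapping.get(n) is None]
print(flagged)  # these must be addressed in the explanation
```

The check is only as good as the extraction that populated `stakeholder_nodes`, which is precisely the limitation this bullet raises.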
- Resolution of Value Conflicts: In some cases, purposes will conflict with no obvious compromise (e.g., a stakeholder’s goal directly negates the law’s goal). In such cases, the model will basically show that conflict and the decision will favor one side (usually the law’s purpose, since that is the mandate). While the model will be transparent about it, the stakeholder might still feel unfairly treated if their purpose was totally unachievable. Our framework does not magically resolve all conflicts; it just clarifies them. However, even then, explanation helps (procedural justice).
- Judicial Acceptance: If this were used in a court or agency, would decision-makers accept the DIKWP model’s suggestions? This is more a social question. Many judges reason intuitively, not in a structured graph manner. Our approach might at first seem overkill or foreign. But perhaps as a decision-support tool, it could prompt judges to consider things they might otherwise overlook. Acceptance would grow if it demonstrably led to fewer appeals or higher satisfaction. We cannot fully evaluate that without deployment, but our case study hints that including more semantics leads to decisions that are less likely to be overturned (because they pre-emptively address equity considerations that appellate courts often impose when the first instance did not consider them).
5.6. Summary of Evaluation
- The framework is conceptually robust, encoding all relevant aspects of reasoning and allowing formal inference.
- Explanations derived from the framework are richer and aligned with what human stakeholders expect, likely improving trust in AI-assisted judgments.
- The framework actively engages with fairness by mapping stakeholder viewpoints and balancing them with legal requirements, rather than operating on law alone.
- While the overhead in building semantic models is significant, particularly given the known limitations of symbolic AI (such as brittleness and high knowledge maintenance costs), we envision mitigating this through partial automation using NLP, integration with standardized legal ontologies (e.g., LKIF or LegalRuleML), and modular reuse of knowledge structures. These measures aim to make implementation robust, scalable, and maintainable, thereby justifying the initial investment for critical, high-stakes judicial decisions.
6. Discussion
6.1. Enhancing Explainability and Transparency in Legal AI
- A factual narrative (data→information explanation);
- A legal rule application explanation (knowledge layer: citing statutes and conditions);
- A policy/principle explanation (wisdom layer: why that statute leads us to that result in view of principles);
- A teleological explanation (purpose layer: how the outcome serves justice or policy goals). This corresponds well to how human judges write opinions (often, there is a section on facts, a section on law, and sometimes a section on the broader implications or purposes, especially in higher courts). An AI that can work similarly would be more readily accepted in legal contexts.
6.2. Fairness, Bias, and “Semantic Justice”
6.3. Integration with AI Technologies
- Natural Language Processing (NLP): Laws and case files can be processed with NLP to populate the data and knowledge layers. There is ongoing work on converting legal texts to knowledge graphs. Our approach could plug into that: use an NLP pipeline to identify relevant statutes and facts and fill the initial DIKWP (Content) graph. Similarly, stakeholder statements (perhaps given in a hearing or written appeal) can be NLP-analyzed to extract their key arguments (we might use argument mining techniques to identify “claims” and “evidence” in their narrative).
- Large Language Models (LLMs): An LLM (like GPT-based models) could be used as a component to suggest semantic alignments. For example, if a stakeholder says “I think this punishment is too harsh,” an LLM could interpret that and find related concepts like “proportionality” or “leniency policy” in the context. It could effectively translate plain language into the formal nodes we have (a kind of semantic parsing). LLMs have vast knowledge, including likely knowledge of common law principles and even specific case precedents; they might assist in populating wisdom nodes (“Based on the context, principles of X might apply”). We have to be careful—LLMs sometimes hallucinate or err in legal specifics—but within a controlled system, they could be used to propose elements which a human or a smaller rule-checker can verify.
- Knowledge Graph Databases: Storing and querying DIKWP graphs would require a robust graph database. We might use RDF triple stores or property graph DBs (like Neo4j) to represent nodes and edges. This would allow querying like “find all knowledge nodes connected to this purpose” which is useful in reasoning (like finding laws that serve a given purpose).
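As a toy stand-in for such a graph query (a production system would use a triple store or a property-graph database, as noted), the query "find all knowledge nodes connected to this purpose" might look like:

```python
# Illustrative edge list in (source, relation, target) triple form; the
# node identifiers and "serves" relation are assumptions for this sketch.
edges = [("K1", "serves", "P1"),   # Regulation 5.4 -> public health
         ("K4", "serves", "P4"),   # warning guideline -> fair administration
         ("K5", "serves", "P1")]   # enforcement discretion -> public health

def knowledge_for(purpose: str) -> set[str]:
    """Return all knowledge nodes linked to a purpose via 'serves' edges."""
    return {src for src, rel, dst in edges
            if rel == "serves" and dst == purpose}

print(sorted(knowledge_for("P1")))
```

In Cypher or SPARQL the same question is a one-line pattern match; the point is that purpose-tagged edges make "which laws serve this goal" a first-class query.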
- Automated Reasoning Engines: Some parts of the graph (especially the knowledge layer with rules) could be fed into a logical reasoner (like a Prolog engine or a description logic reasoner) to derive conclusions. Meanwhile, optimization or decision analysis techniques could be applied at the wisdom/purpose level (e.g., multi-objective decision-making to find an outcome that maximizes purpose satisfaction). This multi-paradigm approach (logic + optimization) could operationalize the reasoning. Our pseudocode was high-level; an implementation might break it into a logical inference step and a value balancing step.
6.4. Toward a General Framework for Semantic Judicial AI
- Extensive Knowledge Base: A wide coverage of laws and regulations in DIKWP form, possibly a global legal knowledge graph with integrated purpose tags. This is challenging due to the volume of law, but starting domain by domain (like building one for administrative law, one for criminal, etc.) is feasible. Prior work on ontology and knowledge graphs in law is a foundation.
- Standardization: Perhaps a standardized schema for DIKWP in legal context, so that tools and researchers can share models.
- Validation on Real Cases: To generalize, we need to test on many cases. By encoding past landmark cases in DIKWP, we can check if the outcomes align and where the model might have predicted differently. If differences arise, that can show either a gap in the model or perhaps an inconsistency in jurisprudence itself.
- Evolution and Learning: AI systems improve by learning. We might eventually allow the system to learn from decisions. For example, if judges in practice always favor a certain purpose over another in certain contexts, the system could adjust weighting of principles accordingly (learned wisdom priorities). This could be performed via machine learning on a set of resolved DIKWP graphs (cases). However, it is critical that any learned component still yields explainable rules (no black box here).
6.5. Future Work and Extensions
- Empirical Testing: As noted, applying this to a corpus of cases is future work. We might start with a narrow domain and attempt to encode say 50 past cases and see how our system’s recommended outcomes and explanations compare to actual. This would be a strong test of validity.
- User Studies: It would be valuable to conduct experiments with legal professionals using semantic explanations vs. traditional ones, to measure differences in understanding and perceived legitimacy. For example, give two sets of readers two versions of an AI-generated decision (one DIKWP-rich, one minimal) and survey their reactions.
- Refinement of Formalism: We gave definitions, but there is room to formalize further, perhaps in a logical language. For instance, one could formalize mapping constraints (e.g., every knowledge node in content should either find a supporting purpose or be labeled as dormant if the purpose is not present—ensuring no rule is applied without rationale). Formal verification tools might then check the consistency of a DIKWP decision.
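The mapping constraint suggested here (every applied knowledge node must be backed by a purpose, or else be labeled "dormant") can be sketched as a trivial check; the node names below are illustrative:

```python
# Sketch of a formal-verification-style constraint: a knowledge node with
# no supporting purpose is "dormant" and must not drive a decision.
knowledge_to_purpose = {"K1": "P1",   # Regulation 5.4 -> public health
                        "K4": "P4",   # warning guideline -> fair process
                        "K7": None}   # hypothetical rule with no rationale

def classify(k_nodes: dict) -> dict:
    """Label each knowledge node active (purpose-backed) or dormant."""
    return {k: ("active" if p else "dormant")
            for k, p in k_nodes.items()}

status = classify(knowledge_to_purpose)
print(status)  # a verifier would reject any decision applying a dormant rule
```

A real formalization would state this as a well-formedness condition on the graph and check it automatically before a decision is issued.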
- Integration with LegalTech systems: The framework could be integrated with e-discovery or case management systems. For example, as evidence is gathered, it could automatically populate parts of the graph and highlight the evidence still needed for certain claims (e.g., “to prove this purpose or principle, you need data of type X; none is present!”), which could alert attorneys to gather more evidence on, say, harm or compliance efforts.
- Ethical and Legal Considerations: As we move toward AI in the judiciary, questions of accountability and acceptance arise. An interesting aspect of our approach is that by making AI reasoning closer to human reasoning, it may be easier to fit into existing legal procedures. For instance, if an AI provides a DIKWP rationale, a higher court can review it much as it reviews a human's rationale, because it arrives in a familiar structure. We must also ensure the AI does not introduce its own values beyond what the law and inputs provide; that is, it must remain constrained. Our approach attempts this by grounding purposes in either the law or explicit stakeholder input rather than inventing new ones.
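The mapping constraint mentioned under refinement of the formalism—that no knowledge node be applied without a supporting purpose—can be checked mechanically. The sketch below assumes the knowledge-to-wisdom mapping from Section 3.1 is encoded as a plain dictionary; the node names and function are hypothetical illustrations, not part of the formal definitions.

```python
def check_purpose_grounding(knowledge_nodes, knowledge_to_wisdom):
    """Partition activated knowledge nodes into grounded vs. dormant.

    knowledge_nodes: set of knowledge-node ids activated in a decision.
    knowledge_to_wisdom: mapping from a knowledge node to the set of
    wisdom (purpose/principle) nodes that justify it.

    A node with no supporting purpose is flagged as dormant, enforcing
    the constraint that no rule is applied without a rationale.
    """
    grounded, dormant = set(), set()
    for k in knowledge_nodes:
        if knowledge_to_wisdom.get(k):
            grounded.add(k)
        else:
            dormant.add(k)  # rule applied without rationale: flag for review
    return grounded, dormant

# Illustrative check: one rule backed by a principle, one with no backing.
g, d = check_purpose_grounding(
    {"DrunkDrivingRule", "ObsoleteProvision"},
    {"DrunkDrivingRule": {"PublicSafety"}},
)
```

A formal verification pass over a DIKWP decision could run exactly this kind of check before any explanation is emitted.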
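The evidence-gap alerting described for LegalTech integration can likewise be sketched as a set comparison between the evidence types present in the data graph and the types each claim requires. The claim names and required types below are hypothetical placeholders for whatever schema an integrated system would define.

```python
def missing_evidence(present_types, required_types_by_claim):
    """Alert on claims whose required evidence types are absent.

    present_types: set of evidence types present as data-graph nodes.
    required_types_by_claim: mapping from a claim to the set of evidence
    types needed to support it (a hypothetical schema).

    Returns a mapping of claims to their missing evidence types; only
    claims with at least one gap are reported.
    """
    gaps = {}
    for claim, required in required_types_by_claim.items():
        missing = required - present_types
        if missing:
            gaps[claim] = missing  # alert: gather this evidence
    return gaps

# Illustrative run: intoxication is supported, harm to the public is not.
alerts = missing_evidence(
    {"BAC_measurement", "police_report"},
    {"harm_to_public": {"accident_record"},
     "intoxication": {"BAC_measurement"}},
)
```

In an e-discovery pipeline, such alerts would be recomputed each time new evidence populates the data graph.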
6.6. Conclusion of Discussion
7. Conclusions
- For AI researchers and developers: It provides a blueprint for building AI systems that require high degrees of explainability and fairness. The formal definitions, pseudocode, and pipeline we provided can inform the architecture of next-generation legal expert systems or decision-support tools. We demonstrated that it is feasible to embed ethical and purposive reasoning into a formal model without sacrificing computational tractability.
- For the legal community: Our approach offers a way to leverage AI while preserving the nuance of legal reasoning. Rather than replacing human judgment, a DIKWP-based system can serve as an augmentation tool—ensuring that judges and lawyers consider all dimensions of a case and providing second opinions backed by semantically rich justifications. This can enhance consistency and reduce oversights, as well as increase trust in AI recommendations because the reasoning is laid bare.
- For interdisciplinary understanding: By marrying concepts from knowledge representation, cognitive science (the idea of conceptual vs. semantic spaces), and legal theory (jurisprudential principles and teleological interpretation), we created a holistic model that could serve as a common language for computer scientists, cognitive scientists, and legal scholars to discuss how decisions are made and justified. This kind of interdisciplinary semantic framework can also aid in education and communication: for instance, teaching law students or AI systems about legal reasoning in terms of DIKWP layers may clarify why certain arguments win or lose (because they fail at a purpose level, because the data does not support the claimed information, etc.).
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mei, Y.; Duan, Y. DIKWP Semantic Judicial Reasoning: A Framework for Semantic Justice in AI and Law. Information 2025, 16, 640. https://doi.org/10.3390/info16080640