Search Results (174)

Search Parameters:
Keywords = claims rules

30 pages, 2472 KB  
Article
Energy Consumption Prediction for an Electric Vehicle Using Machine Learning: A Comparative Study of Regression, Ensemble, and LSTM-Based Models
by Juan Diego Valladolid and Juan P. Ortiz
Vehicles 2026, 8(5), 99; https://doi.org/10.3390/vehicles8050099 - 1 May 2026
Abstract
Accurate energy consumption prediction is fundamental for enhancing range estimation and trip planning in battery electric vehicles (BEVs) under real-world conditions. This study develops a route-level benchmark utilizing 1 Hz data acquired via ECU/OBD-II interfaces (CAN 500 kbps) across ten diverse real-world driving routes. The input feature set comprises vehicle speed, longitudinal acceleration, estimated motor torque, road altitude, and accelerator pedal position. Ground truth energy consumption was derived from battery voltage and current, integrated via the trapezoidal rule. We performed a comparative analysis between five memoryless regressors (FNN, SVR, GPR, QRNN, and Bagged Trees) and three sequence models (LSTM, GRU, and BiLSTM) trained on 20-second temporal windows. The results indicate that the GRU model achieved the highest overall performance (mean RMSE = 0.1142 kWh, R2 = 0.9545 and MAE = 0.072 kWh), while Bagged Trees emerged as the most robust static model (mean RMSE = 0.1587 kWh). Temporal models outperformed static ones on routes with high dynamic variability, whereas Bagged Trees excelled in five specific scenarios. These findings provide a controlled within-route benchmark for time-resolved cumulative energy estimation and highlight the need for chronological and cross-route validation before drawing deployment-oriented generalization claims. Full article
(This article belongs to the Special Issue Application of Machine Learning in Electric Vehicles)
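
The ground-truth construction described in this abstract reduces to a short numerical recipe. A minimal sketch, assuming 1 Hz voltage/current samples and plain trapezoidal integration of P = V·I; the function name and example values are illustrative, not the authors' code:

```python
import numpy as np

def cumulative_energy_kwh(voltage_v, current_a, dt_s=1.0):
    """Cumulative battery energy from voltage/current samples,
    integrated with the trapezoidal rule (illustrative sketch)."""
    power_w = np.asarray(voltage_v) * np.asarray(current_a)    # instantaneous power P = V * I
    incremental_j = (power_w[:-1] + power_w[1:]) / 2.0 * dt_s  # trapezoids: (P_i + P_{i+1})/2 * dt
    energy_j = np.concatenate(([0.0], np.cumsum(incremental_j)))
    return energy_j / 3.6e6                                    # joules -> kWh

# Constant 350 V, 20 A (7 kW) for 10 s -> 70 kJ, i.e. about 0.0194 kWh
e = cumulative_energy_kwh(np.full(11, 350.0), np.full(11, 20.0))
print(round(float(e[-1]), 4))
```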

16 pages, 880 KB  
Article
Integer-State Dynamics in Quantized Spiking Neural Networks: Implications for Hardware-Oriented Design
by Lei Zhang
Electronics 2026, 15(8), 1756; https://doi.org/10.3390/electronics15081756 - 21 Apr 2026
Viewed by 190
Abstract
Spiking neural networks (SNNs) support energy-efficient machine intelligence because event-driven computation and sparse activity map naturally to low-power digital hardware. In practical implementations, however, membrane states, synaptic weights, and thresholds are represented with finite-precision integer arithmetic. Quantization, clipping, and overflow can therefore alter network dynamics rather than merely approximate a higher-precision model. This paper adopts an integer-state dynamical perspective, modeling a quantized SNN with a hardware-relevant update rule as a deterministic map on a bounded integer lattice. Rather than claiming recurrence itself as a new property, we focus on how finite-precision representation and implementation semantics shape observed recurrent regimes and activity patterns. We introduce a shift-based update rule with integer-valued states and investigate its behaviour through simulation-based analysis with network sizes N=30–130, connection densities 0.1–0.9, and bit widths 1 to 16 over T = 1000 steps. The results show bounded and recurrent temporal structure with strong quantization sensitivity. The observed regimes depend heavily on the semantics of representation and the scaling choices. These findings suggest that numerical precision can act as a dynamical design variable and provide useful implications for hardware-oriented SNN design, while motivating future work on attractor analysis and FPGA/ASIC validation. Full article
(This article belongs to the Special Issue Hardware Acceleration for Machine Learning)
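
The abstract does not spell out the shift-based update rule; one plausible reading, assuming a leak implemented as an arithmetic right shift, saturating clipping to the bit width, and reset-to-zero on spiking (all assumptions, not the paper's stated semantics):

```python
import numpy as np

rng = np.random.default_rng(0)
N, BITS, SHIFT, THETA, T = 30, 8, 2, 96, 1000   # size, bit width, leak shift, threshold, steps
VMAX = (1 << (BITS - 1)) - 1                     # bounded signed integer lattice
W = (rng.random((N, N)) < 0.3) * rng.integers(-4, 5, (N, N))  # sparse integer weights, density 0.3

v = np.zeros(N, dtype=np.int64)                  # integer membrane states
spikes = np.zeros(N, dtype=np.int64)
for t in range(T):
    v = v - (v >> SHIFT) + W @ spikes            # shift-based leak: v -= v // 2**SHIFT
    v = np.clip(v, -VMAX - 1, VMAX)              # saturating arithmetic keeps states on the lattice
    spikes = (v >= THETA).astype(np.int64)
    v[spikes == 1] = 0                           # reset to zero after a spike
print(int(spikes.sum()), "neurons spiking at t =", T - 1)
```

Under these semantics the state space is finite, so every trajectory is eventually recurrent; what the representation choices change is which recurrent regime is reached.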

20 pages, 977 KB  
Article
An Enhanced Multi-Task Deep Learning Framework for Joint Prediction of Customer Churn and Downsell
by Qiang Zhang, Lihong Zhang and Yanfeng Chai
Appl. Sci. 2026, 16(8), 4014; https://doi.org/10.3390/app16084014 - 21 Apr 2026
Viewed by 262
Abstract
Customer churn refers to the termination of a customer’s business relationship with a bank, representing a direct loss of future revenue. Product downsell manifests as a reduction in the number of financial products held or a downgrade in service tier, often signaling early customer disengagement. Accurately identifying customers at risk of these two behaviors has become a cornerstone of profitable growth in the competitive retail banking industry as downsell frequently serves as a precursor to total churn. However, the existing research typically treats these highly correlated behaviors as independent prediction tasks, overlooking their intrinsic link and failing to address the critical challenges of class imbalance and regulatory demands for model interpretability. To tackle these problems, we propose an enhanced multi-task learning network (EMTL-Net), a deep learning framework specifically designed to capture the nuanced interplay between churn and downsell behaviors. EMTL-Net introduces an explicit feature interaction module to enhance the modeling of high-order feature relationships and utilizes a shared representation layer to extract universal customer risk patterns, enabling the joint prediction of churn and downsell. Furthermore, we employ Focal Loss as the training objective to dynamically adjust sample weights, effectively mitigating the class imbalance problem. Critically, to meet financial compliance requirements, we implement a SHAP-based interpretation mechanism that is compatible with multi-task outputs, providing preliminary insights into feature importance. Formal validation of interpretability claims remains an important direction for future research. The experimental results on a publicly available pedagogical bank customer benchmark dataset demonstrate that EMTL-Net achieves excellent performance on both tasks. For churn prediction, the model achieves an AUC of 0.8259, an accuracy of 0.8361, and an F1-score of 0.6235, significantly outperforming the existing baseline models. For downsell prediction (noting that the downsell label is rule-derived from the number of products held), the model achieves an AUC of 0.8932, an accuracy of 0.8571, and an F1-score of 0.7504. Ablation studies confirm the critical contributions of the explicit feature interaction module, Focal Loss, and the residual structure to model performance. Crucially, the interpretability analysis corroborates business intuition by identifying customer age, account balance, and product holdings as dominant churn drivers—a consistency that reinforces the model’s credibility and practical utility in high-stakes financial environments. Full article
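
Focal Loss, the imbalance remedy named in this abstract, is compact enough to show directly. A minimal numpy sketch using the commonly cited defaults γ = 2 and α = 0.25 (the paper's own settings are not given in the abstract):

```python
import numpy as np

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: FL = -alpha_t * (1 - p_t)**gamma * log(p_t).
    The (1 - p_t)**gamma factor down-weights easy, well-classified examples
    so training focuses on the rare class."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    p_t = np.where(y_true == 1, p, 1.0 - p)            # probability assigned to the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

y = np.array([1, 0, 0, 0, 1])                          # imbalanced labels (churn is rare)
p = np.array([0.9, 0.1, 0.4, 0.05, 0.6])
print(focal_loss(y, p))                                # confident correct predictions contribute little
```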

24 pages, 627 KB  
Article
Vehicle-Conditional Split-Conformal Calibration for Risk-Budgeted Sub-Second Proxy-Triggered Vehicle Instability Warnings from Past-Only Sensor Slices
by Jinzhe Yang, Jianzheng Liu, Kai Tian, Yier Lin and Junxia Zhang
Sensors 2026, 26(8), 2302; https://doi.org/10.3390/s26082302 - 8 Apr 2026
Viewed by 243
Abstract
Emergency maneuvers can drive vehicles into severe instability regimes within sub-second time scales, motivating last-moment warning interfaces with auditable false-alarm budgets. We study a proxy-triggered imminent-recognition setting: given a 0.1 s past-only slice of onboard signals, decide whether a conservative physics-defined instability proxy will trigger within the next τ=0.2 s. The contribution is, therefore, a calibrated warning for a safety-relevant surrogate event, not a claim of predicting crashes or true instability outcomes directly. Because the corpus is terminal-phase aligned, the default causal monitor (w=d=0.1 s, k=2) is warnable on only 18.3% of event runs; we, therefore, report run-level effectiveness both overall and conditional on warnability. We learn a lightweight hazard scorer and convert its scores into an operator-facing alarm rule via split-conformal calibration on held-out negative slices, exposing a slice-level false-alarm budget α with finite-sample, one-sided control of the marginal slice-level false positive rate (FPR) on exchangeable negatives. To address fleet heterogeneity, we additionally calibrate vehicle-conditioned (Mondrian) thresholds, enabling per-vehicle risk budgeting without retraining separate models. On the held-out test split at τ=0.2 s, the scorer achieves AUPRC 0.251 against a base rate of 0.638%, AUROC 0.986, and ECE 0.034. After calibration at α=5%, realized slice-level FPR concentrates near the prescribed budget while slice-level TPR on imminent positives remains high (≈0.982). We explicitly separate this slice-level guarantee from empirical run-level metrics such as FARrun, EWR on warnable runs, and lead time, and we report dependence and shift diagnostics to delineate where the guarantee may degrade. The reported μ-sensitivity analyses concern run-level descriptor perturbation and omission rather than validation of a within-run friction estimator with temporal lag. The result is a transparent, risk-budgeted monitoring primitive for last-moment vehicle-stability warning under clearly stated exchangeability assumptions. Full article
(This article belongs to the Section Vehicular Sensing)
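
The split-conformal step has a simple finite-sample form: rank the held-out negative scores and alarm only above the ⌈(n+1)(1−α)⌉-th smallest. A sketch under the exchangeability assumption the abstract states; the scores here are synthetic:

```python
import numpy as np

def conformal_alarm_threshold(neg_cal_scores, alpha=0.05):
    """Split-conformal threshold on held-out NEGATIVE scores: alarming when
    score > tau controls the marginal FPR at <= alpha for exchangeable negatives."""
    s = np.sort(np.asarray(neg_cal_scores))
    n = len(s)
    k = int(np.ceil((n + 1) * (1.0 - alpha)))    # finite-sample (1 - alpha) quantile rank
    if k > n:                                    # too few calibration points for this alpha
        return np.inf                            # never alarm: guarantee holds trivially
    return s[k - 1]

rng = np.random.default_rng(1)
tau = conformal_alarm_threshold(rng.random(2000), alpha=0.05)
new_scores = rng.random(10000)                   # exchangeable negatives
print(tau, (new_scores > tau).mean())            # realized FPR concentrates near the 5% budget
```

The vehicle-conditional (Mondrian) variant described in the abstract would apply the same rule separately to each vehicle's calibration negatives, giving a per-vehicle threshold without retraining.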

17 pages, 811 KB  
Article
The Neuro–Cardio–Renal Stress Index (NCR-SI): A Pragmatic Composite Framework for Characterizing Multisystem Burden in Multimorbid Patients
by Ana Trandafir, Oceane Colasse, Marc Cristian Ghitea, Evelin Claudia Ghitea, Timea Claudia Ghitea, Roxana Daniela Brata and Alexandru Daniel Jurca
Diagnostics 2026, 16(8), 1120; https://doi.org/10.3390/diagnostics16081120 - 8 Apr 2026
Viewed by 375
Abstract
Background: Multimorbidity frequently involves overlapping neuro-psychic, cardiometabolic, and renal disturbances, yet clinical assessment often relies on diagnosis-based comorbidity counts that may not fully capture cumulative physiological stress. We developed the Neuro–Cardio–Renal Stress Index (NCR-SI) as a pragmatic composite framework to describe multisystem burden using routinely available clinical data. Methods: This cross-sectional study analyzed electronic medical record data from adult patients with chronic conditions. NCR-SI integrates three domains: neuro-psychic burden (text-derived indicators and psychotropic medication use), cardiometabolic stress (triglyceride–glucose index and cardiometabolic diagnoses), and renal function (MDRD-estimated eGFR staging). Importantly, this study is not intended to demonstrate incremental predictive value over individual components or established comorbidity indices. Rather, it presents NCR-SI as a transparent, domain-based descriptive framework and reports its internal coherence and distribution across clinically recognizable multimorbidity contexts. Results: A total of 148 patient records were screened; 143 patients met complete-case criteria and were included in the main NCR-SI analyses. NCR-SI ranged from 0 to 10 (median 5). Higher scores were observed in renometabolic profiles. NCR-SI showed expected structural associations with declining renal function (eGFR; ρ ≈ −0.71), moderately with the TyG index (ρ ≈ 0.42), and weakly with medication burden. Correlation with age-adjusted CCI was minimal (ρ ≈ 0.09), indicating limited overlap with diagnosis-based comorbidity counts. Domain-specific correlations were consistent with predefined score construction rules, particularly between the renal domain and eGFR, and between the cardiometabolic domain and TyG. Conclusions: NCR-SI provides a pragmatic, integrative descriptor of neuro-cardio-renal stress using routinely collected clinical data. Rather than replacing established comorbidity indices, NCR-SI may complement them by summarizing multidimensional physiological burden patterns. NCR-SI is proposed as a research-oriented, hypothesis-generating descriptive framework. External validation in independent cohorts and longitudinal evaluation against clinically meaningful outcomes (e.g., hospitalization, mortality, functional status, healthcare utilization) are required before any claims of clinical performance can be made. Full article
(This article belongs to the Section Clinical Diagnosis and Prognosis)
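
Two of the routinely available inputs named in this abstract have standard closed forms. A sketch of the TyG index and the 4-variable MDRD eGFR as commonly published; the NCR-SI's own domain scoring rules are not given in the abstract and are not reproduced here:

```python
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """Triglyceride-glucose (TyG) index:
    ln(fasting TG [mg/dL] * fasting glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

def mdrd_egfr(creatinine_mg_dl, age, female=False, black=False):
    """4-variable MDRD eGFR (mL/min/1.73 m^2):
    175 * Scr^-1.154 * age^-0.203, * 0.742 if female, * 1.212 if Black."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(tyg_index(150, 100), 2))              # ~8.92, a mid-range value
print(round(mdrd_egfr(1.2, 65, female=True), 1))  # eGFR used for renal-domain staging
```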

16 pages, 269 KB  
Article
Civil Liability Odds in Information Leaks: Controversial Legal Debates and Emerging Judicial Doctrines in Jordan
by Ahmed M. Khawaldeh
Laws 2026, 15(2), 26; https://doi.org/10.3390/laws15020026 - 3 Apr 2026
Viewed by 460
Abstract
Cyberattacks and data breaches expose individuals and firms to liability in civil courts. Despite regulators’ efforts to standardize cybersecurity laws, judges, justices, and attorneys have offered a plethora of interpretations of the same laws, causing a great deal of confusion. The current investigation utilizes the Jordanian civil code to illustrate how complex liability becomes in data breach cases. Through a comprehensive examination of liability rules 256–291 within the civil code, the Supreme Court’s liability precedents, and the new personal data protection law, this analysis finds that liability could be established under strict conditions. Liability claims in Jordanian courts must satisfy the standing doctrine, the presence of injury requiring compensation, and causality, and must demonstrate clear links between data breaches and the harm/injury suffered. The novelty of the personal data protection law in Jordan is likely to impact how liability is interpreted and established in cybersecurity cases. Full article

55 pages, 669 KB  
Systematic Review
Microlearning in Software Engineering Education: A Systematic Review of Initiatives and Curriculum Modernization
by Franklin Parrales-Bravo
Educ. Sci. 2026, 16(3), 487; https://doi.org/10.3390/educsci16030487 - 20 Mar 2026
Viewed by 483
Abstract
This systematic review maps the landscape of microlearning research within software engineering education, critically examining how this pedagogical approach is being applied to develop the multifaceted competencies required of modern software professionals. Following PRISMA-ScR guidelines, the review synthesized 21 empirical studies from 2015 to 2026, analyzing their pedagogical approaches, technological integrations, curriculum coverage, and evidence of effectiveness. The findings reveal a field marked by creative experimentation yet significant fragmentation: while microlearning effectively engages students and conveys discrete programming and project management knowledge through gamified, mobile, and project-based formats, its application remains narrowly concentrated on introductory coding, leaving advanced competencies such as software architecture, requirements engineering, and testing strategies virtually unexplored. The review further exposes critical gaps in the evidence base, including the absence of longitudinal and transfer studies, the conflation of platform engagement with learning, and methodologically fragile claims of effectiveness. Enthusiasm for microcredentials and AI-personalized learning considerably outstrips empirical support, with implemented systems relying on rule-based logic rather than adaptive intelligence and credentialing frameworks lacking validation of employer recognition or employment outcomes. This review concludes that while microlearning holds genuine potential for just-in-time skill development in a rapidly evolving discipline, its role in software engineering education must be strategic and supplemental rather than comprehensive. The field must urgently move from promotional advocacy toward rigorous, comparative, and longitudinal research that assesses higher-order competencies and authentic professional capability, lest its promise remain unfulfilled. Full article
(This article belongs to the Special Issue Technology-Enhanced Education for Engineering Students)

51 pages, 1961 KB  
Systematic Review
From Recommendations to Delegation: A Systematic Review Mapping Agentic AI in E-Commerce and Its Consumer Effects
by Stefanos Balaskas
Information 2026, 17(3), 222; https://doi.org/10.3390/info17030222 - 25 Feb 2026
Viewed by 1400
Abstract
Agentic AI is increasingly framed as enabling consumers to delegate commerce decisions and actions to digital assistants, yet consumer-facing evidence still centers on assistive chatbots and recommender-like systems, with scarce evaluation of execution-level delegation. This study provides an evidence-mapping review of empirical work on agentic commerce and synthesizes determinants and outcomes of delegation across three questions: (RQ1) how systems are operationalized (autonomy, task scope, interaction mode, and transaction capability/evidence realism), (RQ2) what facilitates or inhibits delegation, and (RQ3) what downstream outcomes follow for marketing performance and consumer experience. We searched Scopus and Web of Science for English-language, peer-reviewed primary studies (2015–2026) and applied conservative coding rules that distinguish claimed capability from simulated or demonstrated execution. The mapped literature is concentrated in text-based, low-autonomy assistants focused on recommendation and post-purchase support; coverage drops sharply for workflow-level autonomy, cart building, checkout/payment execution, and negotiation. Across studies, findings cluster into two motifs: a utility/assurance pathway in which performance cues and interaction quality increase perceived usefulness, satisfaction, and trust, and a governance pathway in which autonomy cues and system-initiated control trigger reactance/powerlessness and reduce acceptance unless mitigated by safeguards; urgency can attenuate governance resistance. Because most outcomes are intention- or vignette-based, calibration, verification, and error-recovery behaviors remain under-measured. Overall, delegation appears to depend less on maximizing autonomy than on coupling capability with user governance (consent, oversight, recourse, accountability), and we outline measurement priorities for evaluating execution-capable agents. Full article
(This article belongs to the Section Information Applications)

20 pages, 1913 KB  
Article
Development and Internal Evaluation of an Interpretable AI-Based Composite Score for Psychosocial and Behavioral Screening in Dental Clinics Using a Mamdani Fuzzy Inference System
by Alexandra Lavinia Vlad, Florin Sandu Blaga, Ioana Scrobota, Raluca Ortensia Cristina Iurcov, Gabriela Ciavoi, Anca Maria Fratila and Ioan Andrei Țig
Medicina 2026, 62(2), 412; https://doi.org/10.3390/medicina62020412 - 21 Feb 2026
Viewed by 469
Abstract
Background and Objectives: Psychosocial symptoms and oral behaviors can complicate routine dental care, yet available screeners yield multiple separate scores. Explainable artificial intelligence offers a pragmatic way to integrate such multidomain measures into a single, auditable output that can support screening-oriented stratification and standardized documentation (non-diagnostic). Therefore, we aimed to develop an interpretable, deterministic Mamdani fuzzy inference system (FIS) integrating GAD-7, PHQ-9, and OBC-21 into a 0–10 psychobehavioral composite score (PCS) to support screening-oriented stratification and standardized documentation (non-diagnostic). Materials and Methods: Cross-sectional multicenter study in 18 private dental clinics in Romania (October 2024–March 2025; n = 460). A rule-based Mamdani Type-1 FIS was specified a priori (48 rules; triangular membership functions; centroid defuzzification) without supervised training. Internal evaluation assessed coherence across severity strata, robustness to predefined input perturbations (±1 point; ±5%) and membership-function variation (±10%), and benchmarking against linear composites (Z-mean; PCA PC1). Results: Median PCS was 2.30 (IQR 2.03–3.56). PCS correlated with GAD-7 (Spearman ρ = 0.886), PHQ-9 (ρ = 0.792), and OBC-21 (ρ = 0.687) (all p < 0.001), increased monotonically across anxiety and depression severity strata, and was higher in high OBC-21 risk. Robustness was excellent under input perturbations (ICC(3,1) = 0.983 for ±1 point; 0.992 for ±5%) and high under ±10% membership-function variation (ICC(3,1) = 0.959). Concordance with linear baselines was high (Spearman ρ = 0.956 for Z-mean; 0.955 for PCA PC1), with a small systematic nonlinearity at higher scores. Conclusions: PCS provides a fully auditable, rule-based integration of three patient-reported measures with coherent internal behavior and robustness to plausible measurement noise and specification changes. This study reports internal evaluation of a deterministic, rule-based aggregation; external clinical validation against independent outcomes is required before any clinical utility claims. Full article
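
The Mamdani pipeline this abstract names (fuzzify with triangular memberships, fire rules with min, aggregate with max, defuzzify by centroid) can be shown in miniature. A two-rule sketch on two of the three instruments; the rule base, membership parameters, and scales below are invented for illustration, not the published 48-rule system:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership; a vertical edge (a == b or b == c) becomes a shoulder."""
    x = np.asarray(x, dtype=float)
    up = (x - a) / (b - a) if b > a else (x >= b).astype(float)
    down = (c - x) / (c - b) if c > b else (x <= b).astype(float)
    return np.clip(np.minimum(up, down), 0.0, 1.0)

def mamdani_pcs(gad7, phq9):
    """Two-rule Mamdani Type-1 sketch on a 0-10 output domain."""
    z = np.linspace(0.0, 10.0, 1001)                 # discretized PCS output domain
    low_out, high_out = tri(z, 0, 0, 5), tri(z, 5, 10, 10)
    # Rule 1: IF anxiety is low AND depression is low THEN PCS is low (AND = min)
    w1 = float(np.minimum(tri(gad7, 0, 0, 10), tri(phq9, 0, 0, 13)))
    # Rule 2: IF anxiety is high AND depression is high THEN PCS is high
    w2 = float(np.minimum(tri(gad7, 5, 21, 21), tri(phq9, 7, 27, 27)))
    agg = np.maximum(np.minimum(w1, low_out), np.minimum(w2, high_out))  # clip, then max-aggregate
    return float(np.sum(z * agg) / np.sum(agg))      # centroid defuzzification

print(round(mamdani_pcs(gad7=4, phq9=5), 2))         # mild symptoms -> low composite (~1.9)
```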

18 pages, 990 KB  
Perspective
From Network Governance to Real-World-Time Learning: A High-Reliability Operating Model for Rare Cancers
by Bruno Fuchs, Anna L. Falkowski, Ruben Jaeger, Barbara Kopf, Christian Rothermundt, Kim van Oudenaarde, Ralph Zacchariah, Philip Heesen, Georg Schelling and Gabriela Studer
Cancers 2026, 18(4), 643; https://doi.org/10.3390/cancers18040643 - 16 Feb 2026
Viewed by 623
Abstract
Background: Rare cancers combine low incidence with high biological heterogeneity and multi-institutional care trajectories. These features make single-center learning structurally incomplete and render pathway fragmentation a dominant driver of preventable harm, variability, and waste. In this context, care quality is best understood as a property of pathway integrity across routing, diagnostics (imaging/biopsy planning), multidisciplinary intent-setting, definitive treatment, and surveillance—rather than as a department-level attribute. Objective: To define a pragmatic, transferable operating blueprint for a rare-cancer Learning Health System (LHS) that turns routine care into continuous, auditable learning under explicit governance, while maintaining claims discipline and protecting measurement validity. Approach: We synthesize an implementation-oriented operating model using the Swiss Sarcoma Network (SSN) as an exemplar. The blueprint couples clinical governance (Integrated Practice Unit logic, hub-and-spoke routing, auditable multidisciplinary team decision systems) with an interoperable real-world-time data backbone designed for benchmarking, pathway mapping, and feedback. The operating logic is expressed as a closed-loop control cycle: capture → harmonize → benchmark → learn → implement → re-measure, with explicit owners, minimum requirements, and failure modes. Results/Blueprint: (i) The model specifies a minimal set of data primitives—time-stamped and traceable decision points covering baseline and tumor characteristics, pathway timing, treatment exposure, outcomes and complications, and feasible longitudinal PROMs and PREMs; (ii) a VBHC-ready, multi-domain measurement backbone spanning outcomes, harms, timeliness, function, process fidelity, and resource stewardship; and (iii) two non-negotiable validity guardrails: explicit applicability (“N/A”) rules and mandatory case-mix/complexity stratification. Implementation is treated as a governed step with defined workflow levers, fidelity criteria, balancing measures, and escalation thresholds to prevent “dashboard medicine” and surrogate-driven optimization. Conclusions: This perspective contributes an operating model—not a platform or single intervention—that enables credible improvement science and establishes prerequisites for downstream causal learning and minimum viable digital twins. By distinguishing enabling infrastructure from the governed clinical system as the primary intervention, the blueprint supports scalable, learnable excellence in rare-cancer care while protecting against gaming, inequity, and inference drift. Distinct from generic LHS or VBHC frameworks, this blueprint specifies validity gates required for rare-cancer benchmarking—explicit applicability (“N/A”) rules, denominator integrity/capture completeness disclosure, anti-gaming safeguards, and escalation governance. These elements are critical in rare cancers because small denominators, high heterogeneity, and multi-institutional pathways otherwise make benchmarking prone to artifacts and unsafe inferences. Full article

34 pages, 2420 KB  
Article
Exploring Artificial Intelligence and Machine Learning Approaches to Legal Reasoning
by Wullianallur Raghupathi
AppliedMath 2026, 6(2), 32; https://doi.org/10.3390/appliedmath6020032 - 12 Feb 2026
Viewed by 878
Abstract
Modeling legal reasoning with artificial intelligence and machine learning presents formidable challenges. Legal decisions emerge from a complex interplay of factual circumstances, statutory interpretation, case precedent, jurisdictional variation, and human judgment—including the behavioral characteristics of judges and juries. This paper takes an exploratory approach to investigating how contemporary ML techniques might capture aspects of this complexity. Using pharmaceutical patent litigation as an illustrative domain, we develop a multi-layer analytical pipeline integrating text mining, clustering, topic modeling, and classification to analyze 698 U.S. federal district court decisions spanning January 2016 through December 2018, comprising substantive validity and infringement rulings under the Hatch-Waxman regulatory framework. Results demonstrate that the pipeline achieves 85–89% prediction accuracy—substantially exceeding the 42% baseline majority-class rate and comparing favorably with prior legal prediction studies—while producing interpretable intermediate outputs: clusters that correspond to recognized doctrinal categories (Abbreviated New Drug Application—ANDA litigation, obviousness, written description, claim construction) and topics that capture recurring legal themes. We discuss what these findings reveal about both the possibilities and limitations of computational approaches to legal reasoning, acknowledging the significant gap between statistical prediction and genuine legal understanding. Full article
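
The multi-layer pipeline this abstract describes (text mining, clustering, classification) maps onto standard tooling. A toy sklearn sketch; the documents, labels, and model choices are placeholders, and the paper's topic-modeling layer and actual feature engineering are not reproduced:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy stand-ins for court decisions and outcomes (1 = patentee prevails)
docs = ["claim construction of the asserted patent ...",
        "obviousness under 35 USC 103 renders the claims invalid ...",
        "written description requirement not satisfied ...",
        "ANDA infringement under the Hatch-Waxman framework ..."] * 50
labels = [1, 0, 0, 1] * 50

X = TfidfVectorizer(stop_words="english").fit_transform(docs)               # text -> TF-IDF features
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)   # doctrinal groupings
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)                     # outcome classifier
print(accuracy_score(y_te, clf.predict(X_te)))                              # trivially high on toy data
```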

27 pages, 2038 KB  
Article
Demonstrating an Ontological Framework for Sustainable PVC Material Science: A Holistic Study Combining Granta EduPack, Bibliometric Analysis, Thematic Analysis, Content Analysis, and Protégé
by Alexander Chidara, Kai Cheng and David Gallear
Appl. Sci. 2026, 16(4), 1677; https://doi.org/10.3390/app16041677 - 7 Feb 2026
Viewed by 416
Abstract
Addressing the growing need for sustainable innovation in PVC materials, this study presents an illustrative framework that develops and demonstrates an ontological system that integrates lifecycle simulation using Granta EduPack, systematic literature analysis (including bibliometric, thematic, and content analytics) of peer-reviewed publications, and Protégé-based semantic reasoning, and their combination, in a holistic manner. Material and use-phase data for PVC, HDPE, PP, PET, and FRP cooling-tower components were sourced from ANSYS Granta EduPack Level-3 Polymer Sustainability 2023 R2 Version; 23.2.1, and a systematic analysis of the literature was then encoded as ontology classes, properties, and individuals following the Seven-Step ontology development method. Eco-audit simulations, standardised to a functional unit of 1 kg cooling tower fill material, reveal that the use phase dominates environmental impact (67 MJ primary energy, ~80% of total lifecycle), while material production and end-of-life recycling contribute ~15% and credits of ~900 MJ and 28 kg CO2 via recycling offsets. Ontology reasoning with corrected SWRL rules and SPARQL queries classifies VirginPVCRef and PVC10ES as strong structural materials (tensile strength ≥ 40 MPa), identifies PVCRH40 as high-moisture-risk (water absorption > 0.10 g/g), and ranks hydro-thermal dechlorination (recyclability 0.90) over mechanical recycling (0.55). A systematic analysis of 40 Scopus-indexed publications (2015–2025) highlighted key themes in recycling technologies, LCA emissions, additive toxicity, ontology frameworks, machine learning integration, circular economy policy, and cooling-tower applications. Demonstrated via a simulation-based cooling-tower case study, hybrid PVC-FRP designs yield the highest justified Material Sustainability Performance Index (MSPI), outperforming PVC-only and FRP-only alternatives. This framework provides a conceptual decision-support tool for exploring PVC material optimisation, illustrating pathways to enhancing circularity and environmental responsibility in industrial applications. The proposed framework is, therefore, not intended as a validated decision-support tool, nor does it claim analytical optimisation or predictive performance but rather serves as a method of illustration that shows how domain knowledge can be formally structured using ontology principles linked to simulation representations, and that was examined for internal logical consistency. Full article
(This article belongs to the Section Materials Science and Engineering)
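
The SWRL classifications quoted in this abstract amount to threshold rules over material properties. A plain-Python rendering of that logic, assuming invented property values; in Protégé these would be OWL classes populated by a reasoner, not dictionary lookups:

```python
# Material records (property values are illustrative, not the paper's dataset)
materials = {
    "VirginPVCRef": {"tensile_mpa": 52.0, "water_absorption_g_g": 0.04},
    "PVC10ES":      {"tensile_mpa": 45.0, "water_absorption_g_g": 0.06},
    "PVCRH40":      {"tensile_mpa": 38.0, "water_absorption_g_g": 0.14},
}

for name, m in materials.items():
    inferred = []
    if m["tensile_mpa"] >= 40.0:           # rule: tensile strength >= 40 MPa -> StrongStructuralMaterial
        inferred.append("StrongStructuralMaterial")
    if m["water_absorption_g_g"] > 0.10:   # rule: water absorption > 0.10 g/g -> HighMoistureRisk
        inferred.append("HighMoistureRisk")
    print(name, inferred)
```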

27 pages, 8232 KB  
Article
Cognitive Misalignment Among Stakeholders and Governance Strategies in the Li River Karst Scene–Village System: A Q Methodology Study
by Bing Lin, Jiani Chen, Guoshu Bin and Lisha Zhu
Sustainability 2026, 18(3), 1569; https://doi.org/10.3390/su18031569 - 4 Feb 2026
Viewed by 338
Abstract
This study addresses the intensifying conflict between conservation and tourism development in global natural World Heritage sites by exploring how cognitive misalignments among stakeholders obstruct scene–village symbiosis and by proposing governance strategies grounded in cognitive coordination to enhance sustainable governance effectiveness. Focusing on three representative villages located in the overlapping area of the Li River World Heritage protection zone and the scenic tourism area, which represent the consolidation/maturity, emerging incubation, and potential cultivation stages of tourism development, the study employs Q methodology to identify stakeholder cognitive clusters and their interactive logics. Four cognitive clusters are revealed: utilitarian landscape instrumentalism, livelihood entitlement-oriented, nostalgic disciplinary gaze, and institutional risk aversion. Their presence and combinations vary across different development stages, forming distinct cognitive configurations. These clusters exhibit both couplings and tensions in value preferences, benefit claims, and action logic, which shape rule acceptance and willingness to collaborate. By overcoming the limitations of conventional surveys in capturing latent perceptions, this study proposes an integrated “cognitive differences—strategic interactions—policy mechanisms” framework. The findings offer transferable insights for managing multi-stakeholder heritage destinations, particularly in ecologically fragile areas facing overtourism pressures and sustainability challenges. Full article
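
Q methodology's mechanics (correlate whole Q-sorts across persons, then factor the person-by-person matrix) can be sketched numerically. A minimal unrotated extraction on synthetic sorts; real Q studies add varimax rotation and significance flagging, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
qsorts = rng.normal(size=(20, 12))             # toy data: 20 statements ranked by 12 participants

R = np.corrcoef(qsorts, rowvar=False)          # person-by-person correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)           # principal-components extraction
order = np.argsort(eigvals)[::-1]
keep = order[:4]                               # retain four factors, matching the study's four clusters
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
print(loadings.round(2))                       # which participants share a viewpoint
```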

16 pages, 368 KB  
Article
A Physical Framework for Algorithmic Entropy
by Jeff Edmonds
Entropy 2026, 28(1), 61; https://doi.org/10.3390/e28010061 - 4 Jan 2026
Viewed by 579
Abstract
This paper does not aim to prove new mathematical theorems or claim a fundamental unification of physics and information, but rather to provide a new pedagogical framework for interpreting foundational results in algorithmic information theory. Our focus is on understanding the profound connection between entropy and Kolmogorov complexity. We achieve this by applying these concepts to a physical model. Our work is centered on the distinction, first articulated by Boltzmann, between observable low-complexity macrostates and unobservable high-complexity microstates. We re-examine the known relationships linking complexity and probability, as detailed in works like Li and Vitányi’s An Introduction to Kolmogorov Complexity and Its Applications. Our contribution is to explicitly identify the abstract complexity of a probability distribution K(ρ) with the concrete physical complexity of a macrostate K(M). Using this framework, we explore the “Not Alone” principle, which states that a high-complexity microstate must belong to a large cluster of peers sharing the same simple properties. We show how this result is a natural consequence of our physical framework, thus providing a clear intuitive model for understanding how algorithmic information imposes structural constraints on physical systems. We end by exploring concrete properties in physics, resolving a few apparent paradoxes, and revealing how these laws are the statistical consequences of simple rules. Full article
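
The "Not Alone" principle rests on a standard counting bound, sketched here for orientation rather than as the paper's own derivation: any microstate x in macrostate M can be described by a program for M plus x's index within M, so

```latex
K(x) \le K(M) + \log_2 |M| + O(1)
\quad\Longrightarrow\quad
|M| \ge 2^{\,K(x) - K(M) - O(1)},
```

and a high-complexity microstate therefore shares its simple macrostate with exponentially many peers.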

25 pages, 513 KB  
Article
Regulatory Risk in Green FinTech: Comparative Insights from Central Europe
by Simona Heseková, András Lapsánszky, János Kálmán, Michal Janovec and Anna Zalcewicz
Risks 2026, 14(1), 8; https://doi.org/10.3390/risks14010008 - 4 Jan 2026
Viewed by 1241
Abstract
Green fintech merges sustainable finance with data-intensive innovation, but national translations of EU rules can create regulatory risk. This study examines how such risk manifests in Central Europe and which policy tools mitigate it. We develop a three-dimension framework—regulatory clarity and scope, supervisory consistency, and innovation facilitation—and apply a comparative qualitative design to Hungary, Slovakia, Czechia, and Poland. Using a common EU baseline, we compile coded national snapshots from primary legal texts, supervisory documents, and recent scholarship. Results show material cross-country variation in labelling practice, soft-law use, and testing infrastructure: Hungary combines central-bank green programmes with an innovation hub/sandbox; Slovakia aligns with ESMA and runs hub/sandbox, though the green-fintech pipeline is nascent; Czechia applies a principles-based safe harbour and lacks a national sandbox; and Poland relies on a virtual sandbox and binding interpretations with limited soft law. These choices shape approval timelines, retail penetration, and cross-border portability of green-labelled products. We conclude with a policy toolkit: labelling convergence or explicit safe harbours, a cross-border sandbox federation, ESRS/ESAP-ready proportionate disclosures, consolidation of recurring interpretations into soft law, investment in suptech for green-claims analytics, and inclusion metrics in sandbox selection. Full article
