Search Results (1,263)

Search Parameters:
Keywords = computer games

19 pages, 619 KB  
Article
A Generalized Nash Equilibrium Approach to the Inverse Eigenvector Centrality Problem
by Mauro Passacantando and Fabio Raciti
Games 2026, 17(2), 20; https://doi.org/10.3390/g17020020 - 7 Apr 2026
Abstract
Eigenvector-based centrality captures recursive notions of importance in networks. While the direct problem computes centrality from given edge weights, the inverse eigenvector centrality problem seeks edge weights that reproduce a prescribed centrality profile; for directed multigraphs, this inverse task is typically non-unique and depends on the admissible arc structure. We study the direct and inverse problems on directed multigraphs and derive an explicit linear characterization of the set of admissible edge-weight vectors that are compatible with a given centrality target. On this feasible set, we formulate a generalized Nash equilibrium problem with shared centrality constraints, in which multiple agents select edge weights to maximize economically interpretable payoffs that incorporate arc-level competition effects. We provide conditions under which the induced game admits a concave potential function, yielding equilibrium existence and, under standard strict concavity assumptions, uniqueness. Finally, we illustrate the model on an airport network where nodes represent airports and parallel arcs represent airline-specific routes, showing that equilibrium selection produces a feasible and interpretable weight configuration that preserves the prescribed centrality. Full article
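The direct problem described in this abstract is the classical eigenvector centrality computation. As a generic illustration only (not the authors' code or model), power iteration on a toy symmetric three-node graph converges to the uniform centrality profile; the weight matrix here is invented for the example:

```python
import numpy as np

# Toy weighted graph: W[i, j] = weight of the arc from j to i.
# A complete symmetric 3-node graph, so centrality should be uniform.
W = np.array([
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
])

def eigenvector_centrality(W, iters=200, tol=1e-10):
    """Power iteration: repeatedly apply W and renormalize until the
    centrality vector stops changing (L1-normalized for readability)."""
    x = np.ones(W.shape[0])
    for _ in range(iters):
        x_new = W @ x
        x_new /= np.linalg.norm(x_new, 1)
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x_new

c = eigenvector_centrality(W)  # uniform profile [1/3, 1/3, 1/3]
```

The inverse problem the paper studies runs the other way: fixing a target profile `c` and characterizing the weight matrices `W` compatible with it.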

20 pages, 899 KB  
Article
Proximity-Aware VM Placement in Multi-Layer Fog Computing for Efficient Resource Management: Performance Evaluation Under a Gaming Application Scenario
by Sreebha Bhaskaran and Supriya Muthuraman
Computers 2026, 15(4), 225; https://doi.org/10.3390/computers15040225 - 3 Apr 2026
Viewed by 232
Abstract
The rapid proliferation of mobile devices, particularly smartphones and tablets, has transformed digital entertainment, with mobile gaming emerging as one of the fastest-growing digital segments. Such applications are inherently latency-sensitive and require effective resource management and seamless mobility support. To address these issues, this paper proposes a four-layered infrastructure that combines edge, fog, and cloud computing with Software-Defined Networking (SDN), assisted by a lightweight proximity-aware heuristic placement strategy and mobility management. The proposed architecture follows a microservices-based decomposition of the gaming functionality and uses clustering algorithms to enable coordinated access to resources by edge and fog nodes. A dynamic, lightweight, proximity-aware virtual machine placement algorithm is presented to deploy application modules closer to users, depending on resource availability and user mobility. The proposed work is simulated using iFogSim2. The proposed model reduces latency by up to 73 percent and improves the task completion rate by 25 percent relative to baseline configurations under dynamic user mobility. These results indicate that the proposed strategy can be effective in improving the performance of latency-sensitive mobile gaming applications in edge-fog networks. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)

18 pages, 262 KB  
Entry
Assessment Analytics in Digital Assessments
by Okan Bulut and Seyma N. Yildirim-Erbasli
Encyclopedia 2026, 6(4), 81; https://doi.org/10.3390/encyclopedia6040081 - 2 Apr 2026
Viewed by 285
Definition
The rapid expansion of digital and technology-enhanced assessments has enabled the capture of far more than final responses or total scores. As learners navigate traditional formats, such as multiple-choice, short-answer, and performance tasks, digital delivery platforms routinely capture response times, response revisions, navigation patterns, and item-level metadata. More advanced formats, including interactive simulations, scenario-based tasks, and game-based assessments, further record fine-grained actions such as mouse clicks, keystrokes, hint requests, sequence of operations, and decision pathways. These increasingly rich data streams provide a multidimensional view of test-taker behavior, offering evidence about cognitive processes, strategy use, persistence, and motivation that goes beyond what correctness alone can reveal. Assessment analytics refers to the systematic collection, integration, and analysis of such data generated during the assessment process. In practice, this emerging field combines principles from psychometrics, learning analytics, data science, and human-computer interaction to evaluate the quality, validity, and fairness of assessments in digital environments. The ultimate goal of assessment analytics is to produce actionable evidence about how assessments measure what they intend to measure in contemporary, technology-rich educational contexts. Full article
(This article belongs to the Section Social Sciences)
23 pages, 2351 KB  
Article
A Spatio-Temporal Attention-Based Multi-Agent Deep Reinforcement Learning Approach for Collaborative Community Energy Trading
by Sheng Chen, Yong Yan, Jiahua Hu and Changsen Feng
Energies 2026, 19(7), 1730; https://doi.org/10.3390/en19071730 - 1 Apr 2026
Viewed by 248
Abstract
The high penetration of distributed energy resources (DERs) poses numerous challenges to community energy management, including intense source-load stochasticity, synchronized load surges triggered by multi-agent gaming, and potential privacy breaches. To tackle these issues, this paper proposes a coordinated energy trading framework driven by an intermediate market-rate pricing mechanism. Within this framework, a novel Multi-Agent Transformer Proximal Policy Optimization (MATPPO) algorithm is developed, adopting an LSTM–Transformer hybrid architecture and the centralized training with decentralized execution (CTDE) paradigm. During centralized training, an LSTM network extracts temporal evolution features from source-load data to handle environmental uncertainty, while a Transformer-based self-attention mechanism reconstructs the dynamic agent topology to capture spatial correlations. In the decentralized execution phase, prosumers make independent decisions using only local observations. This eliminates the need to upload internal device states, significantly enhancing the privacy of sensitive local information during the online execution phase. Additionally, a parameter-sharing mechanism enables agents to share policy networks, significantly enhancing algorithmic scalability. Simulation results demonstrate that MATPPO effectively mitigates power peaks and reduces the transformer capacity pressure at the main grid interface. Furthermore, it significantly lowers total community electricity costs while maintaining high computational efficiency in large-scale scenarios. Full article

20 pages, 8535 KB  
Article
The Emergent Rhythms of a Robot Vacuum Cleaner—An Empirically Grounded Account of Agential Realism
by Linus de Petris, Siamak Khatibi and Yuan Zhou
Multimodal Technol. Interact. 2026, 10(4), 36; https://doi.org/10.3390/mti10040036 - 1 Apr 2026
Viewed by 123
Abstract
This article builds on the argument that design for complex interactive systems should shift from creating linear transactional interactions toward organizing relational complexity. Grounded in Karen Barad’s agential realism, we argue that a designer’s role can benefit from not predefining interactions but from curating the material-discursive conditions under which meaningful relations can emerge. To explore the empirical and temporal dimensions of this practice, we conducted an exploratory workshop setting the conditions for emergent gameplay dynamics and discussions on agential realist anticipation. Participants utilized a custom-designed game and built their own physical controllers to anticipate and adapt to shifting gameplay conditions. Our results demonstrate how alterations in relational constraints, rather than explicit pre-programmed goals, drove the emergence of non-predefined gameplay rhythms. The findings provide empirical grounding for an agential realist understanding of anticipation, showing that an interactive system’s identity lies in its unfolding processual patterns rather than a static final state. Based on these findings, we propose three design principles for further exploration: Design for Relational Emergence, Design for Re-membering, and Design for Emergent Patterns. Consequently, we conclude by outlining a conceptual approach for non-linear computational architectures, drawing on principles from Enactive AI and reservoir computing. Full article

45 pages, 8329 KB  
Article
HRV-Based Multimodal Physiological Signal Monitoring Using Wearable Biosensors in Human–Computer Interaction: Cognitive Load in Real-Time Strategy Games
by Yunlong Shi, Muyesaier Kuerban, Yiyang Jin, Chaoyue Wang and Lu Chen
Sensors 2026, 26(7), 2181; https://doi.org/10.3390/s26072181 - 1 Apr 2026
Viewed by 422
Abstract
Real-time strategy (RTS) games provide a cognitively demanding and ecologically valid context for investigating workload dynamics in human–computer interaction (HCI). This multimodal study (HRV, NASA-TLX, behavior, interviews) examined multitasking, visual complexity, and decision pressure in 36 novice RTS players. High multitasking significantly increased subjective workload (total raw-TLX: from 22.50 ± 14.65 to 36.47 ± 20.19, p < 0.001) and prolonged completion time (from 317.17 ± 37.26 s to 354.92 ± 50.70 s, p < 0.001). Decision pressure elevated subjective workload (total raw-TLX: from 20 to 28, p = 0.008) without affecting performance. Although HRV did not consistently differentiate experimental conditions at the group level, it showed stable individual-level associations with perceived workload—both in expected directions (e.g., LF power positively correlated with total raw-TLX across four experiments, r = 0.28–0.53, all p < 0.05) and in inverse relationships that deviate from conventional stress models (e.g., stress index negatively correlated with total raw-TLX, r = −0.34 to −0.40, all p < 0.01). These findings suggest that autonomic responses in complex interactive environments may reflect dynamic engagement processes rather than uniform stress activation, supporting multimodal cognitive load assessment and offering transferable insights for interface design and workload evaluation in demanding HCI contexts. Full article
(This article belongs to the Special Issue Human–Computer Interaction in Sensor Systems)

27 pages, 1134 KB  
Article
TC-HUR: A Tri-Phase Cauchy-Assisted Hunger Games Search and Unified Runge–Kutta Optimizer for Robust DNA Data Storage
by Beyza Öztürk, Ayşenur İgit, Aylin Kaya, Zeynep Tuğsem Çamlıca, Selen Arıcı and Muhammed Faruk Şahin
Int. J. Mol. Sci. 2026, 27(7), 3134; https://doi.org/10.3390/ijms27073134 - 30 Mar 2026
Viewed by 369
Abstract
Although DNA-based data storage theoretically provides an information density of 2 bits per nucleotide, biochemical constraints transform sequence design into a high-dimensional constrained combinatorial optimization problem. The high computational cost and low encoding efficiency of conventional rule-based approaches make metaheuristic methods an effective alternative. This study proposes the TC-HUR hybrid algorithm to simultaneously optimize information density and conflicting biophysical constraints, including homopolymer (HP) length, GC content, melting temperature (Tm), and reverse-complement (RC) similarity. The method escapes local optima using Cauchy jump-enhanced Hunger Games Search (HGS), performs high-precision exploitation via Runge–Kutta (RUN) operators, and refines constraint violations at the nucleotide level through an adaptive intensive mutation mechanism. The algorithm is evaluated on a complex dataset of 1853 nucleotides under different noise regimes. TC-HUR outperforms RUN by 2.5% and HGS by 16.7% in average fitness. While maintaining homopolymer length near the ideal threshold, it reduces reverse-complement similarity to 19.10%, ensuring high sequence diversity. Under high-noise conditions, TC-HUR achieves a normalized edit distance of 0.1290, reducing insertion–deletion (indel) errors by approximately 14%. The results demonstrate that the proposed model effectively generates biophysically synthesizable and noise-resilient DNA codes. Full article

15 pages, 398 KB  
Article
The Mediating Role of Screen-Based Sedentary Behaviors in the Association of Parental Educational Level and BMI with Preschoolers’ Ultra-Processed Food Consumption
by Aristides M. Machado-Rodrigues, Helder Miguel Fernandes, António Stabelini Neto, Elizabete Alexandre Dos Santos, Josep A. Tur, Cristina Padez and Daniela Rodrigues
Nutrients 2026, 18(7), 1069; https://doi.org/10.3390/nu18071069 - 27 Mar 2026
Viewed by 332
Abstract
Background/Objectives: The mediating role of the diverse range of screen-based sedentary behaviors (SBs) remains understudied, particularly at younger ages. The present study examined the direct and indirect effects of parental BMI and education on ultra-processed food (UPF) consumption among preschoolers, testing the potential mediating role of screen time. Methods: The cross-sectional study sample comprised 919 kindergarten children (484 boys, 52.7%), with ages ranging from 2.2 to 6.8 years (mean: 4.7 ± 1.0 years). Screen-based sedentary behaviors (television viewing, smartphone use, tablet use, computer use, and playing electronic games) were measured by proxy-report fulfilled by parents, separately for weekdays and weekends. UPF consumption (drinks/yogurts, packaged/fast foods, and sweet/salty snacks) was assessed via 24 h recall scales. Path analysis mediation models tested direct effects of maternal/paternal BMI and education on UPF intake, and indirect effects through screen time, controlling for child age and sex. Results: Lower parental education and higher parental BMI were associated with increased mobile device use and UPF consumption (r = 0.10–0.28). Screen-based sedentary behaviors mediated the association between maternal BMI and UPF pathways (15–90% of total effects), particularly for sweet and salty snacks (50–90%). Parental education effects were also mediated by screen time (9–23% indirect effects), with paternal education showing stronger protection against packaged/fast foods. Conclusions: Mobile devices and watching television partially mediate intergenerational transmission of obesogenic dietary patterns from parental BMI/education to preschoolers’ UPF consumption. Findings of the current study support family-centered interventions targeting screen-time limits and UPF exposure, mainly at the weekends, to prevent early obesity trajectories. Full article
(This article belongs to the Special Issue Food Environments, Dietary Behaviors, and Population Health)

20 pages, 596 KB  
Systematic Review
The Effects of Family-Based Programs on Preschool Children’s Screen Time: A Systematic Review
by Idurre Arizmendi Sueiro and Markel Rico-González
Children 2026, 13(4), 446; https://doi.org/10.3390/children13040446 - 25 Mar 2026
Viewed by 363
Abstract
Background: Excessive screen time has serious adverse effects on people’s lives. Early childhood is the most vulnerable stage of the lifespan, and most children use televisions, computers, their parents’ mobile phones, or tablets for longer than recommended. Consequently, interest within the education community in programs for reducing screen time has grown; such programs could benefit families and professionals in early childhood development and care who support children in adhering to a healthy lifestyle. The objective of this study is therefore to compile family-based programs that have tried to reduce the time preschool-aged children spend in front of screens. Method: The search strategy was designed following the PICOS framework. A review was conducted in three databases (PubMed, Web of Science, and ProQuest Central) on 11 October 2024, following the PRISMA guidelines. The systematic review is registered in PROSPERO. Results: A total of 287 articles were initially found, and 15 met all inclusion criteria. Conclusions: The results reveal that programs based on training parents, in addition to playing games with children, have positive effects on reducing screen time in children up to six years old, even in specific populations. Full article
(This article belongs to the Section Global Pediatric Health)

27 pages, 3383 KB  
Article
Grouping and Matching: A Two-Stage Dispatch Framework for Reservation-Based Ridesplitting in Mega-Events
by Jiangtao Zhu, Hantong Wang and Zheng Zhu
Appl. Sci. 2026, 16(6), 3003; https://doi.org/10.3390/app16063003 - 20 Mar 2026
Viewed by 212
Abstract
Ridesplitting is a promising strategy to enhance vehicle efficiency in urban mobility services during mega-events. However, designing dispatching algorithms that effectively balance high service rates with acceptable passenger delays under high-demand, reservation-based scenarios remains a significant challenge. To address this issue, this study proposes a novel two-stage dispatch framework: Offline Grouping and Online Matching (OGOM). In the offline stage, the request grouping problem is formulated as a weighted hypergraph maximum matching (WHMM) problem. A sequence inference (SI) method is introduced to accelerate the construction of candidate ridesplitting trips, and the WHMM problem is solved optimally using the Gurobi solver. In the online stage, the dispatch process is completed within an event-based simulation environment built with MATSim. The framework is validated through a comprehensive case study of the Hangzhou Asian Games. The results demonstrate that the proposed OGOM framework achieves a mean service rate of 92.12%, representing an 8.74% improvement over a rolling horizon batching benchmark. Concurrently, the average passenger delay is maintained between 2 and 4 min across all simulation runs. Furthermore, the framework reduces the average request completion distance by over 30% compared to a non-ridesplitting baseline. The proposed SI method also shows a 49.35% reduction in computation time for hypergraph construction compared to conventional methods. These findings confirm that the OGOM framework provides an effective and scalable operational strategy for mega-event ridesplitting services, simultaneously improving service quality through optimized supply–demand matching and controlled passenger delays. Full article
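The offline grouping step can be illustrated generically. Below is a brute-force sketch of weighted hypergraph maximum matching on a hypothetical toy instance; the paper solves its WHMM instances exactly with Gurobi, and the edge sets and weights here are invented purely for illustration:

```python
from itertools import combinations

# Toy WHMM instance (hypothetical numbers): each hyperedge is a candidate
# shared trip over request ids, weighted by e.g. saved vehicle-kilometres.
edges = [
    (frozenset({1, 2}), 5.0),
    (frozenset({2, 3}), 4.0),
    (frozenset({3, 4, 5}), 6.0),
    (frozenset({1}), 1.0),
]

def disjoint(sets):
    """True if no request appears in two chosen trips."""
    seen = set()
    for s in sets:
        if seen & s:
            return False
        seen |= s
    return True

def max_weight_matching(edges):
    """Exhaustive search over edge subsets; fine for toy sizes, whereas
    realistic instances need an exact solver such as Gurobi."""
    best_sets, best_w = (), 0.0
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            if disjoint([e for e, _ in subset]):
                total = sum(w for _, w in subset)
                if total > best_w:
                    best_sets, best_w = subset, total
    return best_sets, best_w

chosen, weight = max_weight_matching(edges)  # picks {1,2} and {3,4,5}
```

Each chosen hyperedge becomes one pooled trip; the online stage then matches these pre-formed groups to vehicles as they become available.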
(This article belongs to the Special Issue Advances in Intelligent Transportation and Sustainable Mobility)

29 pages, 2311 KB  
Review
Trust Assessment Methods for Blockchain-Empowered Internet of Things Systems: A Comprehensive Review
by Mostafa E. A. Ibrahim, Yassine Daadaa and Alaa E. S. Ahmed
Appl. Sci. 2026, 16(6), 2949; https://doi.org/10.3390/app16062949 - 18 Mar 2026
Viewed by 302
Abstract
The Internet of Things (IoT) is rapidly pervading daily life and linking everything. Although greater connectivity offers many benefits, including higher productivity, process automation, and data-driven decision-making, it also poses a number of security risks. Modern threats to data authenticity and trust are becoming harder to handle through typical centralized security solutions. In this paper, we present a detailed investigation of the latest innovations and approaches for assessing reputation and trust in the blockchain-empowered Internet of Things (BIoT) area. A comprehensive literature search was conducted across major electronic databases, including IEEE, Springer, Elsevier, Wiley, MDPI, and top indexed conference proceedings, with publication years restricted to 2018 through 2025. The methodological quality of the 122 studies that met the inclusion criteria was assessed using predefined quality measures. We identify existing weaknesses at each layer of the IoT architecture, illustrating how autonomous, transparent, and tamper-resistant blockchain ledgers address them. We also analytically compare public, private, consortium, and hybrid blockchain network architectures to highlight the underlying trade-offs among security, reliability, and decentralization. We further assess how reputation evaluation techniques have evolved over time, moving from classical fuzzy logic and weighted-average models to mature game theory and machine learning (ML) models, and address their limitations in terms of computational overhead, scalability, adaptability, and deployment feasibility in IoT systems. Additionally, we outline future directions for trust assessment in BIoT systems and identify research limitations and potential solutions. Our research indicates that although ML-driven models offer more accurate predictions for identifying illicit node activities, they are still constrained by limited, unbalanced data and high processing overhead. Full article
(This article belongs to the Special Issue Advanced Blockchain Technologies and Their Applications)

19 pages, 1291 KB  
Article
Equilibrium-Based Multi-Objective Game Optimization for Coupling Suppression in High-Frequency Communication Networks
by Mohamed Ayari and Saleh M. Altowaijri
Mathematics 2026, 14(6), 1031; https://doi.org/10.3390/math14061031 - 18 Mar 2026
Viewed by 172
Abstract
Coupling interference in densely integrated high-frequency communication architectures leads to significant degradation in transmission efficiency, particularly in modern 5G and GHz-range platforms. From a mathematical perspective, mitigating such interference can be formulated as a multi-criteria optimization problem involving competing design objectives and interacting control mechanisms. In this paper, we develop an equilibrium-based optimization framework by modeling coupling suppression as a finite non-cooperative game. Isolation mechanisms are represented as strategic players whose actions are defined over constrained design spaces, while utility functions incorporate coupling minimization, insertion-loss penalties, and fabrication complexity. Under this formulation, stable mitigation strategies are characterized through Nash equilibrium conditions. To address the inherent trade-offs among performance metrics, the equilibrium computation is integrated with a Pareto multi-objective optimization scheme, yielding Nash–Pareto optimal configurations that balance electromagnetic isolation performance with implementation feasibility. Numerical full-wave simulations in the 2–12 GHz frequency band demonstrate that the proposed equilibrium solutions achieve substantial interference suppression, with reductions exceeding 30 dB compared with conventional baseline designs. The proposed framework provides a mathematically structured approach for interference mitigation and offers a generalizable methodology for multi-objective optimization in high-frequency communication systems. Full article
(This article belongs to the Special Issue Computational Intelligence in Communication Networks)

37 pages, 2964 KB  
Article
A Mathematical Framework for Four-Dimensional Chess: Extending Game Mechanics Through Higher-Dimensional Geometry
by Rinaldi (Unciuleanu) Oana and Costin-Gabriel Chiru
AppliedMath 2026, 6(3), 48; https://doi.org/10.3390/appliedmath6030048 - 17 Mar 2026
Viewed by 389
Abstract
This paper develops a rigorous mathematical and computational framework for four-dimensional chess defined on the discrete hypercubic lattice {1, …, 8}^4. We formalize piece movement using displacement sets in Z^4, define adjacency via the Chebyshev metric, and analyze the resulting move graphs for rooks, bishops, knights, queens, and kings. We establish exact mobility formulas, parity invariants, and connectivity properties, consolidating known product-graph results for rooks and kings while introducing a boundary-sensitive analysis of the four-dimensional knight verified by exhaustive enumeration. The mathematical framework is complemented by a fully implemented 4D chess engine and interactive visualization environment rendering all 64 (z,w)-slices of the hypercube simultaneously. The system supports full move legality, generalized special rules, multi-king checkmate detection, and reproducible state enumeration. Performance measurements and exploratory branching-factor estimates are obtained through reproducible random playouts using the publicly available implementation. We contextualize this ruleset within existing work on move graphs on Z_m^n, higher-dimensional leapers, spectral properties of grid graphs, toroidal analogs, and multidimensional visualization. Exploratory qualitative feedback (N = 18) is included to examine whether the visualization design is interpretable and navigable in practice, providing feasibility-oriented observations on how slice-based 4D projection and layered board rendering are perceived by non-expert users in an exploratory context. Together, the mathematical results, implemented engine, and visualization form a coherent foundation for the study of strategy, complexity, and human interaction in four-dimensional game systems. The framework provides a basis for future investigations into spectral analysis of move graphs, symmetry-aware search, hierarchical planning, and educational applications in high-dimensional geometry. Full article
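The Chebyshev-metric adjacency used for the king can be enumerated directly. The following sketch (an illustration, not the authors' engine) reproduces the standard boundary-sensitive mobility counts on the 8^4 lattice: an interior cell has 3^4 − 1 = 80 king neighbors, while a corner has 2^4 − 1 = 15:

```python
from itertools import product

def king_moves(pos, size=8, dims=4):
    """All lattice cells at Chebyshev distance 1 from pos on the
    {1, ..., size}^dims board, i.e. every nonzero delta in {-1, 0, 1}^dims
    whose target stays in bounds."""
    moves = []
    for delta in product((-1, 0, 1), repeat=dims):
        if all(d == 0 for d in delta):
            continue  # skip the null move
        tgt = tuple(p + d for p, d in zip(pos, delta))
        if all(1 <= t <= size for t in tgt):
            moves.append(tgt)
    return moves

interior = len(king_moves((4, 4, 4, 4)))  # 3**4 - 1 = 80
corner = len(king_moves((1, 1, 1, 1)))    # 2**4 - 1 = 15
```

The same delta-set pattern generalizes to the other pieces by swapping in their displacement sets, which is how the abstract's move graphs are defined.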
(This article belongs to the Section Deterministic Mathematics)

38 pages, 2846 KB  
Article
On Importance Sampling and Multilinear Extensions for Approximating Shapley Values with Applications to Explainable Artificial Intelligence
by Tim Pollmann and Jochen Staudacher
Complexities 2026, 2(1), 7; https://doi.org/10.3390/complexities2010007 - 17 Mar 2026
Viewed by 251
Abstract
Shapley values are the most widely used point-valued solution concept for cooperative games and have recently garnered attention for their applicability in explainable machine learning. Due to the complexity of Shapley value computation, users mostly resort to Monte Carlo approximations for large problems. We take a detailed look at an approximation method grounded in multilinear extensions proposed in 2021 under the name “Owen sampling”. We point out why Owen sampling is biased and propose unbiased alternatives based on combining multilinear extensions with stratified sampling and importance sampling. Finally, we discuss empirical results of the presented algorithms for various cooperative games, including real-world explainability scenarios. Full article
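The baseline Monte Carlo estimator that multilinear-extension methods such as Owen sampling refine is plain permutation sampling: average each player's marginal contribution over random orderings. A self-contained sketch on a hypothetical three-player glove game, whose exact Shapley values are 2/3, 1/6, 1/6 (this is an illustration of the generic estimator, not the paper's proposed algorithms):

```python
import random

def shapley_mc(n, v, samples=2000, seed=0):
    """Unbiased permutation-sampling estimate of Shapley values:
    for each sampled order, add every player's marginal contribution."""
    rng = random.Random(seed)
    phi = [0.0] * n
    players = list(range(n))
    for _ in range(samples):
        rng.shuffle(players)
        coalition = set()
        prev = v(frozenset())
        for p in players:
            coalition.add(p)
            cur = v(frozenset(coalition))
            phi[p] += cur - prev
            prev = cur
    return [x / samples for x in phi]

# Hypothetical glove game: worth 1 iff player 0 pairs with player 1 or 2.
def glove(S):
    return 1.0 if 0 in S and (1 in S or 2 in S) else 0.0

est = shapley_mc(3, glove)  # approximately [2/3, 1/6, 1/6]
```

Note that the efficiency axiom holds exactly per sample (marginals along one ordering telescope to v(N)), so the estimates always sum to the grand-coalition value.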

26 pages, 5212 KB  
Article
A Modular Non-Immersive VR Serious Game Framework for Telerehabilitation: Design and Proof-of-Concept Feasibility Study
by Rodrigo G. Pontes, Eduardo D. Dias, Juliana P. Weingartner, Natalia K. Monteiro, Elisa J. Valenzuela, Renata M. Rosa, Victoria Y. H. Silva, Íbis A. P. Moraes, Talita D. Silva-Magalhães, Carlos B. M. Monteiro and Luciano V. Araújo
Computers 2026, 15(3), 192; https://doi.org/10.3390/computers15030192 - 16 Mar 2026
Viewed by 382
Abstract
There is a growing need for accessible and engaging rehabilitation tools for individuals with neurodevelopmental disorders such as Cerebral palsy (CP), Down syndrome (DS), and Autism spectrum disorder (ASD). Serious games offer a promising approach, yet few are tailor-made to meet the therapeutic demands of these populations. A tailor-made, non-immersive virtual reality (VR) serious games framework featuring a basketball task was developed, with therapist-controlled modules for customization and monitoring. Twenty-eight participants (CP: 14; DS: 7; ASD: 7) completed the game across eight sessions, grouped into three practice phases: an initial session, an early adaptation phase, and a consolidated practice phase. Performance metrics included accuracy, reaction time, and number of victories. All groups improved performance across phases, with accuracy increasing significantly in central (p = 0.005) and total positions (p = 0.007). The number of victories also increased from the initial to the early adaptation phase (p = 0.019) and from the initial to the consolidated practice phase (p = 0.008). Participants with ASD showed significantly higher accuracy than the DS group, while CP and DS participants showed a temporary increase in reaction time during the early adaptation phase, followed by a reduction in the consolidated phase, suggesting task adaptation. These findings support the feasibility and short-term effectiveness of a modular, tailor-made serious games platform for telerehabilitation. Full article
