Search Results (241)

Search Parameters:
Keywords = cluster norm

46 pages, 2951 KB  
Article
Topology-Based Machine Learning and Regime Identification in Stochastic, Heavy-Tailed Financial Time Series
by Prosper Lamothe-Fernández, Eduardo Rojas and Andriy Bayuk
Mathematics 2026, 14(7), 1098; https://doi.org/10.3390/math14071098 - 24 Mar 2026
Abstract
Classic machine learning and regime identification methods applied to financial time series lack theoretical guarantees and exhibit systematic failure modes: heavy tails invalidate moment-based geometry, rendering distances and centroids dominated by extremes or unstable; jumps violate smoothness, destabilizing local regressions, kernel methods, and gradient-based learning; and non-stationarity disrupts neighborhood relations, so distances in classical feature spaces no longer reflect meaningful proximity. To address these challenges, we propose a topology-based machine-learning framework grounded in probabilistic reconstruction of state-space geometry, which replaces moment- and smoothness-dependent representations with deformation-stable summaries of state-space geometry, preserving neighborhoods, adjacency, and topology. The finite-sample validity of homeomorphic state-space reconstruction, required for topology-based machine learning, is assessed through numerical studies on synthetic data with heavy tails, jumps, and known ground-truth regimes. Further diagnostics of local invertibility and bounded geometric distortion quantify when embedding windows are consistent with local diffeomorphic behavior, enabling metric-sensitive, geometry-aware learning. Clustering of Hilbert-space summaries accurately recovers underlying market tail-risk regimes with robust results across selected filtrations. Temporal, feature-space, and cluster-label null tests confirm that topology-based clustering captures genuine topological structure rather than noise or artifacts, and encodes temporal dependencies at local, mesoscopic, and network levels associated with market regimes. Full article
(This article belongs to the Section E: Applied Mathematics)
20 pages, 1426 KB  
Review
Profiling Decision-Making Styles Under Healthcare Resource Scarcity: An Interdisciplinary Clustering Approach
by Micaela Pinho, Fátima Leal and Isabel Miguel
Information 2026, 17(3), 287; https://doi.org/10.3390/info17030287 - 14 Mar 2026
Viewed by 203
Abstract
Scarcity of healthcare resources requires prioritisation decisions that raise complex ethical, economic, and social challenges. While normative frameworks provide guidance on how such decisions ought to be made, growing evidence suggests that individuals differ substantially in how they approach morally charged allocation choices. This study investigates heterogeneity in decision-making styles and support for healthcare prioritisation criteria using an interdisciplinary approach that integrates health economics, social psychology, and computational methods to identify latent decision-making profiles among a sample of adults residing in Portugal. Data were collected from adults residing in Portugal using a structured online questionnaire comprising socio-demographic characteristics, decision-making styles, and preferences elicited through twenty hypothetical healthcare rationing scenarios. The results reveal three meaningful decision-making profiles characterised by different combinations of cognitive styles and ethical prioritisation patterns: analytically oriented decision-makers prioritising health gains; intuitive, context-sensitive decision-makers balancing clinical and social criteria; heuristic-driven decision-makers relying on simpler or less differentiated heuristics. These findings demonstrate that, within this sample, healthcare prioritisation preferences are shaped by systematic variations in decision style rather than a single moral or rational framework. By linking behavioural heterogeneity with ethical decision-making, this study contributes to theoretical debates on healthcare rationing and demonstrates the value of clustering techniques for uncovering latent structures in complex decision data. The results provide insights relevant for the design of decision-support systems and rationing policies, which may be adapted to accommodate heterogeneous decision styles in comparable settings. Full article
(This article belongs to the Topic Machine Learning and Data Mining: Theory and Applications)
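The latent-profile clustering described in the abstract above can be sketched with a minimal, hypothetical example; the questionnaire items, group means, and sample sizes below are invented stand-ins, not the study's data, and plain k-means stands in for whatever clustering the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Invented stand-in for per-respondent questionnaire scores (4 items,
# 3 latent decision-style groups of 40 respondents each).
X = np.vstack([rng.normal(m, 0.5, size=(40, 4)) for m in (0.0, 1.5, 3.0)])
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
print(np.bincount(profiles))  # three recovered profiles of roughly 40 respondents each
```

Standardizing before clustering keeps items on different scales from dominating the distance metric, which matters when mixing cognitive-style and preference scores.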
22 pages, 327 KB  
Article
From Participants to Community Partners: A Novel Community-Based Participatory Research (CBPR) Approach to Autistic-Led Inquiry in Digital and Virtual Environments
by Vivian Darlene Grillo, Margherita Zani, Vittoria Veronesi and Paola Venuti
Healthcare 2026, 14(6), 702; https://doi.org/10.3390/healthcare14060702 - 10 Mar 2026
Viewed by 341
Abstract
Background/Objectives: Autism research has often interpreted autistic sociality through neurotypical norms, limiting ecological accounts of autistic meaning-making and context-sensitive support needs. Social virtual environments (SVEs), such as VRChat, allow modulation of sensory exposure, social distance, and participation pace, potentially enabling autistic-led interaction with greater autonomy and predictability. This study examined how autistic young adults co-construct meanings around social interaction, identity, and self-regulation in peer-led discussions within an SVE; identified context-sensitive processes relevant to well-being; and evaluated the feasibility and acceptability of SVEs as a participatory research setting. Methods: Sixteen autistic young adults (18–38 years; DSM-5-TR, Level 1) participated in nine remote sessions conducted in VRChat, coordinated via a co-designed Discord server. The peer-led discussions were audio-video recorded, transcribed, and anonymized. Data were analyzed using reflexive thematic analysis, combining inductive session-level coding, cross-session thematic clustering, and participatory refinement with community partners. Results: Autistic experience was framed as a context-dependent negotiation of interpretive risk, interactional workload, masking-related energy costs, and epistemic injustice, alongside future-oriented accounts emphasizing access, dignity, and systemic redesign. Observational memos documented multimodal participation, distributed peer facilitation, and accessibility-relevant sensitivities to environmental stability. Community partners reported positive experiences and supported the acceptability of private-world VRChat sessions. Conclusions: Peer-led discussions in an SVE can support ecologically grounded, participant-centered qualitative research, offering methodological opportunities to study autistic meaning-making under conditions that reduce demands and risks. Full article
26 pages, 579 KB  
Article
A High-Order Parallel Framework for Simultaneous Root-Finding in Nonlinear Systems with Multiple Solutions
by Mudassir Shams and Bruno Carpentieri
AppliedMath 2026, 6(3), 43; https://doi.org/10.3390/appliedmath6030043 - 9 Mar 2026
Viewed by 179
Abstract
Nonlinear systems with multiple roots arise frequently in biomedical and engineering models, yet their reliable numerical solution remains a challenging task. Many classical methods suffer from sensitivity to initial guesses, reduced convergence rates, and loss of accuracy in the presence of multiple or clustered solutions. In addition, the exploitation of parallelism to improve robustness and computational efficiency has received limited attention. In this work, we propose a high-accuracy parallel numerical framework of fourth-order convergence for the simultaneous approximation of all solutions of nonlinear systems with multiple roots. The proposed scheme is derivative-free and structurally decoupled, enabling efficient parallel implementation and robust convergence even when reliable initial approximations are unavailable. The effectiveness of the method is demonstrated on representative biomedical engineering models, including a glucose–insulin–glucagon regulatory network and a multi-compartment pharmacokinetic system, both exhibiting strong nonlinearity and multistability. Numerical experiments confirm stable convergence toward distinct solution clusters, machine-level accuracy, reduced residual norms, and improved computational performance when compared with existing approaches. These results indicate that the proposed framework provides a reliable and efficient alternative for solving nonlinear systems with multiple roots in complex applied settings. Full article
(This article belongs to the Section Computational and Numerical Mathematics)
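For context on what "simultaneous" root-finding means, here is the classical Durand–Kerner (Weierstrass) iteration, which approximates all roots of a monic polynomial at once. It is only second-order and limited to scalar polynomials, so it illustrates the idea rather than the paper's fourth-order, derivative-free scheme for nonlinear systems.

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    # Classical simultaneous root-finder for a monic polynomial
    # (leading coefficient 1); updates every root estimate each sweep.
    p = np.poly1d(coeffs)
    n = len(coeffs) - 1
    z = np.array([0.4 + 0.9j]) ** np.arange(1, n + 1)  # standard distinct starting points
    for _ in range(iters):
        for i in range(n):
            others = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z[i] = z[i] - p(z[i]) / others
    return z

roots = durand_kerner([1, 0, 0, -1])  # z^3 - 1: the three cube roots of unity
print(np.sort_complex(roots))
```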
25 pages, 2067 KB  
Article
Semantic and Engineering-Based Embedding for Classification List Development
by Jadeyn Feng, Allison Lau, Melinda Hodkiewicz, Caitlin Woods and Michael Stewart
Mach. Learn. Knowl. Extr. 2026, 8(3), 61; https://doi.org/10.3390/make8030061 - 4 Mar 2026
Viewed by 383
Abstract
The creation and application of classification category labels are essential tasks for transforming complex information into structured knowledge. Categories are used for summary and reporting purposes and have historically been identified by domain experts based on their past experiences and norms. Our interest lies in the general case where expert-generated category lists require improvement, and unsupervised learning, on its own, struggles to effectively identify categories for multi-class classification of human-generated texts. We hypothesise that including an annotated knowledge graph (KG) in an embedding process will positively impact unsupervised clustering performance. Our goal is to identify clusters that can be labelled and used for classification. We look at unsupervised clustering of Maintenance Work Order (MWO) texts. MWOs capture vital observations about equipment failures in process and heavy industries. The selected KG contains a mapping of equipment types to their inherent function based on the IEC 81346-2 international standard for classification of objects in industrial systems. Performance is assessed by statistical analysis, subject matter experts, and Normalized Mutual Information score. We demonstrate that Word2Vec Bi-LSTM and Sentence-BERT NN embedding methods can leverage equipment inherent function information in the KG to improve failure mode cluster identification for the MWO. Organisations seeking to use AI to automate assignment of a failure mode code to each MWO currently need test sets classified by humans. The results of this work suggest that a semantic layer containing a knowledge graph mapping equipment types to inherent function, and inherent function to failure modes could assist in quality control for automated failure mode classification. Full article
(This article belongs to the Section Data)
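The Normalized Mutual Information score used for assessment above can be computed directly with scikit-learn; the label vectors here are invented toy data, not MWO clusters.

```python
from sklearn.metrics import normalized_mutual_info_score

# Invented toy labels: a clustering that matches the expert partition
# exactly, up to a renaming of the cluster ids.
expert    = [0, 0, 1, 1, 2, 2]
predicted = [1, 1, 0, 0, 2, 2]
print(normalized_mutual_info_score(expert, predicted))  # 1.0: NMI ignores label permutations
```

Permutation invariance is what makes NMI suitable here: unsupervised cluster ids carry no inherent correspondence to expert category names.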
21 pages, 1379 KB  
Article
A Transformer-Based Semantic Encoding Framework for Quantitative Analysis of Large-Scale Textual Reviews
by Darjan Karabašević, Aleksandra Vujko, Vuk Mirčetić, Gabrijela Popović and Dragiša Stanujkić
Axioms 2026, 15(3), 175; https://doi.org/10.3390/axioms15030175 - 28 Feb 2026
Viewed by 330
Abstract
Increasing turbulence in contemporary business environments has made the quantitative analysis of unstructured textual data a central methodological challenge for researchers and decision-makers. The increasing availability of large-scale textual data has heightened the need for quantitative frameworks that can transform unstructured language into analyzable numerical representations. Transformer-based language models address this need by encoding text into high-dimensional semantic embeddings. Yet, these representations are commonly treated as black-box inputs for downstream tasks, with limited examination of their intrinsic numerical and geometric properties. The research in this manuscript addresses this gap by proposing a quantitative framework for analyzing transformer-based semantic embeddings as high-dimensional metric spaces prior to task-specific modeling. We employ an innovative methodological approach: examining the dispersion of vector norms to detect concentration of measure, evaluating the distribution of pairwise cosine similarities between vectors, and applying principal component analysis. For the purpose of the research, 3034 visitor-generated reviews related to national park experiences were used. Textual inputs are deterministically mapped into a normalized 384-dimensional embedding space using a transformer-based encoder. The analysis examines numerical stability through vector norm dispersion, semantic organization via cosine similarity distributions, variance structure using principal component analysis, and internal organization through unsupervised clustering validity metrics. Clustering is considered successful when high separation between clusters and high cohesion within clusters are achieved, which is why a single measure combining separation and cohesion metrics is proposed in the research. The results show almost perfect norm stability, supporting the choice of angular similarity as the appropriate semantic metric. Variance decomposition and clustering results reveal a continuous high-dimensional semantic structure with no dominant latent components or clearly separable clusters. These results suggest that semantic meaning is best treated as a continuous metric space rather than discrete categories, highlighting the need for representational diagnostics before predictive modeling. Full article
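The norm-dispersion and cosine-distribution diagnostics described above can be sketched on synthetic unit vectors; random Gaussian data stands in for the actual review embeddings, and the dimension 384 matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 384))               # stand-in for review embeddings
X /= np.linalg.norm(X, axis=1, keepdims=True)  # L2-normalize, as the encoder does
print(np.linalg.norm(X, axis=1).std())         # norm dispersion: ~0 after normalization

cos = X @ X.T                                  # pairwise cosines (unit vectors)
iu = np.triu_indices(len(X), k=1)
# In high dimension, random directions concentrate near orthogonality:
print(cos[iu].mean(), cos[iu].std())           # mean near 0, std near 1/sqrt(384)
```

Near-zero norm dispersion is exactly the "norm stability" the abstract reports, and it is what licenses reading the geometry through angles (cosines) rather than Euclidean distances.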
14 pages, 551 KB  
Article
Improved New Block Preconditioner for Solving 3 × 3 Block Saddle Point Problems
by Xin-Hui Shao and Xin-Yang Liu
Axioms 2026, 15(3), 167; https://doi.org/10.3390/axioms15030167 - 27 Feb 2026
Viewed by 168
Abstract
In order to overcome the computational challenges associated with block preconditioners for Krylov subspace methods, particularly those arising from Schur complement systems, this paper proposes an improved new block (INB) preconditioner for solving 3 × 3 block saddle point problems. A detailed semi-convergence analysis of the iterative scheme induced by the INB preconditioner is provided. Moreover, the spectral properties of the preconditioned matrix are analyzed, revealing strong eigenvalue clustering around one. Efficient formulas for selecting quasi-optimal parameters are derived based on Frobenius-norm minimization. Extensive numerical experiments demonstrate that the proposed INB preconditioner significantly reduces iteration counts and CPU time compared with several existing block preconditioners. Full article
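The abstract does not give the INB formulas, but the flavor of Frobenius-norm parameter selection can be shown on a toy case: the scalar α minimizing ||I − αA||_F has the closed form α = tr(A)/||A||²_F (set the derivative of the quadratic n − 2α tr(A) + α²||A||²_F to zero). The matrix below is an invented SPD example, not a saddle point block.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5 * np.eye(5)  # toy SPD matrix, not a 3x3-block saddle point system

# Closed-form minimizer of ||I - a*A||_F over scalar a.
alpha = np.trace(A) / np.linalg.norm(A, "fro") ** 2
resid = lambda a: np.linalg.norm(np.eye(5) - a * A, "fro")
print(resid(alpha), resid(0.5 * alpha), resid(2 * alpha))  # alpha gives the smallest residual
```

The same principle, minimizing a Frobenius-norm distance to the identity over free parameters, is what drives quasi-optimal parameter formulas for block preconditioners.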
47 pages, 3245 KB  
Article
DISPEL-GNN: De-Illusion via Spectral Stability and Perturbation Bound-Enforced Learning for Community Detection with Risk-Aware Dynamic Attention in Graph Neural Networks
by Daozheng Qu, Yanfei Ma and Mykhailo Pyrozhenko
Mathematics 2026, 14(4), 602; https://doi.org/10.3390/math14040602 - 9 Feb 2026
Cited by 1 | Viewed by 407
Abstract
Community detection in graphs can be viewed as the estimation of a partition map that remains stable under admissible perturbations of graph topology and node attributes. While modern graph neural networks (GNNs) achieve strong empirical accuracy, they often exhibit severe assignment drift under minor perturbations, leading to illusory community structures. In this work, we propose DISPEL-GNN, a stability-aware graph learning framework that integrates spectral operator regularization, Bayesian uncertainty modeling, and risk-aware dynamic attention for perturbation-bounded community detection. The model explicitly constrains graph operators through uniform spectral norm bounds, high-frequency energy suppression, and commutator alignment while dynamically modulating message passing based on node-level spectral risk and epistemic uncertainty. We further formalize instability via an assignment-drift functional and establish perturbation bounds linking drift to operator norms and spectral gaps, complemented by a PAC-Bayesian generalization guarantee. Extensive experiments on real-world benchmarks including Cora, Citeseer, Pubmed, Cora-Full, and DBLP demonstrate that DISPEL-GNN consistently reduces assignment drift by 18–35% under feature noise and edge perturbations while improving clustering quality by up to +3.0 NMI and +0.04 ARI compared to strong baselines such as GAT and Bayesian GNNs. The normalized mutual information (NMI), adjusted Rand index (ARI), and PAC-Bayesian (PAC) constraints serve as evaluative and theoretical instruments in this study. Additional studies on synthetic graphs with controlled spectral gaps confirm that the proposed method maintains stable community assignments in low-gap regimes where classical spectral and GNN-based methods degrade sharply. These results establish DISPEL-GNN as a mathematically grounded and practically effective framework for robust and interpretable community detection. A metric-wise dominance analysis shows that DISPEL-GNN dominates across most accuracy and robustness criteria, with minor tradeoffs in modularity on selected datasets. These results indicate that explicitly modeling stability and uncertainty provides a principled pathway toward reliable and interpretable community detection in noisy graph environments. Full article
(This article belongs to the Special Issue Machine Learning and Graph Neural Networks)
18 pages, 14590 KB  
Article
VTC-Net: A Semantic Segmentation Network for Ore Particles Integrating Transformer and Convolutional Block Attention Module (CBAM)
by Yijing Wu, Weinong Liang, Jiandong Fang, Chunxia Zhou and Xiaolu Sun
Sensors 2026, 26(3), 787; https://doi.org/10.3390/s26030787 - 24 Jan 2026
Viewed by 376
Abstract
In mineral processing, visual-based online particle size analysis systems depend on high-precision image segmentation to accurately quantify ore particle size distribution, thereby optimizing crushing and sorting operations. However, due to multi-scale variations, severe adhesion, and occlusion within ore particle clusters, existing segmentation models often exhibit undersegmentation and misclassification, leading to blurred boundaries and limited generalization. To address these challenges, this paper proposes a novel semantic segmentation model named VTC-Net. The model employs VGG16 as the backbone encoder, integrates Transformer modules in deeper layers to capture global contextual dependencies, and incorporates a Convolutional Block Attention Module (CBAM) at the fourth stage to enhance focus on critical regions such as adhesion edges. BatchNorm layers are used to stabilize training. Experiments on ore image datasets show that VTC-Net outperforms mainstream models such as UNet and DeepLabV3 in key metrics, including MIoU (89.90%) and pixel accuracy (96.80%). Ablation studies confirm the effectiveness and complementary role of each module. Visual analysis further demonstrates that the model identifies ore contours and adhesion areas more accurately, significantly improving segmentation robustness and precision under complex operational conditions. Full article
(This article belongs to the Section Sensing and Imaging)
29 pages, 651 KB  
Article
Public Perceptions of Generative AI in Creative Industries: A Reddit-Based Text Mining Study
by Mitja Bervar, Mirjana Pejić Bach and Tine Bertoncel
Systems 2026, 14(1), 116; https://doi.org/10.3390/systems14010116 - 22 Jan 2026
Viewed by 1393
Abstract
The integration of generative AI into creative industries is reshaping how content is produced, evaluated, and distributed. While recent advancements offer new opportunities for automation and innovation, they also raise questions about authorship, authenticity, and professional identity. This study examines public discourse on generative AI in creative domains through a text-mining analysis of nearly 4000 Reddit posts and comments. Drawing on six relevant subreddits from 2022 to 2025, the research investigates the structure of user engagement, interaction dynamics, and language patterns. It identifies dominant terms and phrases related to AI creativity, explores thematic clusters, and compares discussion styles across key tools such as Midjourney, ChatGPT, Stable Diffusion, and DALL·E. Additionally, it provides a sentiment overview based on automated classification and narrative interpretation. The findings show that Reddit users engage with generative AI not only as a set of technical tools but as a source of cultural, ethical, and creative negotiation. This study contributes to a deeper understanding of how digital transformation in creative industries is shaped by public perception, platform discourse, and evolving community norms. Full article
34 pages, 11603 KB  
Article
Mapping Co-Creation and Co-Production in Public Administration: A Scientometric Study
by Rok Hržica
Adm. Sci. 2026, 16(1), 55; https://doi.org/10.3390/admsci16010055 - 21 Jan 2026
Viewed by 708
Abstract
This study presents a comprehensive bibliometric and science mapping analysis of research on co-creation and co-production in public administration, based on 819 publications indexed in the Web of Science (WoS). The analysis of scientific production in this field shows sparse early contributions before 2005, followed by steady growth after 2010 and accelerated expansion from 2016 onward, driven primarily by European and United States research communities. In terms of scholarly influence, the results identify a stable core of highly productive and influential authors, institutions, and countries, with strong concentration in Northern and Western Europe and Anglo-Saxon contexts. To address the intellectual structure of the field, science mapping identifies four dominant thematic clusters: (1) co-production and value creation, (2) participation and public engagement, (3) governance and policy, and (4) knowledge development, lessons learned, and evaluative insights. Examining thematic and keyword evolution over time, the findings indicate a shift from early conceptual and normative discussions toward more applied and implementation-oriented research, with increasing attention to barriers, challenges, and enabling conditions in recent years. Overall, the findings show that research on co-creation and co-production has evolved from conceptual fragmentation toward greater thematic consolidation and analytical maturity, while persistent implementation challenges remain. By systematically mapping these developments, the study provides a structured overview that supports future conceptual integration and informs both research agendas and practice-oriented discussions on co-creation and co-production in public administration. Full article
24 pages, 662 KB  
Article
Between Inclusion and Artificial Intelligence: A Study of the Training Gaps of University Teaching Staff in Spain
by Lina Higueras-Rodríguez, Johana Muñoz-López, Marta Medina-García and Carmen Lucena-Rodríguez
Educ. Sci. 2026, 16(1), 151; https://doi.org/10.3390/educsci16010151 - 19 Jan 2026
Viewed by 442
Abstract
This study analyzes how Spanish universities integrate inclusion, accessibility, digital competence, and artificial intelligence (AI) into the professional development of university teaching staff, in a context marked by rapid digital transformation. The research addresses the lack of comparative evidence on how these key dimensions of contemporary higher education are articulated, or remain disconnected, across institutions. Using a mixed-methods design, 83 training courses delivered between 2020 and 2025 in 24 public and private universities were examined through qualitative analysis, coding matrices, and hierarchical cluster analysis. The study adopts an explicitly exploratory and typological approach, aimed at mapping institutional patterns rather than establishing causal explanations. The results reveal a highly heterogeneous and weakly cohesive training landscape. Inclusion appears primarily as a normative discourse with limited pedagogical depth; accessibility is frequently reduced to technical compliance; and AI (particularly generative AI) is incorporated from instrumental, efficiency-oriented approaches. Ethical dimensions, algorithmic bias, and digital accessibility are virtually absent. The hierarchical cluster analysis identifies four institutional profiles: technocentric without inclusion, analogically inclusive, advanced hybrid, and low-density training models. These patterns show how institutional orientations shape the professional development trajectories of university teaching staff. The study highlights the need for comprehensive faculty development strategies that integrate inclusion, accessibility, and responsible AI use, and offers a structured typological baseline for future research assessing impact on teaching practice and student experience. Full article
20 pages, 3982 KB  
Article
AI-Driven Decimeter-Level Indoor Localization Using Single-Link Wi-Fi: Adaptive Clustering and Probabilistic Multipath Mitigation
by Li-Ping Tian, Chih-Min Yu, Li-Chun Wang and Zhizhang (David) Chen
Sensors 2026, 26(2), 642; https://doi.org/10.3390/s26020642 - 18 Jan 2026
Viewed by 332
Abstract
This paper presents an Artificial Intelligence (AI)-driven framework for high-precision indoor localization using single-link Wi-Fi channel state information (CSI), targeting real-time deployment in complex multipath environments. To overcome challenges such as signal distortion and environmental dynamics, the proposed system integrates adaptive and unsupervised intelligence modules into the localization pipeline. A refined two-stage time-of-flight (TOF) estimation method is introduced, combining a minimum-norm algorithm with a probability-weighted refinement mechanism that improves ranging accuracy under non-line-of-sight (NLOS) conditions. Simultaneously, an adaptive parameter-tuned DBSCAN algorithm is applied to angle-of-arrival (AOA) sequences, enabling unsupervised spatio-temporal clustering for stable direction estimation without requiring prior labels or environmental calibration. These AI-enabled components allow the system to dynamically suppress multipath interference, eliminate positioning ambiguity, and maintain robustness across diverse indoor layouts. Comprehensive experiments conducted on the Widar2.0 dataset demonstrate that the proposed method achieves decimeter-level accuracy with an average localization error of 0.63 m, outperforming existing methods such as “Widar2.0” and “Dynamic-MUSIC” in both accuracy and efficiency. This intelligent and lightweight architecture is fully compatible with commodity Wi-Fi hardware and offers significant potential for real-time human tracking, smart building navigation, and other location-aware AI applications. Full article
(This article belongs to the Special Issue Sensors for Indoor Positioning)
22 pages, 1044 KB  
Article
Explaining Food Waste Dissimilarities in the European Union: An Analysis of Economic, Demographic, and Educational Dimensions
by Claudiu George Bocean
Foods 2025, 14(24), 4244; https://doi.org/10.3390/foods14244244 - 10 Dec 2025
Abstract
Food waste remains a persistent sustainability challenge for the European Union, revealing how economic development, demographic structures, and educational attainment intersect to shape consumption behavior. Although rising prosperity can enhance efficiency, it often encourages overproduction and habits of abundance that increase food waste. This study investigates the structural drivers behind the variation in per capita food waste across EU member states by examining the combined influences of economic growth, human capital, and population density. Using a cross-country dataset, the analysis integrates factorial methods to identify latent relationships among socioeconomic indicators, a multilayer perceptron to capture nonlinear dependencies, and cluster analysis to classify countries according to shared development and education patterns. The results show that higher income and consumption levels tend to elevate food waste. Nevertheless, this effect is moderated when educational attainment and public awareness are stronger, highlighting the role of knowledge in shaping responsible consumption. The neural network further demonstrates that the relationship between prosperity and waste is not linear but mediated by the cognitive and social capacities of each society. Cluster patterns reveal regional models where sustainability policies and cultural norms contribute to more efficient food management. Overall, the study emphasizes that food waste arises from structural disparities rather than isolated behaviors, offering an evidence-based foundation for integrated EU policies that support more sustainable and equitable resource use. Full article
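The country-clustering step can be sketched with a plain k-means pass over standardized socioeconomic indicators. The indicator values below are synthetic placeholders, not Eurostat data, and the naive first-rows initialization is only adequate for this toy example.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm; naive init with the first k rows
    (fine for this toy data, not for real indicator sets)."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each country to its nearest centroid
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # recompute centroid of each cluster
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Standardized illustrative indicators per country (rows):
# [income level, tertiary-education rate, per-capita food waste].
X = np.array([
    [ 1.2,  1.0,  0.9],   # higher-income, higher-education profile
    [-1.0, -1.1, -0.9],   # lower-income, lower-education profile
    [ 1.0,  1.1,  1.0],
    [ 1.1,  0.9,  1.1],
    [-1.2, -0.9, -1.0],
    [-0.9, -1.0, -1.1],
])
labels = kmeans(X, k=2)
```

With indicators standardized this way, countries sharing a development and education pattern fall into the same cluster, which is the grouping logic the study builds its regional models on.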
(This article belongs to the Special Issue Recent Advances in Sustainable Food Manufacturing)
15 pages, 741 KB  
Article
Spatializing Trust: A GeoAI-Based Model for Mapping Digital Trust Ecosystems in Mediterranean Smart Regions
by Simona Epasto
ISPRS Int. J. Geo-Inf. 2025, 14(12), 491; https://doi.org/10.3390/ijgi14120491 - 10 Dec 2025
Abstract
As digital transformation intensifies, the governance of spatial data infrastructures is becoming increasingly dependent on the capacity to generate and sustain trust—technological, institutional and civic. This challenge is particularly acute in the Mediterranean region, where uneven digital development and fragmented governance structures create disparities in how geospatial data are produced, accessed, and validated. In response, this paper introduces an integrated framework combining geospatial artificial intelligence (GeoAI) and blockchain technologies to support transparent, verifiable and spatially explicit models of digital trust. Based on case studies from the Horizon 2020 TRUST project, the framework defines trust through territorial indicators across three dimensions: digital infrastructure, institutional transparency, and civic engagement. The system uses interpretable AI models, such as Random Forests, K-means clustering and convolutional neural networks, to classify regions into trust typologies based on multi-source geospatial data. These outputs are then transformed into semantically structured spatial products and anchored to the Ethereum blockchain via smart contracts and decentralized storage (IPFS), thereby ensuring data integrity, auditability and version control. Experimental results from pilot regions in Italy, Greece, Spain and Israel demonstrate the effectiveness of the framework in detecting spatial patterns of trust and producing interoperable, reusable datasets. The findings highlight significant spatial asymmetries in digital trust across the Mediterranean region, suggesting that trust is a measurable territorial condition, not merely a normative ideal. By combining GeoAI with decentralized verification mechanisms, the proposed approach helps to develop accountable, explainable and inclusive spatial data infrastructures, which are essential for democratic digital governance in complex regional environments. Full article
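The mapping from the three trust dimensions to regional typologies can be caricatured with a simple threshold rule. The paper's pipeline uses learned models (Random Forests, K-means, CNNs); the rule, thresholds, typology names, region labels, and scores below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical per-region scores in [0, 1] for the three trust dimensions:
# digital infrastructure, institutional transparency, civic engagement.
# Region labels and values are invented, not TRUST-project data.
regions = {
    "Region A (IT)": np.array([0.80, 0.75, 0.70]),
    "Region B (GR)": np.array([0.40, 0.50, 0.70]),
    "Region C (ES)": np.array([0.90, 0.80, 0.85]),
    "Region D (IL)": np.array([0.50, 0.30, 0.40]),
}

def trust_typology(scores, hi=0.7, lo=0.5):
    """Coarse rule-based stand-in for the learned classifiers:
    average the three dimensions and threshold into a typology."""
    mean = scores.mean()
    if mean >= hi:
        return "consolidated"
    if mean >= lo:
        return "emerging"
    return "fragile"

typologies = {name: trust_typology(s) for name, s in regions.items()}
```

The point of the sketch is only the shape of the output: each region gets a discrete, spatially mappable trust typology derived from its three-dimensional indicator profile.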
