Search Results (968)

Search Parameters:
Keywords = digital field mapping

15 pages, 884 KB  
Article
AI-Driven Typography: A Human-Centered Framework for Generative Font Design Using Large Language Models
by Yuexi Dong and Mingyong Gao
Information 2026, 17(2), 150; https://doi.org/10.3390/info17020150 - 3 Feb 2026
Abstract
This paper presents a human-centered, AI-driven framework for font design that reimagines typography generation as a collaborative process between humans and large language models (LLMs). Unlike conventional pixel- or vector-based approaches, our method introduces a Continuous Style Projector that maps visual features from a pre-trained ResNet encoder into the LLM’s latent space, enabling zero-shot style interpolation and fine-grained control of stroke and serif attributes. To model handwriting trajectories more effectively, we employ a Mixture Density Network (MDN) head, allowing the system to capture multi-modal stroke distributions beyond deterministic regression. Experimental results show that users can interactively explore, mix, and generate new typefaces in real time, making the system accessible for both experts and non-experts. The approach reduces reliance on commercial font licenses and supports a wide range of applications in education, design, and digital communication. Overall, this work demonstrates how LLM-based generative models can enhance creativity, personalization, and cultural expression in typography, contributing to the broader field of AI-assisted design.
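The MDN head mentioned above predicts a mixture distribution over stroke offsets rather than a single point. The paper's architecture is not given in the listing, so the following is a minimal NumPy sketch of the general technique; the output layout, component count, and dimensions are hypothetical.

```python
import numpy as np

def mdn_split(raw, n_components=5, dim=2):
    """Split a raw MDN head output into mixture weights, means, and scales.

    Layout assumed here (hypothetical): [logits | means | log_sigmas].
    """
    k, d = n_components, dim
    logits = raw[:k]
    means = raw[k:k + k * d].reshape(k, d)
    sigmas = np.exp(raw[k + k * d:].reshape(k, d))  # positive scales
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                        # softmax over components
    return weights, means, sigmas

def mdn_sample(raw, rng, n_components=5, dim=2):
    """Draw one stroke offset from the predicted mixture (diagonal Gaussians)."""
    weights, means, sigmas = mdn_split(raw, n_components, dim)
    c = rng.choice(n_components, p=weights)         # pick a mixture component
    return rng.normal(means[c], sigmas[c])          # sample from that Gaussian

rng = np.random.default_rng(0)
raw = rng.normal(size=5 + 5 * 2 + 5 * 2)            # stand-in for a network output
print(mdn_sample(raw, rng))
```

Sampling from the mixture, rather than taking a single regressed point, is what lets an MDN represent several plausible continuations of a stroke at once.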
32 pages, 5713 KB  
Article
The Nexus Between Digital Finance, Automation, Environmental, Social, and Governance (ESG) Objectives: Evidence Based on a Bibliometric Analysis
by Oana-Alexandra Dragomirescu, George Eduard Grigore and Ana-Ramona Bologa
Information 2026, 17(2), 132; https://doi.org/10.3390/info17020132 - 1 Feb 2026
Viewed by 154
Abstract
The main purpose of this study was to conduct a bibliometric analysis of scientific knowledge and trends in modern finance. To this end, the analysis was based on the keywords “finance”, “automation”, and “ESG”. The analysis aimed to provide theoretical insights into the economic and financial implications of automation and its role in achieving ESG objectives. From a methodological standpoint, bibliometric research was conducted on 21 September 2025. It involved analysing a total of 16,500 scientific articles published between 1974 and 2026 in two databases: the Web of Science Core Collection and Scopus. The Bibliometrix tool (R, version 5.2.0) was used to generate visualisations. Thematic mapping, three-field plotting, keyword mapping, and clustering were the main methods used to analyse the associations between finance, automation, and ESG principles. The study’s results showed an average annual increase in publications of approximately 3.80% and 2.50% in the Web of Science Core Collection and Scopus, respectively, while international collaborations between researchers have become increasingly prominent in recent years. At the same time, the co-occurrence network analysis identified five key thematic clusters in the Web of Science Core Collection and three in Scopus. From a comparative perspective, these clusters highlight the most significant connections between environmental, social, and governance (ESG) performance, corporate social responsibility (CSR) impact, financial performance, economic growth, sustainable development, and the implications of the automation process. From a bibliometric point of view, this research contributes to a better understanding of the multiple digital transformations specific to the current financial framework, generating possible future research directions on the significant role of automation in financial, environmental, and social performance. Furthermore, automation is a critical component of the digital future of finance. Analysing and investigating the causal relationships between automation and ESG principles will necessitate new areas of study within the financial sphere.
(This article belongs to the Section Information Applications)
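The co-occurrence network analysis underlying the thematic clusters can be illustrated in a few lines; this is a minimal sketch, and the records and keywords below are illustrative, not the study's data.

```python
from itertools import combinations
from collections import Counter

# Illustrative records: author keywords per publication (not the study's corpus).
records = [
    ["finance", "automation", "esg"],
    ["esg", "csr", "financial performance"],
    ["automation", "esg", "sustainable development"],
]

# Count how often each unordered keyword pair appears in the same record.
cooc = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

for (a, b), n in cooc.most_common(5):
    print(f"{a} -- {b}: {n}")
```

Clustering the resulting weighted graph (e.g. by community detection) is what yields thematic clusters of the kind the study reports.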
21 pages, 2204 KB  
Article
Digitizing Micromaser Steady States: Entropy, Information Graphs, and Multipartite Correlations in Qubit Registers
by István Németh, Szilárd Zsóka and Attila Bencze
Entropy 2026, 28(2), 162; https://doi.org/10.3390/e28020162 - 31 Jan 2026
Viewed by 57
Abstract
We develop a digitization-based analysis workflow for characterizing the entropy and correlation structure of truncated bosonic quantum fields after embedding them into small qubit registers, and illustrate it on the steady state of a coherently pumped micromaser. The cavity field is truncated to 32 Fock levels and embedded into a five-qubit register via a Gray-code mapping of photon number to computational basis states, with binary encoding used as a benchmark. On this register we compute reduced entropies, mutual informations, bipartite negativities and Coffman–Kundu–Wootters three-tangles for all qubit pairs and triplets, and use the resulting patterns to define information graphs. The micromaser Liouvillian naturally supports trapping manifolds in Fock space, whose structure depends on the choice of interaction angle and on thermal coupling to the reservoir. We show that these manifolds leave a clear imprint on the digitized information graph: multi-block trapping configurations induce sparse, banded patterns dominated by a few two-qubit links, while trapping on a single 32-dimensional manifold or coupling to a thermally populated cavity leads to more delocalized and collectively shared correlations. The entropy and mutual-information profiles of the register provide a complementary view on how energy and information are distributed across qubits in different parameter regimes. Although the full micromaser dynamics can in principle generate higher-order entanglement, we focus here on well-defined measures of two- and three-party correlations and treat the emerging information graph as a structural probe of digitized field states. We expect the workflow to transfer to other bosonic fields encoded in small qubit registers, and outline how the resulting information-graph view can serve as a practical diagnostic in studies of driven-dissipative correlation structure.
(This article belongs to the Special Issue Dissipative Physical Dynamics)
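The Gray-code embedding of photon number into the register follows the standard binary-reflected construction. The sketch below maps 32 Fock levels onto five qubits and computes single-qubit entropies, assuming (purely for illustration) a diagonal state with a geometric photon-number distribution rather than the actual micromaser steady state.

```python
import numpy as np

def gray(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

N_FOCK, N_QUBITS = 32, 5

# Illustrative diagonal state over 32 Fock levels (a thermal-like tail); the
# actual micromaser steady state comes from the Liouvillian, not this ansatz.
p = 0.5 ** np.arange(N_FOCK)
p /= p.sum()

# Map Fock-level probabilities onto 5-qubit basis states via the Gray code.
p_reg = np.zeros(2 ** N_QUBITS)
for n, pn in enumerate(p):
    p_reg[gray(n)] += pn

def qubit_entropy(p_reg, q):
    """Entropy (bits) of one qubit's reduced state, valid for a diagonal register state."""
    mask = 1 << q
    p1 = sum(pr for idx, pr in enumerate(p_reg) if idx & mask)
    probs = np.array([1 - p1, p1])
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())

for q in range(N_QUBITS):
    print(f"qubit {q}: S = {qubit_entropy(p_reg, q):.3f} bits")
```

The appeal of the Gray code here is that adjacent photon numbers differ in exactly one qubit, which keeps nearest-neighbour Fock transitions local on the register.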
20 pages, 8142 KB  
Article
The Patos Lagoon Digital Twin—A Framework for Assessing and Mitigating Impacts of Extreme Flood Events in Southern Brazil
by Elisa Helena Fernandes, Glauber Gonçalves, Pablo Dias da Silva, Vitor Gervini and Éder Maier
Climate 2026, 14(2), 34; https://doi.org/10.3390/cli14020034 - 29 Jan 2026
Viewed by 179
Abstract
Recent projections by the Intergovernmental Panel on Climate Change indicate that global warming will persist and further intensify the severity and frequency of extreme weather events (heat waves, rain, and intense droughts), with coastal regions being the most vulnerable to extreme events. Therefore, the risk of natural disasters and the associated regional impacts on water, food, energy, social, and health security represents one of the world’s greatest challenges of this century. However, conventional methodologies for monitoring these regions during extreme events are usually not available to managers and decision-makers with the necessary urgency. The aim of this study was to present a framework concept for assessing extreme flood event impacts in coastal zones using a suite of field data combined with numerical (hydrological, meteorological, and hydrodynamic) and computational (flooding) models in a virtual environment that provides a replica of a natural environment: the Patos Lagoon Digital Twin. The study case was the extreme flood event that occurred in the southernmost region of Brazil in May 2024, considered the largest flooding event in 125 years of data. The hydrodynamic model calculated the water levels around Rio Grande City (MAE of 0.18 m). These results fed the flooding model, which projected the water over the digital elevation model of the city and produced predictions of flooding conditions on every street (ranging from a few centimeters up to 1.5 m) days before the flooding happened. The results were further customized to meet specific demands from the security forces and municipal civil defense, who evaluated the best alternatives for evacuation strategies and infrastructure safety during the May 2024 extreme flood event. Flood Safety Maps were also generated for all the terminals in the Port of Rio Grande, indicating that the terminals were 0.05 to 2.5 m above the flood level. Overall, this study contributes to a better understanding of the strengths of digital twin models in simulating the impacts of extreme flood events in coastal areas and provides valuable insights into the potential impacts of future climate change in coastal regions, particularly in southern Brazil. This knowledge is crucial for developing targeted strategies to increase regional resilience and sustainability, ensuring that adaptation measures are effectively tailored to anticipated climate impacts.
(This article belongs to the Section Climate Adaptation and Mitigation)
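The flooding step, projecting a modelled water level onto a digital elevation model, reduces in its simplest form to a clipped difference per grid cell. A minimal sketch with a synthetic DEM and an illustrative water level follows; the study's coupled models are far more detailed.

```python
import numpy as np

# Synthetic 1 m-resolution DEM of a small urban block (metres above datum);
# a real workflow would load the city's DEM raster instead.
rng = np.random.default_rng(1)
dem = 2.0 + rng.random((100, 100))

# Water level predicted by the hydrodynamic model for this area (illustrative).
water_level = 2.6

# Flood depth per cell: water surface minus ground, clipped at zero for dry cells.
depth = np.clip(water_level - dem, 0.0, None)

print(f"flooded cells: {(depth > 0).mean():.0%}, max depth: {depth.max():.2f} m")
```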
21 pages, 1305 KB  
Article
Cross-Learner Spectral Subset Optimisation: PLS–Ensemble Feature Selection with Weighted Borda Count for Grapevine Cultivar Discrimination
by Kyle Loggenberg, Albert Strever and Zahn Münch
Geomatics 2026, 6(1), 12; https://doi.org/10.3390/geomatics6010012 - 28 Jan 2026
Viewed by 87
Abstract
The mapping of vineyard cultivars presents a substantial challenge in digital agriculture due to the crop’s high intra-class heterogeneity and low inter-class variability. High-dimensional spectral datasets, such as hyperspectral or spectrometry data, can overcome these difficulties. However, research has yet to fully address the need for optimal spectral feature subsets tailored for grapevine cultivar discrimination, while few studies have systematically examined waveband subsets that transfer effectively across different learning algorithms. This study sets out to address these gaps by introducing a Partial Least Squares (PLS)-based ensemble feature selection framework with Weighted Borda Count aggregation for cultivar discrimination. Using in-field spectrometry data, collected for six cultivars, and 18 PLS-based feature selection methods spanning filter, wrapper, and hybrid approaches, the PLS–ensemble identified 100 wavebands most relevant for cultivar discrimination, reducing dimensionality by ~95%. The efficacy and transferability of this subset were evaluated using five classification algorithms: Oblique Random Forest (oRF), Multinomial Logistic Regression (Multinom), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and a 1D Convolutional Neural Network (CNN). For oRF, Multinom, SVM, and MLP, the PLS–ensemble subset improved accuracy by 0.3–12% compared with using all wavebands. The subset was not optimal for the 1D-CNN, where accuracy decreased by up to 5.7%. Additionally, this study investigated waveband binning to transform narrow hyperspectral bands into broadband spectral features. Using feature multicollinearity and wavelength position, the 100 selected wavebands were condensed into 10 broadband features, which improved accuracy over both the full dataset and the original subset, delivering gains of 4.5–19.1%. The SVM model with this 10-feature subset outperformed all other models (F1: 1.00; BACC: 0.98; MCC: 0.78; AUC: 0.95).
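Weighted Borda Count aggregation assigns each feature points according to its position in every ranking, scaled by the ranking's weight. A minimal sketch with illustrative rankings and weights follows; the study aggregates 18 PLS-based selectors.

```python
from collections import defaultdict

def weighted_borda(rankings, weights):
    """Aggregate ranked waveband lists: a feature ranked r-th out of m
    earns (m - r) points, scaled by that selector's weight."""
    scores = defaultdict(float)
    for ranking, w in zip(rankings, weights):
        m = len(ranking)
        for r, feature in enumerate(ranking):
            scores[feature] += w * (m - r)
    return sorted(scores, key=scores.get, reverse=True)

# Three selectors ranking five wavebands (nm), weighted (hypothetically) by
# each selector's validation accuracy.
rankings = [
    [705, 520, 1210, 680, 970],
    [520, 705, 680, 1210, 970],
    [705, 680, 520, 970, 1210],
]
weights = [0.82, 0.78, 0.90]
print(weighted_borda(rankings, weights))  # consensus order, best first
```

Taking the top k features of the consensus order is one way to obtain a subset such as the 100 wavebands reported above.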
22 pages, 1454 KB  
Review
Sustainability in Heritage Tourism: Evidence from Emerging Travel Destinations
by Sara Sampieri and Silvia Mazzetto
Heritage 2026, 9(2), 45; https://doi.org/10.3390/heritage9020045 - 27 Jan 2026
Viewed by 307
Abstract
This study examines the conceptualization of sustainability in heritage tourism in Saudi Arabia following the introduction of the Saudi Vision 2030 program and the country’s opening to tourism in 2019, both of which aim to diversify the economy and promote cultural heritage. A scoping review methodology based on the Arksey & O’Malley framework was adopted; data were charted according to the Joanna Briggs Institute (JBI) charting method based on the PRISMA-ScR reporting protocol. Publications from 2019 to 2025 were systematically collected through database and manual searches, resulting in 25 fully accessible studies that met the inclusion criteria. Data were analyzed thematically, revealing six main areas of investigation, encompassing both sustainability outcomes and cross-cutting implementation enablers: heritage conservation and tourism development, architecture and urban planning, policy and governance, community engagement, marketing and technology, and geoheritage and environmental sustainability. The findings indicate that Saudi research in this field is primarily qualitative, focusing on ecological aspects. The studies reveal limited integration of social and technological dimensions, with significant gaps identified in standardized sustainability indicators, longitudinal monitoring, policy implementation, and digital heritage tools. The originality of this study lies in its comprehensive mapping of Saudi heritage tourism sustainability research, highlighting emerging gaps and future agendas. The results also provide a roadmap for policymakers, managers, and scholars to enhance governance policies, community participation, and technological integration, which can contribute to sustainable tourism development in line with Saudi Vision 2030 goals, thereby fostering international competitiveness while preserving cultural and natural heritage.
27 pages, 4789 KB  
Article
Assessing Interaction Quality in Human–AI Dialogue: An Integrative Review and Multi-Layer Framework for Conversational Agents
by Luca Marconi, Luca Longo and Federico Cabitza
Mach. Learn. Knowl. Extr. 2026, 8(2), 28; https://doi.org/10.3390/make8020028 - 26 Jan 2026
Viewed by 378
Abstract
Conversational agents are transforming digital interactions across various domains, including healthcare, education, and customer service, thanks to advances in large language models (LLMs). As these systems become more autonomous and ubiquitous, understanding what constitutes high-quality interaction from a user perspective is increasingly critical. Despite growing empirical research, the field lacks a unified framework for defining, measuring, and designing user-perceived interaction quality in human–artificial intelligence (AI) dialogue. Here, we present an integrative review of 125 empirical studies published between 2017 and 2025, spanning text-, voice-, and LLM-powered systems. Our synthesis identifies three consistent layers of user judgment: a pragmatic core (usability, task effectiveness, and conversational competence), a social–affective layer (social presence, warmth, and synchronicity), and an accountability and inclusion layer (transparency, accessibility, and fairness). These insights are formalised into a four-layer interpretive framework—Capacity, Alignment, Levers, and Outcomes—operationalised via a Capacity × Alignment matrix that maps distinct success and failure regimes. It also identifies design levers such as anthropomorphism, role framing, and onboarding strategies. The framework consolidates constructs, positions inclusion and accountability as central to quality, and offers actionable guidance for evaluation and design. This research redefines interaction quality as a dialogic construct, shifting the focus from system performance to co-orchestrated, user-centred dialogue quality.
33 pages, 18247 KB  
Article
Learning Debris Flow Dynamics with a Deep Learning Fourier Neural Operator: Application to the Rendinara–Morino Area
by Mauricio Secchi, Antonio Pasculli, Massimo Mangifesta and Nicola Sciarra
Geosciences 2026, 16(2), 55; https://doi.org/10.3390/geosciences16020055 - 24 Jan 2026
Viewed by 219
Abstract
Accurate numerical simulation of debris flows is essential for hazard assessment and early-warning design, yet high-fidelity solvers remain computationally expensive, especially when large ensembles must be explored under epistemic uncertainty in rheology, initial conditions, and topography. At the same time, field observations are typically sparse and heterogeneous, limiting purely data-driven approaches. In this work, we develop a deep-learning Fourier Neural Operator (FNO) as a fast, physics-consistent surrogate for one-dimensional shallow-water debris-flow simulations and demonstrate its application to the Rendinara–Morino system in central Italy. A validated finite-volume solver, equipped with HLLC and Rusanov fluxes, hydrostatic reconstruction, Voellmy-type basal friction, and robust wet–dry treatment, is used to generate a large ensemble of synthetic simulations over longitudinal profiles representative of the study area. The parameter space of bulk density, initial flow thickness, and Voellmy friction coefficients is systematically sampled, and the resulting space–time fields of flow depth and velocity form the training dataset. A two-dimensional FNO in the (x,t) domain is trained to learn the full solution operator, mapping topography, rheological parameters, and initial conditions directly to h(x,t) and u(x,t), thereby acting as a site-specific digital twin of the numerical solver. On a held-out validation set, the surrogate achieves mean relative L2 errors of about 6–7% for flow depth and 10–15% for velocity, and it generalizes to an unseen longitudinal profile with comparable accuracy. We further show that targeted reweighting of the training objective significantly improves the prediction of the velocity field without degrading depth accuracy, reducing the velocity error on the unseen profile by more than a factor of two. Finally, the FNO provides speed-ups of approximately 36× with respect to the reference solver at inference time. These results demonstrate that combining physics-based synthetic data with operator-learning architectures enables the construction of accurate, computationally efficient, and site-adapted surrogates for debris-flow hazard analysis in data-scarce environments.
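The heart of a Fourier Neural Operator is a spectral convolution: transform to Fourier space, keep the lowest modes, multiply by learned complex weights, and transform back. A minimal 1-D, single-channel NumPy sketch with random, untrained weights follows; the paper's operator is 2-D in (x, t) and multi-channel.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One FNO spectral layer: FFT -> keep lowest modes -> weight -> inverse FFT."""
    u_hat = np.fft.rfft(u)                         # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # learned complex multipliers
    return np.fft.irfft(out_hat, n=u.size)         # back to physical space

rng = np.random.default_rng(0)
n, n_modes = 256, 16
u = np.sin(np.linspace(0, 4 * np.pi, n))           # stand-in for a flow-depth profile
w = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)  # untrained weights
print(spectral_conv_1d(u, w, n_modes).shape)       # (256,)
```

Because the weights act in Fourier space, the learned operator is resolution-independent, which is one reason FNO surrogates generalize across discretizations.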
27 pages, 7306 KB  
Article
Design and Implementation of the AquaMIB Unmanned Surface Vehicle for Real-Time GIS-Based Spatial Interpolation and Autonomous Water Quality Monitoring
by Huseyin Duran and Namık Kemal Sonmez
Appl. Sci. 2026, 16(3), 1209; https://doi.org/10.3390/app16031209 - 24 Jan 2026
Viewed by 168
Abstract
This article presents the design and implementation of an Unmanned Surface Vehicle (USV), named “AquaMIB”, which introduces a novel and integrated approach for real-time and autonomous water quality monitoring in aquatic environments. The system integrates modular hardware and software, combining sensors for temperature, pH, conductivity, dissolved oxygen, and oxidation reduction potential with GPS, LiDAR, a digital compass, communication modules, and a dedicated power unit. Software components include Python on a Raspberry Pi for navigation and control, C on an ATmega324P for sensing, C++ on an Arduino Uno for remote control, and C#/JavaScript for the web-based control center. Users assign task points, and the USV autonomously navigates, collects data, and transmits it via a RESTful API. Field trials showed 96.5% navigation accuracy over 2.2 km, with 66% of task points reached within 3 m. A total of 120 measurements were processed in real time and visualized as GIS-based spatial maps. The system demonstrates a cost-effective, modular solution for aquatic monitoring. The system’s ability to generate real-time GIS maps enables immediate identification of environmental anomalies, transforming raw sensor data into an actionable decision-support tool for aquatic management.
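The listing does not name the interpolation method used for the spatial maps; inverse distance weighting (IDW) is a common choice for this kind of real-time point-to-surface mapping, so the sketch below uses it as an assumption, with illustrative sensor readings.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of point measurements.

    xy_obs: (n, 2) sensor positions, z_obs: (n,) readings, xy_query: (m, 2)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)          # nearby measurements dominate
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

# Illustrative dissolved-oxygen readings (mg/L) at four task points.
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z_obs = np.array([7.8, 7.2, 6.9, 7.5])
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
print(idw(xy_obs, z_obs, grid).reshape(11, 11).round(2))
```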
45 pages, 2954 KB  
Review
A Review of Fault Diagnosis Methods: From Traditional Machine Learning to Large Language Model Fusion Paradigm
by Qingwei Nie, Junsai Geng and Changchun Liu
Sensors 2026, 26(2), 702; https://doi.org/10.3390/s26020702 - 21 Jan 2026
Viewed by 330
Abstract
Fault diagnosis is a core technology ensuring the safe and efficient operation of industrial systems. A paradigm shift has been observed wherein traditional signal analysis has been replaced by intelligent, algorithm-driven approaches. In recent years, large language models, digital twins, and knowledge graphs have been introduced, marking a new stage of intelligent integration characterized by data-driven methods, knowledge guidance, and physical–virtual fusion. This paper systematically reviews the evolution of fault diagnosis technologies, focusing on the theoretical methods and application practices of traditional machine learning, digital twins, knowledge graphs, and large language models. First, the research background, core objectives, and development history of fault diagnosis are described. Second, the principles, industrial applications, and limitations of supervised and unsupervised learning are analyzed. Third, innovative uses are examined involving physical–virtual mapping in digital twins, knowledge modeling in knowledge graphs, and feature learning in large language models. Subsequently, a multi-dimensional comparison framework is constructed to analyze the performance indicators, applicable scenarios, and collaborative potential of different technologies. Finally, the key challenges in the current fault diagnosis field, including data quality, model generalization, and knowledge reuse, are summarized, and future directions driven by the fusion of large language models, digital twins, and knowledge graphs are outlined. The review provides fault diagnosis researchers with a comprehensive, up-to-date technical map intended to support both theoretical innovation and the engineering deployment of intelligent fault diagnosis.
(This article belongs to the Section Fault Diagnosis & Sensors)
35 pages, 4895 KB  
Article
Circular Design for Made in Italy Furniture: A Digital Tool for Data and Materials Exchange
by Lorenzo Imbesi, Serena Baiani, Sabrina Lucibello, Emanuele Panizzi, Paola Altamura, Viktor Malakuczi, Luca D’Elia, Carmen Rotondi, Mariia Ershova, Gabriele Rossini and Alessandro Aiuti
Sustainability 2026, 18(2), 1061; https://doi.org/10.3390/su18021061 - 20 Jan 2026
Viewed by 156
Abstract
Despite European and international regulatory frameworks promoting circular economy principles, sustainability in the furniture sector is still challenged by the limited access to reliable information about circular materials for designers, manufacturers, and waste managers in the Made-in-Italy furniture ecosystem. This research develops a digital infrastructure to address these information gaps through a mixed methodology, combining desk research on regulatory frameworks and existing platforms; field research involving stakeholder mapping and interviews with designers, manufacturers, and waste managers; and the experimental development of AI-enhanced digital tools. The result integrates a web-based platform for circular materials with a CAD plugin supporting real-time sustainability assessment. AI-assisted data entry reduced form completion time while maintaining accuracy through human verification, and testing revealed that the system effectively bridges knowledge gaps between stakeholders operating in currently siloed value chains. The platform is a critical step in enabling designers to incorporate circular materials during the early design stages, while providing manufacturers access to verified, precise sustainability data compliant with mandatory Green Public Procurement criteria. Beyond the development of an innovative digital tool, the study outlines a corresponding operational model as a practical framework for strengthening the transition toward a circular economy in the Italian furniture industry.
16 pages, 8966 KB  
Article
Evaluating High-Resolution LiDAR DEMs for Flood Hazard Analysis: A Comparison with 1:5000 Topographic Maps
by Tae-Yun Kim, Seung-Jun Lee, Ji-Sung Kim, Seung-Ho Han and Hong-Sik Yun
Appl. Sci. 2026, 16(2), 1029; https://doi.org/10.3390/app16021029 - 20 Jan 2026
Viewed by 134
Abstract
Flood disasters are increasing worldwide due to climate change, posing growing risks to infrastructure and human life. Korea, where nearly 70% of annual rainfall occurs during the summer monsoon, is particularly vulnerable to extreme precipitation events intensified by El Niño and La Niña. This study investigates how terrain resolution influences flood simulation accuracy by comparing a 1 m LiDAR digital elevation model (DEM) with a DEM generated from a 1:5000 topographic map. Flood depth and velocity fields produced by the two DEMs show notable quantitative differences: for final flood depth, the 1:5000 DEM yields a mean absolute error of approximately 56.9 cm and an RMSE of 76.4 cm relative to LiDAR results, with substantial local over- and underestimations. Flow velocity and maximum velocity also show significant deviations, with RMSE values of 58.0 cm/s and 68.4 cm/s, respectively. Although the 1:5000 DEM captures the general inundation pattern, these discrepancies—particularly in narrow channels and urbanized floodplains—demonstrate that coarse-resolution terrain data cannot reliably reproduce hydrodynamic behavior. We conclude that while 1:5000 DEMs may be acceptable for reconnaissance-level hazard screening, high-resolution LiDAR DEMs are essential for accurate flood depth and velocity simulation, supporting their integration into engineering design, urban flood risk assessment, and disaster management frameworks.
(This article belongs to the Special Issue GIS-Based Spatial Analysis for Environmental Applications)
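The depth-error statistics quoted above are cell-wise comparisons of two flood-depth rasters. A minimal sketch follows, with synthetic arrays standing in for the LiDAR-based and map-based model outputs.

```python
import numpy as np

def mae_rmse(reference, test):
    """Cell-wise mean absolute error and root-mean-square error (input units)."""
    diff = test - reference
    return np.abs(diff).mean(), np.sqrt((diff ** 2).mean())

rng = np.random.default_rng(2)
depth_lidar = rng.uniform(0.0, 1.5, size=(200, 200))         # 1 m DEM result (m)
depth_map = depth_lidar + rng.normal(0.0, 0.7, (200, 200))   # coarser-DEM result (m)

mae, rmse = mae_rmse(depth_lidar, depth_map)
print(f"MAE = {mae * 100:.1f} cm, RMSE = {rmse * 100:.1f} cm")
```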
18 pages, 2295 KB  
Article
Automatic Retinal Nerve Fiber Segmentation and the Influence of Intersubject Variability in Ocular Parameters on the Mapping of Retinal Sites to the Pointwise Orientation Angles
by Diego Luján Villarreal and Adriana Leticia Vera-Tizatl
J. Imaging 2026, 12(1), 47; https://doi.org/10.3390/jimaging12010047 - 19 Jan 2026
Viewed by 201
Abstract
The current study investigates the influence of intersubject variability in ocular characteristics on the mapping of visual field (VF) sites to the pointwise directional angles in retinal nerve fiber layer (RNFL) bundle traces. In addition, the performance of the mapping of VF sites to the optic nerve head (ONH) was compared to ground-truth baselines. Fundus photographs of 546 eyes of 546 healthy subjects (with no history of ocular disease or diabetic retinopathy) were enhanced digitally, and RNFL bundle traces were segmented based on the Personalized Estimated Segmentation (PES) algorithm’s core technique. A 24-2 VF grid pattern was overlaid onto the photographs in order to relate VF test points to intersecting RNFL bundles. The PES algorithm effectively traced RNFL bundles in fundus images, achieving an average accuracy of 97.6% relative to the Jansonius map through the application of 10th-order Bezier curves. The PES algorithm assembled an average of 4726 RNFL bundles per fundus image based on 4975 sampling points, obtaining a total of 2,580,505 RNFL bundles based on 2,716,321 sampling points. The influence of ocular parameters could be evaluated for 34 out of 52 VF locations. The ONH–fovea angle and the ONH position in relation to the fovea were the most prominent predictors for variations in the mapping of retinal locations to the pointwise directional angle (p < 0.001). The variation explained by the model (R2 value) ranges from 27.6% for visual field location 15 to 77.8% in location 22, with a mean of 56%. Significant individual variability was found in the mapping of VF sites to the ONH, with a mean standard deviation (95% limit) of 16.55° (median 17.68°) for 50 out of 52 VF locations, ranging from less than 1° to 44.05°. The mean entry angles differed from previous baselines by a range of less than 1° to 23.9° (average difference of 10.6° ± 5.53°), with an RMSE of 11.94.
(This article belongs to the Section Medical Imaging)
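A 10th-order Bezier curve, as used for the bundle traces, is a weighted sum of 11 control points under the Bernstein basis. A minimal sketch with illustrative control points follows.

```python
import numpy as np
from math import comb

def bezier(control_points, t):
    """Evaluate an n-th order Bezier curve at parameters t in [0, 1].

    control_points: (n + 1, 2) array; returns (len(t), 2) curve samples."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    t = np.asarray(t)[:, None]
    curve = np.zeros((t.shape[0], pts.shape[1]))
    for i, p in enumerate(pts):
        basis = comb(n, i) * t ** i * (1 - t) ** (n - i)  # Bernstein polynomial
        curve += basis * p
    return curve

# Eleven illustrative control points -> a 10th-order curve, sampled at 100 points.
rng = np.random.default_rng(3)
ctrl = np.cumsum(rng.normal(size=(11, 2)), axis=0)
trace = bezier(ctrl, np.linspace(0.0, 1.0, 100))
print(trace.shape)  # (100, 2)
```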
27 pages, 3407 KB  
Article
The iPSM-SD Framework: Enhancing Predictive Soil Mapping for Precision Agriculture Through Spatial Proximity Integration
by Peng-Tao Guo, Wen-Tao Li, Mao-Fen Li, Pei-Sheng Yan, Yan Liu and Ju Zhao
Agronomy 2026, 16(2), 231; https://doi.org/10.3390/agronomy16020231 - 18 Jan 2026
Viewed by 195
Abstract
A key challenge in precision agriculture is acquiring reliable spatial soil information under varying sampling densities, from sparse surveys to intensive monitoring. The individual predictive soil mapping (iPSM) method performs well in data-scarce conditions but neglects spatial proximity, limiting its predictive accuracy where spatial autocorrelation exists. To overcome this, we developed an enhanced framework, iPSM-Spatial Distance (iPSM-SD), which systematically integrates spatial proximity through multiplicative (MUL) and additive (ADD) strategies. The framework was validated using two contrasting cases: sparse soil organic carbon density data from Yunnan Province (n = 118) and dense soil organic matter data from Bayi Farm (n = 2511). Results show that the additive model (iPSM-ADD) significantly outperformed the original iPSM and benchmark models, including random forest, regression kriging, geographically weighted regression, and multiple linear regression, under sufficient sampling, achieving an R2 of 0.86 and reducing RMSE by 46.6% at Bayi Farm. It also maintained robust accuracy under sparse sampling conditions. The iPSM-SD framework thus provides a unified and adaptive tool for digital soil mapping across a wide range of data availability, supporting scalable soil management decisions from regional assessment to field-scale variable-rate applications in precision agriculture.
(This article belongs to the Section Precision and Digital Agriculture)
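The listing does not give the iPSM-SD weighting formulas; the sketch below captures the general idea of combining environmental similarity with spatial proximity multiplicatively (MUL) or additively (ADD). The decay form, the alpha parameter, and the data are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def ipsm_sd_predict(env_sim, spat_sim, z_obs, mode="ADD", alpha=0.5):
    """Similarity-weighted prediction at one unsampled location.

    env_sim, spat_sim: (n,) similarities of n samples to the target location,
    both in [0, 1]; z_obs: (n,) observed soil values; alpha is hypothetical."""
    if mode == "MUL":
        w = env_sim * spat_sim                        # multiplicative integration
    else:
        w = alpha * env_sim + (1 - alpha) * spat_sim  # additive integration
    return (w * z_obs).sum() / w.sum()

env_sim = np.array([0.9, 0.4, 0.7])    # similarity in covariate space
dist = np.array([120.0, 40.0, 300.0])  # metres to the target location
spat_sim = np.exp(-dist / 100.0)       # proximity decays with distance (assumed form)
z_obs = np.array([2.1, 1.4, 2.8])      # e.g. SOC density observations

print(ipsm_sd_predict(env_sim, spat_sim, z_obs, mode="ADD"))
print(ipsm_sd_predict(env_sim, spat_sim, z_obs, mode="MUL"))
```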
58 pages, 606 KB  
Review
The Pervasiveness of Digital Identity: Surveying Themes, Trends, and Ontological Foundations
by Matthew Comb and Andrew Martin
Information 2026, 17(1), 85; https://doi.org/10.3390/info17010085 - 13 Jan 2026
Viewed by 303
Abstract
Digital identity operates as the connective infrastructure of the digital age, linking individuals, organisations, and devices into networks through which services, rights, and responsibilities are transacted. Despite this centrality, the field remains fragmented, with technical solutions, disciplinary perspectives, and regulatory approaches often developing in parallel without interoperability. This paper presents a systematic survey of digital identity research, drawing on a Scopus-indexed baseline corpus of 2551 publications spanning full years 2005–2024, complemented by a recent stratum of 1241 publications (2023–2025) used to surface contemporary thematic structure and inform the ontology-oriented synthesis. The survey contributes in three ways. First, it provides an integrated overview of the digital identity landscape, tracing influential and widely cited works, historical developments, and recent scholarship across technical, legal, organisational, and cultural domains. Second, it applies natural language processing and subject metadata to identify thematic patterns, disciplinary emphases, and influential authors, exposing trends and cross-field connections difficult to capture through manual review. Third, it consolidates recurring concepts and relationships into ontological fragments (illustrative concept maps and subgraphs) that surface candidate entities, processes, and contexts as signals for future formalisation and alignment of fragmented approaches. By clarifying how digital identity has been conceptualised and where gaps remain, the study provides a foundation for progress toward a universal digital identity that is coherent, interoperable, and socially inclusive.
(This article belongs to the Section Information and Communications Technology)
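A common way to surface thematic patterns of the kind described is TF-IDF vectorisation followed by clustering. A minimal scikit-learn sketch with an illustrative four-document corpus follows; the survey's actual NLP pipeline is not specified in the listing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Illustrative abstracts; the survey's corpus is 2551 Scopus-indexed publications.
docs = [
    "self-sovereign identity with verifiable credentials on a blockchain",
    "federated identity management and single sign-on protocols",
    "biometric authentication and privacy regulation for digital identity",
    "decentralised identifiers and wallet-based credential exchange",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignment per document
```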