Search Results (43)

Search Parameters:
Keywords = map user expertise

37 pages, 3165 KB  
Systematic Review
No One-Size-Fits-All: A Systematic Review of LCA Software and a Selection Framework
by Veridiana Souza da Silva Alves, Vivian Karina Bianchini, Barbara Stolte Bezerra, Carlos do Amaral Razzino, Fernanda Neves da Silva Andrade and Sofia Seniciato Neme
Sustainability 2026, 18(1), 197; https://doi.org/10.3390/su18010197 - 24 Dec 2025
Viewed by 439
Abstract
Life Cycle Assessment (LCA) is a fundamental methodology for evaluating environmental impacts across the life cycle of products, processes, and services. However, selecting appropriate LCA software is a complex task due to the wide variety of tools, each with different functionalities, sectoral focuses, and technical requirements. This study conducts a systematic literature review, following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, to map the main characteristics, strengths, and limitations of LCA tools. The review includes 41 studies published between 2017 and 2025, identifying and categorizing 24 different tools. Technical and operational features were analyzed, such as modelling capacity, database compatibility, usability, integration capabilities, costs, and user requirements. Among the tools, five stood out for their frequent application: SimaPro, GaBi, OpenLCA, Umberto, and Athena. SimaPro is recognized for flexibility and robustness; GaBi for its industrial applications and Environmental Product Declaration (EPD) support; OpenLCA for being open-source and accessible; Umberto for energy and process modelling; and Athena for integration with Building Information Modelling (BIM) in construction. Despite their advantages, all tools presented specific limitations, including learning curve challenges and limited scope. The results show that no single tool fits all scenarios. In addition to the synthesis of these characteristics, this study also discusses the general features of the identified software and the challenges in making a well-supported selection decision, and proposes a decision flowchart designed to guide users through key selection criteria. This visual tool aims to support a more transparent, systematic, and context-oriented choice of LCA software, aligning capabilities with project-specific needs. Tool selection should align with research objectives, available expertise, and context. This review offers practical guidance for enhancing LCA applications in sustainability science. Full article

16 pages, 18470 KB  
Article
EyeInvaS: Lowering Barriers to Public Participation in Invasive Alien Species Monitoring Through Deep Learning
by Hao Chen, Jiaogen Zhou, Wenbiao Wu, Changhui Xu and Yanzhu Ji
Animals 2025, 15(21), 3181; https://doi.org/10.3390/ani15213181 - 31 Oct 2025
Viewed by 520
Abstract
Invasive alien species (IASs) pose escalating threats to global ecosystems, biodiversity, and human well-being. Public participation in IAS monitoring is often limited by taxonomic expertise gaps. To address this, we established a multi-taxa image dataset covering 54 key IAS in China, benchmarked nine deep learning models, and quantified impacts of varying scenarios and target scales. EfficientNetV2 achieved superior accuracy, with F1-scores of 83.66% (original dataset) and 93.32% (hybrid dataset). Recognition accuracy peaked when targets occupied 60% of the frame against simple backgrounds. Leveraging these findings, we developed EyeInvaS, an AI-powered system integrating image acquisition, recognition, geotagging, and data sharing to democratize IAS surveillance. Crucially, in a large-scale public deployment in Huai’an, China, 1683 user submissions via EyeInvaS enabled mapping of Solidago canadensis, revealing strong associations with riverbanks and roads. Our results validate the feasibility of deep learning in empowering citizens in IAS surveillance and biodiversity governance. Full article
(This article belongs to the Section Animal System and Management)
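The abstract above reports model quality as F1-scores. As a reminder of what that metric aggregates, here is a minimal macro-averaged F1 sketch; the three classes and their (tp, fp, fn) counts are invented for illustration and are far smaller than the paper's 54-species dataset.

```python
# Standard macro-averaged F1 from per-class true positive, false
# positive, and false negative counts. All counts below are synthetic.

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

per_class = [(90, 10, 5), (80, 20, 20), (70, 5, 30)]   # (tp, fp, fn) per class
macro_f1 = sum(f1(*c) for c in per_class) / len(per_class)
```

Because F1 simplifies to 2·tp / (2·tp + fp + fn), the first class above scores 180/195 ≈ 0.923.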

19 pages, 4228 KB  
Article
Density-Based Spatial Clustering of Vegetation Fire Points Based on Genetic Optimization of Threshold Values
by Xuan Gao, Tao Wang and Ke Xie
Fire 2025, 8(11), 431; https://doi.org/10.3390/fire8110431 - 31 Oct 2025
Viewed by 864
Abstract
Vegetation fires are among the most common natural disasters, posing significant threats to people and the natural environment worldwide. Density-based clustering methods can be used to identify geospatial clustering patterns of fire points, helping to reveal the spatial distribution characteristics of wildfires, which are crucial for region-specific fire mapping, prediction, mitigation, and protection. DBSCAN (density-based spatial clustering of applications with noise) is widely used for clustering spatial objects. It needs two user-determined threshold values: the local radius and the minimum number of neighboring points for core points, which require user expertise and background information. This work proposes a dual-population genetic optimization to determine threshold values of DBSCAN for clustering vegetation fire points in western China. By constructing randomly generated threshold populations, optimized threshold values are obtained through crossover, mutation, and inter-population exchange, measured by multiple clustering metrics. Focusing on vegetation wildfires in western China during 2016–2022, the results reveal that vegetation wildfires can be divided into eight regions, each exhibiting distinct spatiotemporal patterns and geographic contexts. Full article
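The two DBSCAN thresholds the paper optimizes, the local radius (eps) and the minimum core-point neighbour count (min_pts), are easiest to see in a compact pure-Python sketch. The points and threshold values below are invented for illustration; the genetic search itself is not shown.

```python
# Minimal DBSCAN sketch (pure Python). Labels are cluster ids 0, 1, ...
# or -1 for noise. Neighbourhoods include the point itself.
from math import dist

def dbscan(points, eps, min_pts):
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1              # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neigh if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border point: joins cluster, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jneigh = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(jneigh) >= min_pts:  # core point: expand the cluster
                seeds.extend(jneigh)
    return labels

# Two dense groups plus one isolated (noise) point, all synthetic.
pts = [(0, 0), (0.2, 0), (0, 0.2), (5, 5), (5.2, 5), (5, 5.2), (20, 20)]
labels = dbscan(pts, eps=0.5, min_pts=2)
```

Varying eps and min_pts changes which points count as core, which is exactly why the paper treats them as parameters worth optimizing rather than hand-picking.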

28 pages, 38011 KB  
Article
On the Use of LLMs for GIS-Based Spatial Analysis
by Roberto Pierdicca, Nikhil Muralikrishna, Flavio Tonetto and Alessandro Ghianda
ISPRS Int. J. Geo-Inf. 2025, 14(10), 401; https://doi.org/10.3390/ijgi14100401 - 14 Oct 2025
Cited by 2 | Viewed by 3411
Abstract
This paper presents an approach integrating Large Language Models (LLMs), specifically GPT-4 and the open-source DeepSeek-R1, into Geographic Information System (GIS) workflows to enhance the accessibility, flexibility, and efficiency of spatial analysis tasks. We designed and implemented a system capable of interpreting natural language instructions provided by users and translating them into automated GIS workflows through dynamically generated Python scripts. An interactive graphical user interface (GUI), built using CustomTkinter, was developed to enable intuitive user interaction with GIS data and processes, reducing the need for advanced programming or technical expertise. We conducted an empirical evaluation of this approach through a comparative case study involving typical GIS tasks such as spatial data validation, data merging, buffer analysis, and thematic mapping using urban datasets from Pesaro, Italy. The performance of our automated system was directly compared against traditional manual workflows executed by 10 experienced GIS analysts. The results from this evaluation indicate a substantial reduction in task completion time, decreasing from approximately 1 h and 45 min in the manual approach to roughly 27 min using our LLM-driven automation, without compromising analytical quality or accuracy. Furthermore, we systematically evaluated the system’s factual reliability using a diverse set of geospatial queries, confirming robust performance for practical GIS tasks. Additionally, qualitative feedback emphasized improved usability and accessibility, particularly for users without specialized GIS training. These findings highlight the significant potential of integrating LLMs into GISs, demonstrating clear advantages in workflow automation, user-friendliness, and broader adoption of advanced spatial analysis methodologies. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
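A minimal sketch of the natural-language-to-script dispatch pattern such a system relies on, with `call_llm` left as a hypothetical stand-in for a real GPT-4 or DeepSeek-R1 client. The prompt layout, function names, and layer paths are illustrative assumptions, not the authors' implementation.

```python
# Assemble a prompt from the user's task plus the layers the GUI knows
# about, and strip a Markdown fence from the model's reply. The system
# prompt wording is invented for illustration.

SYSTEM_PROMPT = (
    "You are a GIS assistant. Reply with a single runnable Python script "
    "that uses geopandas. Do not add commentary outside the code."
)
FENCE = "`" * 3  # Markdown code-fence delimiter

def build_prompt(task, layers):
    """Combine the user's request with the available layer paths."""
    layer_lines = "\n".join(f"- {name}: {path}" for name, path in layers.items())
    return f"{SYSTEM_PROMPT}\n\nAvailable layers:\n{layer_lines}\n\nTask: {task}"

def extract_script(reply):
    """Pull the code out of a fenced block if the model wrapped it in one."""
    if FENCE in reply:
        reply = reply.split(FENCE)[1]
        reply = reply.removeprefix("python").lstrip("\n")
    return reply.strip()

prompt = build_prompt(
    "Buffer all bus stops by 300 m and count stops per district.",
    {"bus_stops": "stops.gpkg", "districts": "districts.gpkg"},
)
# script = extract_script(call_llm(prompt))  # call_llm is hypothetical
```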

12 pages, 3911 KB  
Article
Study Area Map Generator: A Web-Based Shiny Application for Generating Country-Level Study Area Maps for Scientific Publications
by Cesar Ivan Alvarez, Juan Gabriel Mollocana-Lara, Izar Sinde-González and Ana Claudia Teodoro
ISPRS Int. J. Geo-Inf. 2025, 14(10), 387; https://doi.org/10.3390/ijgi14100387 - 3 Oct 2025
Viewed by 2556
Abstract
The increasing demand for high-quality geospatial visualizations in scientific publications has highlighted the need for accessible and standardized tools that support reproducible research. Researchers from various disciplines—often without expertise in Geographic Information Systems (GIS)—frequently require a map figure to locate their study area. This paper presents the Study Area Map Generator, a web-based application developed using Shiny for Python, designed to automate the creation of country- and city-level study area maps. The tool integrates geospatial data processing, cartographic rendering, and user-friendly customization features within a browser-based interface. It enables users—regardless of GIS proficiency—to generate publication-ready maps with customizable titles, basemaps, and inset views. A usability survey involving 92 participants from diverse professional and geographic-based backgrounds revealed high levels of satisfaction, ease of use, and perceived usefulness, with no significant differences across GIS experience levels. The application has already been adopted in academic and policy contexts, particularly in low-resource settings, demonstrating its potential to democratize access to cartographic tools. By aligning with open science principles and supporting reproducible workflows, the Study Area Map Generator contributes to more equitable and efficient scientific communication. The application is freely available online. Future developments include support for subnational units, thematic overlays, multilingual interfaces, and enhanced export options. Full article
(This article belongs to the Special Issue Cartography and Geovisual Analytics)

31 pages, 1209 KB  
Article
MiMapper: A Cloud-Based Multi-Hazard Mapping Tool for Nepal
by Catherine A. Price, Morgan Jones, Neil F. Glasser, John M. Reynolds and Rijan B. Kayastha
GeoHazards 2025, 6(4), 63; https://doi.org/10.3390/geohazards6040063 - 3 Oct 2025
Viewed by 1602
Abstract
Nepal is highly susceptible to natural hazards, including earthquakes, flooding, and landslides, all of which may occur independently or in combination. Climate change is projected to increase the frequency and intensity of these natural hazards, posing growing risks to Nepal’s infrastructure and development. To the authors’ knowledge, the majority of existing geohazard research in Nepal is typically limited to single hazards or localised areas. To address this gap, MiMapper was developed as a cloud-based, open-access multi-hazard mapping tool covering the full national extent. Built on Google Earth Engine and using only open-source spatial datasets, MiMapper applies an Analytical Hierarchy Process (AHP) to generate hazard indices for earthquakes, floods, and landslides. These indices are combined into an aggregated hazard layer and presented in an interactive, user-friendly web map that requires no prior GIS expertise. MiMapper uses a standardised hazard categorisation system for all layers, providing pixel-based scores for each layer between 0 (Very Low) and 1 (Very High). The modal and mean hazard categories for aggregated hazard in Nepal were Low (47.66% of pixels) and Medium (45.61% of pixels), respectively, but there was high spatial variability in hazard categories depending on hazard type. The validation of MiMapper’s flooding and landslide layers showed an accuracy of 0.412 and 0.668, sensitivity of 0.637 and 0.898, and precision of 0.116 and 0.627, respectively. These validation results show strong overall performance for landslide prediction, whilst broad-scale exposure patterns are predicted for flooding but may lack the resolution or sensitivity to fully represent real-world flood events. Consequently, MiMapper is a useful tool to support initial hazard screening by professionals in urban planning, infrastructure development, disaster management, and research. It can contribute to a Level 1 Integrated Geohazard Assessment as part of the evaluation for improving the resilience of hydropower schemes to the impacts of climate change. MiMapper also offers potential as a teaching tool for exploring hazard processes in data-limited, high-relief environments such as Nepal. Full article
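The AHP weighting step behind hazard indices like MiMapper's can be sketched briefly: factor weights are the principal eigenvector of a pairwise comparison matrix, found here by power iteration, with Saaty's consistency ratio as a sanity check. The 3x3 matrix (slope vs. rainfall vs. lithology) and the per-factor pixel scores are invented for illustration and are not taken from the paper.

```python
# AHP: derive factor weights from a pairwise comparison matrix and
# combine per-factor scores into a single hazard value in [0, 1].

def ahp_weights(m, iters=200):
    """Return (weights, consistency_ratio) for a square pairwise matrix m."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):               # power iteration toward the
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]           # principal eigenvector, normalized
    # Saaty's consistency check: lambda_max, CI, and CR (RI = 0.58 for n = 3).
    lam = sum(sum(m[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    cr = (lam - n) / (n - 1) / 0.58
    return w, cr

pairwise = [
    [1, 3, 5],          # slope judged 3x as important as rainfall, 5x as lithology
    [1 / 3, 1, 3],
    [1 / 5, 1 / 3, 1],
]
weights, cr = ahp_weights(pairwise)
hazard = weights[0] * 0.8 + weights[1] * 0.5 + weights[2] * 0.2  # weighted pixel score
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent; inconsistent matrices should be re-elicited rather than used.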

48 pages, 4222 KB  
Review
Machine Learning Models of the Geospatial Distribution of Groundwater Quality: A Systematic Review
by Mohammad Mehrabi, David A. Polya and Yang Han
Water 2025, 17(19), 2861; https://doi.org/10.3390/w17192861 - 30 Sep 2025
Viewed by 2898
Abstract
Assessing the quality of groundwater, a primary source of water in many sectors, is of paramount importance. To this end, modeling the geospatial distribution of chemical contaminants in groundwater can be of great utility. Machine learning (ML) models are being increasingly used to overcome the shortcomings of conventional predictive techniques. We report here a systematic review of the nature and utility of various supervised and unsupervised ML models during the past two decades of machine learning groundwater hazard mapping (MLGHM). We identified and reviewed 284 relevant MLGHM journal articles that met our inclusion criteria. Firstly, trend analysis showed (i) an exponential increase in the number of MLGHM studies published between 2004 and 2025, with geographical distribution outlining Iran, India, the US, and China as the countries with the most extensively studied areas; (ii) nitrate as the most studied target, and groundwater chemicals as the most frequently considered category of predictive variables; (iii) that tree-based ML was the most popular model for feature selection; (iv) that supervised ML was far more favored than unsupervised ML (94% vs. 6% of models) with the tree-based category—mostly random forest (RF)—as the most popular supervised ML. Secondly, compiling accuracy-based comparisons of ML models from the explored literature revealed that RF, deep learning, and ensembles (mostly meta-model ensembles and boosting ensembles) were frequently reported as the most accurate models. Thirdly, a critical evaluation of MLGHM models in terms of predictive accuracy, along with several other factors such as models’ computational efficiency and predictive power—which have often been overlooked in earlier review studies—resulted in considering the relative merits of commonly used MLGHM models. Accordingly, a flowchart was designed by integrating several MLGHM key criteria (i.e., accuracy, transparency, training speed, number of hyperparameters, intended scale of modeling, and required user’s expertise) to assist in informed model selection, recognising that the weighting of criteria for model selection may vary from problem to problem. Lastly, potential challenges that may arise during different stages of MLGHM efforts are discussed along with ideas for optimizing MLGHM models. Full article
(This article belongs to the Section Hydrogeology)

32 pages, 706 KB  
Review
Corporate Failure Prediction: A Literature Review of Altman Z-Score and Machine Learning Models Within a Technology Adoption Framework
by Christoph Braunsberger and Ewald Aschauer
J. Risk Financial Manag. 2025, 18(8), 465; https://doi.org/10.3390/jrfm18080465 - 20 Aug 2025
Cited by 1 | Viewed by 8239
Abstract
Research on corporate failure prediction is focused on increasing models’ statistical accuracy, most recently via the introduction of a variety of machine learning (ML)-based models, often overlooking the practical appeal and potential adoption barriers in the context of corporate management. This literature review compares ML models with the classic, widely accepted Altman Z-score through a technology adoption lens. We map how technological features, organizational readiness, environmental pressure and user perceptions shape adoption using an integrated technology adoption framework that combines the Technology–Organization–Environment framework with the Technology Acceptance Model. The analysis shows that Z-score models offer simplicity, interpretability and low cost, suiting firms with limited analytical resources, whereas ML models deliver superior accuracy and adaptability but require advanced data infrastructure, specialized expertise and regulatory clarity. By linking the models’ characteristics with adoption determinants, the study clarifies when each model is most appropriate and sets a research agenda for long-horizon forecasting, explainable artificial intelligence and context-specific model design. These insights help managers choose failure prediction tools that fit their strategic objectives and implementation capacity. Full article
(This article belongs to the Section Business and Entrepreneurship)
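The Altman Z-score the review takes as its baseline is simple enough to state in a few lines. The coefficients and zone cut-offs below are the classic published values for public manufacturing firms; the sample balance-sheet figures are invented.

```python
# Altman (1968) Z-score: five financial ratios with fixed coefficients,
# interpreted against the conventional distress / grey / safe bands.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # cumulative profitability
    x3 = ebit / total_assets                      # operating efficiency
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    """Altman's conventional interpretation bands."""
    if z > 2.99:
        return "safe"
    if z >= 1.81:
        return "grey"
    return "distress"

z = altman_z(working_capital=25, retained_earnings=40, ebit=18,
             market_value_equity=120, sales=150,
             total_assets=200, total_liabilities=80)
```

The fixed, published coefficients are exactly what makes the Z-score transparent and cheap to adopt, the trait the review contrasts with data-hungry ML models.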

22 pages, 1007 KB  
Systematic Review
Mapping Drone Applications in Rural and Regional Cities: A Scoping Review of the Australian State of Practice
by Christine Steinmetz-Weiss, Nancy Marshall, Kate Bishop and Yuan Wei
Appl. Sci. 2025, 15(15), 8519; https://doi.org/10.3390/app15158519 - 31 Jul 2025
Viewed by 1445
Abstract
Consumer-accessible and user-friendly smart products such as unmanned aerial vehicles (UAVs), or drones, have become widely used, adaptable, and acceptable devices to observe, assess, measure, and explore urban and natural environments. A drone’s relatively low cost and flexibility in the level of expertise required to operate it have enabled users from novices to industry professionals to adapt a malleable technology to various disciplines. This review examines the academic literature and maps how drones are currently being used in 93 rural and regional city councils in New South Wales, Australia. Through a systematic review of the academic literature and scrutiny of current drone use in these councils using publicly available information found on council websites, findings reveal potential uses of drone technology for local governments who want to engage with smart technology devices. We looked at how drones were being used in the management of the council’s environment; health and safety initiatives; infrastructure; planning; social and community programmes; and waste and recycling. These findings suggest that drone technology is increasingly being utilised in rural and regional areas. While the focus is on rural and regional New South Wales, a review of the academic literature and local council websites provides a snapshot of drone use examples that holds global relevance for local councils in urban and remote areas seeking to incorporate drone technology into their daily practice of city, town, or region governance. Full article

22 pages, 3885 KB  
Article
Enhancing Drone Navigation and Control: Gesture-Based Piloting, Obstacle Avoidance, and 3D Trajectory Mapping
by Ben Taylor, Mathew Allen, Preston Henson, Xu Gao, Haroon Malik and Pingping Zhu
Appl. Sci. 2025, 15(13), 7340; https://doi.org/10.3390/app15137340 - 30 Jun 2025
Viewed by 3386
Abstract
Autonomous drone navigation presents challenges for users unfamiliar with manual flight controls, increasing the risk of collisions. This research addresses this issue by developing a multifunctional drone control system that integrates hand gesture recognition, obstacle avoidance, and 3D mapping to improve accessibility and safety. The system utilizes Google’s MediaPipe Hands software library, which employs machine learning to track 21 key landmarks of the user’s hand, enabling gesture-based control of the drone. Each recognized gesture is mapped to a flight command, eliminating the need for a traditional controller. The obstacle avoidance system, utilizing the Flow Deck V2 and Multi-Ranger Deck, detects objects within a safety threshold and autonomously moves the drone a predefined avoidance distance away to prevent collisions. A mapping system continuously logs the drone’s flight path and detects obstacles, enabling 3D visualization of the drone’s trajectory after landing. An AI-Deck also streams live video, enabling navigation beyond the user’s direct line of sight. Experimental validation with the Crazyflie drone demonstrates seamless integration of these systems, providing a beginner-friendly experience where users can fly drones safely without prior expertise. This research enhances human–drone interaction, making drone technology more accessible for education, training, and intuitive navigation. Full article

21 pages, 24372 KB  
Article
Streamlining Haptic Design with Micro-Collision Haptic Map Generated by Stable Diffusion
by Hongyu Liu and Zhenyu Gu
Appl. Sci. 2025, 15(13), 7174; https://doi.org/10.3390/app15137174 - 26 Jun 2025
Viewed by 1224
Abstract
Rendering surface materials to provide realistic tactile sensations is a key focus in haptic interaction research. However, generating texture maps and designing corresponding haptic feedback often requires expert knowledge and significant effort. To simplify the workflow, we developed a micro-collision-based tactile texture dataset for several common materials and fine-tuned the VAE model of Stable Diffusion. Our approach allows designers to generate matching visual and haptic textures from natural language prompts and enables users to receive real-time, realistic haptic feedback when interacting with virtual surfaces. We evaluated our method through a haptic design task. Professional and non-haptic designers each created one haptic design using traditional tools and another using our approach. Participants then evaluated the four resulting designs. The results showed that our method produced haptic feedback comparable to that of professionals, though slightly lower in overall and consistency scores. Importantly, professional designers using our method required less time and fewer expert resources. Non-haptic designers also achieved better outcomes with our tool. Our generative method optimizes the haptic design workflow, lowering the expertise threshold and increasing efficiency. It has the potential to support broader adoption of haptic design in interactive media and enhance multisensory experiences. Full article

28 pages, 586 KB  
Review
Review and Mapping of Search-Based Approaches for Program Synthesis
by Takfarinas Saber and Ning Tao
Information 2025, 16(5), 401; https://doi.org/10.3390/info16050401 - 14 May 2025
Viewed by 3801
Abstract
Context: Program synthesis tools reduce software development costs by generating programs that perform tasks depicted by some specifications. Various methodologies have emerged for program synthesis, among which search-based algorithms have shown promising results. However, the proliferation of search-based program synthesis tools utilising diverse search algorithms and input types and targeting various programming tasks can overwhelm users seeking the most suitable tool. Objective: This paper contributes to the ongoing discourse by presenting a comprehensive review of search-based approaches employed for program synthesis. We aim to offer an understanding of the guiding principles of current methodologies by mapping them to the required type of user intent, the type of search algorithm, and the representation of the search space. Furthermore, we aim to map the diverse search algorithms to the type of code generation tasks in which they have shown success, which would serve as a guideline for applying search-based approaches for program synthesis. Method: We conducted a literature review of 67 academic papers on search-based program synthesis. Results: Through analysis, we identified and categorised the main techniques with their trends. We have also mapped and shed light on patterns connecting the problem, the representation and the search algorithm type. Conclusions: Our study summarises the field of search-based program synthesis and provides an entry point to the acumen and expertise of the search-based community on program synthesis. Full article
(This article belongs to the Section Information Applications)

33 pages, 2131 KB  
Article
Domain- and Language-Adaptable Natural Language Interface for Property Graphs
by Ioannis Tsampos and Emmanouil Marakakis
Computers 2025, 14(5), 183; https://doi.org/10.3390/computers14050183 - 9 May 2025
Viewed by 2053
Abstract
Despite the growing adoption of Property Graph Databases, like Neo4j, interacting with them remains difficult for non-technical users due to the reliance on formal query languages. Natural Language Interfaces (NLIs) address this by translating natural language (NL) into Cypher. However, existing solutions are typically limited to high-resource languages; are difficult to adapt to evolving domains with limited annotated data; and often depend on Machine Learning (ML) approaches, including Large Language Models (LLMs), that demand substantial computational resources and advanced expertise for training and maintenance. We address these limitations by introducing a novel dependency-based, training-free, schema-agnostic Natural Language Interface (NLI) that converts NL queries into Cypher for querying Property Graphs. Our system employs a modular pipeline integrating entity and relationship extraction, Named Entity Recognition (NER), semantic mapping, triple creation via syntactic dependencies, and validation against an automatically extracted Schema Graph. The distinctive feature of this approach is the reduction in candidate entity pairs using syntactic analysis and schema validation, eliminating the need for candidate query generation and ranking. The schema-agnostic design enables adaptation across domains and languages. Our system supports single- and multi-hop queries, conjunctions, comparisons, aggregations, and complex questions through an explainable process. Evaluations on real-world queries demonstrate reliable translation results. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
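The final step of such a pipeline, rendering a schema-validated (subject, relation, object) triple as Cypher, might look like the following sketch. The labels, relation name, and schema set are invented for illustration; the authors' actual templates are not given in the abstract.

```python
# Validate a triple against an extracted Schema Graph, then emit a
# parameterized Cypher query (parameters avoid string-injection issues).

SCHEMA = {("Person", "DIRECTED", "Movie")}   # edges the Schema Graph allows

def triple_to_cypher(subj_label, rel, obj_label, obj_name):
    if (subj_label, rel, obj_label) not in SCHEMA:
        raise ValueError(f"no {subj_label}-[:{rel}]->{obj_label} edge in schema")
    query = (
        f"MATCH (a:{subj_label})-[:{rel}]->(b:{obj_label} {{name: $name}}) "
        f"RETURN a"
    )
    return query, {"name": obj_name}

query, params = triple_to_cypher("Person", "DIRECTED", "Movie", "Alien")
```

Rejecting schema-invalid triples up front is what lets this style of system skip generating and ranking many candidate queries.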

15 pages, 2025 KB  
Article
Establishing Multi-Dimensional LC-MS Systems for Versatile Workflows to Analyze Therapeutic Antibodies at Different Molecular Levels in Routine Operations
by Katrin Heinrich, Sina Hoelterhoff, Saban Oezipek, Martin Winter, Tobias Rainer, Lucas Hourtoulle, Ingrid Grunert, Tobias Graf, Michael Leiss and Anja Bathke
Pharmaceuticals 2025, 18(3), 401; https://doi.org/10.3390/ph18030401 - 12 Mar 2025
Cited by 1 | Viewed by 2013
Abstract
Background/Objectives: Multi-dimensional liquid chromatography coupled with mass spectrometry (mD-LC-MS) has emerged as a powerful technique for the in-depth characterization of biopharmaceuticals by assessing chromatographically resolved product variants in a streamlined and semi-automated manner. The study aims to demystify and enhance the accessibility to this powerful but inherently complex technique by detailing a robust and user-friendly instrument platform, allowing analysts to switch seamlessly between intact, subunit, and peptide mapping workflows. Methods: Starting from a commercially available Two-Dimensional Liquid Chromatography (2D-LC) system, we introduce specific hardware and software extensions leading to two versatile mD-LC-MS setups, in slightly different configurations. The technique’s efficacy is demonstrated through a case study on a cation exchange chromatography method assessing the charge variants of a bispecific antibody, isolating peak(s) of interest, followed by online sample processing, including reduction and enzymatic digestion, and subsequently mass spectrometry analysis. Results: The accuracy and reproducibility of both mD-LC-MS setups proposed in this study were successfully tested. Despite the complex peak patterns in the first dimension, the systems were equally effective in identifying and quantifying the underlying product species. This case study highlights the routine usability of mD-LC-MS technology for the (ultra) high-performance liquid chromatography (UHPLC) characterization of therapeutic biomolecules. Conclusions: The demonstrated reliability and accuracy underscore the practicality of mD-LC-MS for routine use in biopharmaceutical analysis. Our detailed description of the mD-LC-MS systems and insights simplify access to this advanced technology for a broader scientific community, regardless of expertise level, and lower the entry barrier for its use in various research and industrial settings. Full article
(This article belongs to the Special Issue Advances in Drug Analysis and Drug Development)

23 pages, 17956 KB  
Article
Mobile Robots for Environment-Aware Navigation: A Code-Free Approach with Topometric Maps for Non-Expert Users
by Valeria Sarno, Elisa Stefanini, Giorgio Grioli and Lucia Pallottino
Robotics 2025, 14(2), 19; https://doi.org/10.3390/robotics14020019 - 4 Feb 2025
Cited by 1 | Viewed by 1551
Abstract
The growing use of mobile robots in unconventional environments demands new programming approaches that make them accessible to non-expert users. Traditional programming methods require specialized expertise in robotics and software development, limiting robots' accessibility to a broader audience. End-user robot programming has emerged to overcome these limitations, aiming to simplify robot programming through intuitive methods. In this work, we propose a code-free approach for programming mobile robots to autonomously execute navigation tasks, i.e., to reach a desired goal location from an arbitrary initial position. Our method relies on instructing the robot on new paths through demonstrations while creating and continuously updating a topometric map of the environment. Moreover, by leveraging the information gathered during the instruction phase, the robot can perceive slight environmental changes and autonomously make the best decision in response to unexpected situations (e.g., adjusting its path, stopping, or requesting user intervention). Experiments conducted in both simulated and real-world environments support the validity of our approach, showing that the robot successfully reaches its assigned goal location in the vast majority of cases.
(This article belongs to the Section Sensors and Control in Robotics)
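The topometric map described in this abstract combines a topological graph (waypoints connected by demonstrated paths) with metric pose information. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation: all class and method names are hypothetical, nodes store 2D poses, demonstrated paths become weighted edges, and goal reaching is planned with Dijkstra's algorithm over the graph.

```python
import heapq
import math

class TopometricMap:
    """Hypothetical topometric map: a topological graph whose nodes
    carry metric 2D poses and whose edges come from demonstrations."""

    def __init__(self):
        self.poses = {}   # node id -> (x, y) metric pose
        self.edges = {}   # node id -> {neighbor id: traversal cost}

    def add_node(self, nid, x, y):
        self.poses[nid] = (x, y)
        self.edges.setdefault(nid, {})

    def add_demonstrated_path(self, nids):
        # A user demonstration links consecutive waypoints; the edge
        # cost is the metric distance between their poses.
        for a, b in zip(nids, nids[1:]):
            d = math.dist(self.poses[a], self.poses[b])
            self.edges[a][b] = d
            self.edges[b][a] = d

    def plan(self, start, goal):
        # Dijkstra over the topological graph yields the waypoint
        # sequence the robot should follow to reach the goal.
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, math.inf):
                continue
            for v, w in self.edges[u].items():
                nd = d + w
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        if goal not in dist:
            return None  # unreachable: e.g., request user intervention
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]

m = TopometricMap()
m.add_node("dock", 0, 0)
m.add_node("hall", 2, 0)
m.add_node("lab", 2, 3)
m.add_demonstrated_path(["dock", "hall", "lab"])
route = m.plan("dock", "lab")  # waypoint sequence to the goal
```

Planning over the topological layer keeps the search cheap, while the stored metric poses would let a real controller execute each edge; the paper's system additionally updates this map continuously and reacts to environmental changes, which this sketch omits.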
