Search Results (331)

Search Parameters:
Keywords = SQL databases

20 pages, 502 KB  
Article
Design and Evaluation of a Retrieval-Augmented Generation LLM Chatbot with Structured Database Access
by Juan Burbano, Pablo Landeta-López, Cathy Guevara-Vega and Antonio Quiña-Mera
Appl. Sci. 2026, 16(7), 3147; https://doi.org/10.3390/app16073147 - 25 Mar 2026
Viewed by 216
Abstract
Context. The grocery sector is undergoing a massive shift in consumer behavior, with global chatbot usage projected to reach 8.4 billion units by 2024—surpassing the total human population—and online grocery revenue per shopper expected to hit USD 449.00 by 2023. In this competitive landscape, small grocery stores must adopt AI-driven tools to modernize their operations. However, these businesses often face significant inefficiencies in manual inventory management, resulting in errors and reduced competitiveness. Objective. This research aims to develop and validate a chatbot application using Large Language Models and Retrieval-Augmented Generation (RAG) for the operational management of grocery stores. Method. The method employed a quantitative experimental approach with a five-component system architecture: a web interface, a FastAPI API, a Mistral-7B-Instruct-v0.2 model, a dynamic SQL generator, and a custom RAG application with a FAISS vector database, all integrated through SQLAlchemy 2.0.40. Results. The results demonstrate that the chatbot achieves an average response time of 0.08 s with 80% overall accuracy, showing a 96.2% improvement in information query time and a 92.9% reduction in operational errors. Conclusions. The major conclusions suggest that the chatbot system is effective for retail environments and has the potential to enhance the operational efficiency of grocery stores, serving as a foundation for future research in applied conversational assistance.
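
A sketch may help make this entry's architecture concrete: a dispatcher that answers structured inventory questions with SQL and falls back to retrieval otherwise. Everything below (the inventory schema, the keyword intent check, the answer strings) is an illustrative assumption, not the authors' implementation, which uses Mistral-7B-Instruct and FAISS.

    import sqlite3

    # Toy stand-in for the paper's dynamic SQL generator: route structured
    # inventory questions to SQL, everything else to a RAG retrieval step.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (product TEXT, stock INTEGER, price REAL)")
    conn.executemany("INSERT INTO inventory VALUES (?, ?, ?)",
                     [("rice", 42, 1.80), ("milk", 7, 0.95)])

    def answer(question: str) -> str:
        q = question.lower()
        if "stock" in q or "how many" in q:       # crude intent check; the
            for product, stock in conn.execute(   # real system uses an LLM
                    "SELECT product, stock FROM inventory"):
                if product in q:
                    return f"{product}: {stock} units in stock"
        return "no structured match; fall back to RAG retrieval"

    print(answer("How many units of rice are in stock?"))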

28 pages, 1665 KB  
Article
The Use of Social Media as Bibliographic Citations in Open Access Education Journals
by Dimitris Rousidis, Emmanouel Garoufallou, Paraskevas Koukaras, Ilias Nitsos and Christos Tjortjis
Appl. Sci. 2026, 16(6), 3095; https://doi.org/10.3390/app16063095 - 23 Mar 2026
Viewed by 153
Abstract
There has been a recent increase in the use of social media platforms (SMPs), as well as a large increase in scientific journals and academic article publications. We need to study if and how much academics, scholars and researchers trust SMPs as sources, i.e., citations, for writing their research articles. The purpose of this research is to explore the relationship between SMPs and bibliographic article citations over the ten years from 2010 to 2019, with 31 December 2019 marking the official identification of COVID-19, a milestone that affected the whole world, including academic publishing. By using a citation retrieval tool written in Java, the citations referring to the URLs of 6432 articles from 14 Q1 open access education journals ranked by the SCImago platform were extracted. The retrieved URLs were stored in a relational database, preprocessed and cleaned, and analyzed using SQL queries to identify and quantify citations originating from SMPs. The findings showed 112 instances of an SMP post being used as a citation, corresponding to 1.8% of the articles. Out of the 17 SMPs checked, eight were used; the most popular was YouTube, accounting for 68% of those 112 citations, followed by Twitter (now X) with approximately 13.5% and Facebook with around 7%. Most of these in-text citations were found in the Introduction and Design/Methodology sections of the papers. Other important findings of this study were that about 2% of the URL citations referred to blogs and wikis and that one in 100 articles used Wikipedia in the bibliography. Over the 26-year period from 1999 to 2024, the number of journals increased by 82.8%, while the number of open access journals showed an impressive 552.14% increase. The findings of this study could lead to changes in the metadata design of bibliographic databases, such as how they are searched, and to a review of the life-cycle duration of sustainable access to the content of cited SMPs.
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)
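
The study's core mechanical step, SQL aggregation over a table of extracted citation URLs, is compact enough to sketch. The urls table, the toy rows, and the platform domain list below are assumptions for illustration, not the authors' schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE urls (article_id INTEGER, url TEXT)")
    conn.executemany("INSERT INTO urls VALUES (?, ?)", [
        (1, "https://www.youtube.com/watch?v=abc123"),
        (1, "https://twitter.com/someone/status/1"),
        (2, "https://doi.org/10.1000/xyz"),  # non-SMP citation, filtered out
    ])

    # Count citations per platform by matching URL substrings, mirroring
    # the kind of SQL analysis the abstract describes.
    query = """
    SELECT CASE
             WHEN url LIKE '%youtube.com%'  THEN 'YouTube'
             WHEN url LIKE '%twitter.com%'  THEN 'Twitter/X'
             WHEN url LIKE '%facebook.com%' THEN 'Facebook'
           END AS platform,
           COUNT(*) AS n_citations
    FROM urls
    WHERE url LIKE '%youtube.com%' OR url LIKE '%twitter.com%'
       OR url LIKE '%facebook.com%'
    GROUP BY platform
    ORDER BY n_citations DESC
    """
    for platform, n in conn.execute(query):
        print(platform, n)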

22 pages, 2209 KB  
Article
Predictive Traumatic Brain Injury Model for Determining Discharge Disposition and Infection Outcomes: A Machine Learning Approach Developed from the National Trauma Data Bank
by Asher Ralphs, Constana Gracia, Devesh Sarda, Subhajit Chakrabarty, Navdeep Samra, Bharat Guthikonda, Deepak Kumbhare and Julie Schwertfeger
Trauma Care 2026, 6(1), 6; https://doi.org/10.3390/traumacare6010006 - 19 Mar 2026
Viewed by 141
Abstract
Background/Objectives: Traumatic brain injury (TBI) affects more than 50 million people annually worldwide. Challenges in managing moderate-to-severe TBI include high rates of hospital-acquired infections and substantial variability in discharge disposition, and these combined challenges contribute significantly to the cost and trajectory of health recovery. Although current strategies such as antibiotic-impregnated external ventricular drains (EVDs) offer some benefit in controlling infections, they remain limited by high cost and inconsistent implementation. A clearer understanding of clinical and demographic factors associated with infection risk and discharge disposition is essential for improving care pathways. This study aims to identify and quantify key determinants of infection and discharge outcomes in patients with TBI. Methods: The National Trauma Data Bank (NTDB) was queried using structured query language (SQL) based on predefined inclusion criteria (adult patients with ICD-coded TBI), input variables (basic demographics, injury location and severity, and vital signs), and specified outcome variables (emergency department discharge disposition, infection, and sepsis) to identify and filter the eligible patient cohort. A set of machine learning models was trained for each outcome (e.g., Emergency Department (ED) discharge, types of infections, and sepsis). Results: Data from 310,494 patients were extracted. The prediction model we developed, the Predictive TBI-Disposition Model (PTDM), was able to predict the outcome of a patient’s discharge with 96% accuracy. The accuracy of the models for infection and sepsis was 93% and 94%, respectively. Conclusions: Demographic and clinical factors significantly influence the discharge disposition and infection risk among TBI patients. Machine learning models demonstrated strong predictive performance, suggesting their utility in early risk stratification and targeted clinical decision-making.
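
The cohort-construction step (SQL filters over inclusion criteria) can be sketched directly; the trauma_cases table, its columns, and the toy rows are hypothetical, since the abstract does not give the NTDB schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE trauma_cases (
        age INTEGER, icd10 TEXT, gcs INTEGER,
        discharge TEXT, infection INTEGER, sepsis INTEGER)""")
    conn.executemany("INSERT INTO trauma_cases VALUES (?, ?, ?, ?, ?, ?)", [
        (54, "S06.5X0A", 9, "rehab", 1, 0),   # toy rows, not NTDB data
        (16, "S06.0X0A", 15, "home", 0, 0),   # excluded: under 18
    ])

    # Inclusion filter in the spirit of the abstract: adults with an
    # ICD-10-coded TBI (S06.*); GCS <= 12 approximates moderate-to-severe.
    cohort = conn.execute("""
        SELECT age, icd10, gcs, discharge, infection, sepsis
        FROM trauma_cases
        WHERE age >= 18 AND icd10 LIKE 'S06%' AND gcs <= 12
    """).fetchall()
    print(len(cohort), "eligible case(s)")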

23 pages, 17441 KB  
Article
A Method for Automated Crop Health Monitoring in Large Areas Using Multi-Spectral Images and Deep Convolutional Neural Networks
by Oscar Andrés Martínez, Kevin David Ortega Quiñones and German Andrés Holguin-Londoño
AgriEngineering 2026, 8(3), 109; https://doi.org/10.3390/agriengineering8030109 - 13 Mar 2026
Viewed by 291
Abstract
Crop monitoring over large land extensions represents a central challenge in precision agriculture, especially in polyculture contexts where species with different nutritional needs are combined. This study presents a methodology to manage and analyze large volumes of multispectral images captured by unmanned aerial vehicles (UAVs) in order to identify and monitor crops at the plant level. The images are efficiently stored and retrieved using a Hilbert curve, which reduces the complexity of the search process from O(n²) to O(log n), where n represents the number of indexed data points. The system connects to a distributed Structured Query Language (SQL) database, allowing for fast image retrieval based on GPS coordinates and other metadata. Additionally, the Normalized Difference Vegetation Index (NDVI) is calculated using reflectance data from the red and near-infrared channels, adjusted by semantic segmentation masks generated with a U-Net model, which allows for species-specific evaluations. The methodology was evaluated on a 20,000 m² polyculture farm with coffee, avocado, and plantain crops, using a dataset of 270 aerial images partitioned into 70% for training and 30% for validation. The results show improvements in retrieval speed and precision with the Hilbert Space-Filling Curve (HSFC) approach, and an accuracy of 82.3% and a Mean Intersection over Union (MIoU) of 68.4% in species detection with the U-Net model. Overall, this integrated framework demonstrates scalable potential for precision agriculture in complex polyculture systems, facilitating efficient data management and targeted crop interventions.
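
The Hilbert-curve idea is that mapping 2D GPS coordinates onto one sortable integer lets an ordinary B-tree index answer spatial lookups. A minimal sketch using the classic xy2d algorithm follows; the grid order and the lat/lon normalization are illustrative choices, not the paper's parameters.

    def xy2d(n: int, x: int, y: int) -> int:
        """Distance of grid cell (x, y) along the Hilbert curve on an
        n-by-n grid, where n is a power of two (classic algorithm)."""
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                       # rotate the quadrant
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    def hilbert_key(lat: float, lon: float, order: int = 16) -> int:
        """Quantize lat/lon onto a 2^order grid and return the Hilbert
        index, usable as a single indexed column in a SQL table."""
        n = 1 << order
        x = int((lon + 180.0) / 360.0 * (n - 1))
        y = int((lat + 90.0) / 180.0 * (n - 1))
        return xy2d(n, x, y)

    print(hilbert_key(4.81, -75.69))   # example GPS coordinate

Nearby points usually receive nearby keys, which is what makes a range scan on the indexed key behave approximately like a spatial query.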

26 pages, 409 KB  
Article
Unified Data Governance in Heterogeneous Database Environments: An API-Driven Architecture for Multi-Platform Policy Enforcement
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Data 2026, 11(3), 54; https://doi.org/10.3390/data11030054 - 7 Mar 2026
Viewed by 455
Abstract
Modern organizations increasingly rely on heterogeneous database environments that combine relational, document-oriented, and key-value storage systems to optimize performance for diverse application requirements. However, this technological diversity creates significant challenges for implementing consistent data governance policies, regulatory compliance, and access control across disparate systems. Traditional governance approaches that operate within individual database silos fail to provide unified policy enforcement and create compliance gaps that expose organizations to regulatory and operational risks. This paper presents a novel API-driven architecture that enables unified data governance across heterogeneous database environments without requiring database-specific modifications or vendor lock-in. The proposed framework implements a centralized governance layer that coordinates policy enforcement across PostgreSQL, MongoDB, and Amazon DynamoDB systems through RESTful API interfaces. Key architectural components include differentiated access control through hierarchical API key management, automated compliance workflows for regulatory requirements such as GDPR, real-time audit trail generation, and comprehensive data quality monitoring with automated improvement mechanisms. Comprehensive experimental evaluation demonstrates the framework’s effectiveness across multiple operational dimensions. The system achieved 95.2% accuracy in access control enforcement across different data classification levels, while automated GDPR compliance workflows demonstrated 98.6% success rates with average processing times of 2.9 h. Performance evaluation reveals acceptable overhead characteristics with linear scaling patterns for PostgreSQL operations (R² = 0.89), consistent sub-20 ms response times for MongoDB logging operations, and sustained throughput rates ranging from 38.9 to 142.7 requests per second across the integrated system. Data quality improvements ranged from 16.1% to 34.3% across accuracy, completeness, consistency, and timeliness dimensions over a 12-week monitoring period, with accuracy improving by 17.8 percentage points, completeness by 13.2 percentage points, consistency by 19.7 percentage points, and timeliness by 24.5 percentage points. The duplicate detection system achieved 94.6% precision and 95.6% recall across various duplicate types, including cross-database redundancy identification. The results demonstrate that API-driven governance architectures can effectively address the persistent challenges of policy fragmentation in multi-database environments while maintaining operational performance and enabling measurable improvements in data quality and regulatory compliance. The framework provides a practical migration path for organizations seeking to implement comprehensive governance capabilities without replacing existing database infrastructure investments.
(This article belongs to the Section Information Systems and Data Management)
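
The central idea, one policy layer fronting several database backends, fits in a few lines. The Policy shape, classification levels, and key-to-role mapping below are invented for illustration and are not the authors' API.

    from dataclasses import dataclass

    # Hypothetical data classification levels, ordered by sensitivity.
    LEVELS = {"public": 0, "internal": 1, "confidential": 2}

    @dataclass
    class ApiKey:
        key: str
        max_level: str   # highest classification this key may read

    KEYS = {"k-analyst": ApiKey("k-analyst", "internal"),
            "k-admin":   ApiKey("k-admin",   "confidential")}

    def enforce(key: str, data_level: str, backend: str) -> bool:
        """Central check applied uniformly before any PostgreSQL, MongoDB,
        or DynamoDB call, so policy lives outside the databases."""
        holder = KEYS.get(key)
        allowed = (holder is not None
                   and LEVELS[holder.max_level] >= LEVELS[data_level])
        # Real-time audit trail: every decision is logged, allow or deny.
        print(f"audit: key={key} backend={backend} "
              f"level={data_level} allow={allowed}")
        return allowed

    enforce("k-analyst", "confidential", "postgresql")   # denied and audited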

20 pages, 4824 KB  
Article
CIR-SQL: A Dual-Model Intent Recognition Framework for Chinese Text-to-SQL
by Yao Wang, Huiyong Lv and Yurong Qian
AI 2026, 7(3), 91; https://doi.org/10.3390/ai7030091 - 4 Mar 2026
Viewed by 453
Abstract
In Industry 4.0 environments, operators and production managers frequently query industrial databases for production monitoring, quality control, and equipment maintenance using natural language. Existing Chinese NL2SQL systems often process semantic, program, and schema information in a single encoder, which leads to semantic-program interference and frequent structural or schema errors in the generated SQL. We present CIR-SQL, a dual-model framework that separates intent recognition from SQL generation via structured intermediate representations, decoupling semantic understanding from program synthesis. CIR-SQL employs a seven-category intent classification system (simple_select, count_query, filter_query, max_min_query, sort_query, join_query, group_by_query) and leverages large language models for intent recognition and structured information extraction. A three-level hierarchical backtracking strategy (SQL, context, intent) further improves robustness by correcting different error types. The architecture is particularly suited to Industry 4.0 scenarios where Chinese-speaking operators interact with complex industrial databases containing production data, quality metrics, and equipment status information.
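
Because the first stage maps each question to one of the seven intent classes before any SQL is written, a toy version of that mapping is easy to show. The templates and the keyword classifier are stand-ins for the paper's LLM-based recognizer and generator.

    # The seven intent categories named in the abstract, each paired with a
    # hypothetical SQL template the second-stage generator would fill in.
    TEMPLATES = {
        "simple_select":  "SELECT {cols} FROM {table}",
        "count_query":    "SELECT COUNT(*) FROM {table}",
        "filter_query":   "SELECT {cols} FROM {table} WHERE {cond}",
        "max_min_query":  "SELECT MAX({col}) FROM {table}",
        "sort_query":     "SELECT {cols} FROM {table} ORDER BY {col}",
        "join_query":     "SELECT {cols} FROM {t1} JOIN {t2} ON {on}",
        "group_by_query": "SELECT {col}, COUNT(*) FROM {table} GROUP BY {col}",
    }

    def classify(question: str) -> str:
        """Keyword stand-in for the paper's LLM intent recognizer."""
        q = question.lower()
        if "how many" in q or "count" in q:
            return "count_query"
        if "highest" in q or "maximum" in q:
            return "max_min_query"
        return "simple_select"

    intent = classify("How many machines reported faults today?")
    print(intent, "->", TEMPLATES[intent].format(table="fault_log"))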

27 pages, 2223 KB  
Article
Off-the-Shelf AAL—A Practical Approach to Face the Population Shift
by Gerhard Leitner
Appl. Sci. 2026, 16(5), 2251; https://doi.org/10.3390/app16052251 - 26 Feb 2026
Viewed by 228
Abstract
Although the concept of Active and Assisted Living (AAL) has been a prominent topic in academia and in industry for decades, the widespread adoption of related technologies remains well below expectations. The underlying causes are multifaceted. The installation and retrofitting of such systems typically require substantial financial investments, significant manual effort, and specialized expertise for setup and maintenance. Existing solutions lack flexibility and are difficult to tailor to the individual living situations and diverse needs of the primary target group, older adults. While state-of-the-art smart home platforms would, in principle, be capable of supporting a broad range of AAL functionalities and could be adapted to different usage contexts, much of the research in this domain has been conducted in artificial settings, such as laboratory environments or model houses, conditions that fail to fully capture the complexity and variability of real-world living environments of the elderly population. In this paper, we explore the potential, opportunities, and limitations of integrating low-cost hardware with open-source software components in residential environments representative of older adults’ everyday lives. Our work is based on a longitudinal case study conducted over several years in an actual household, focusing on delivering fundamental AAL functionality. By documenting the iterative development and real-world deployment of the system, this study offers practical insights into the feasibility and challenges of implementing on-site AAL support under realistic conditions.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)

23 pages, 1201 KB  
Article
Comparative Read Performance Analysis of PostgreSQL and MongoDB in E-Commerce: An Empirical Study of Filtering and Analytical Queries
by Jovita Urnikienė, Vaida Steponavičienė and Svetoslav Atanasov
Big Data Cogn. Comput. 2026, 10(2), 66; https://doi.org/10.3390/bdcc10020066 - 19 Feb 2026
Viewed by 749
Abstract
This paper presents a comparative analysis of read performance for PostgreSQL and MongoDB in e-commerce scenarios, using identical datasets in a resource-constrained single-host environment. The results demonstrate that PostgreSQL executes complex analytical queries 1.6–15.1 times faster, depending on the query type and data volume. The study employed synthetic data generation with the Faker library across three stages, processing up to 300,000 products and executing each of 6 query types 15 times. Both filtering and analytical queries were tested on non-indexed data in a controlled localhost environment with PostgreSQL 17.5 and MongoDB 7.0.14, using default configurations. PostgreSQL showed 65–80% shorter execution times for multi-criteria queries, while MongoDB required approximately 33% less disk space. These findings suggest that normalized relational schemas are advantageous for transactional e-commerce systems where analytical queries dominate the workload. The results are directly applicable to small and medium e-commerce developers operating in budget-constrained, single-host deployment environments when choosing between relational and document-oriented databases for structured transactional data with read-heavy analytical workloads. A minimal indexed validation confirms that the baseline trends remain consistent under a simple indexing configuration. Future work will examine broader indexing strategies, write-intensive workloads, and distributed deployment scenarios.
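
The measurement protocol (each query repeated 15 times on non-indexed data) is straightforward to mirror. The run_query placeholder below stands in for an actual PostgreSQL or MongoDB call, which the abstract does not spell out.

    import time, statistics

    def benchmark(run_query, repeats: int = 15):
        """Time one query the way the study does: repeat it, then report
        median and spread rather than a single noisy measurement."""
        samples = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            run_query()
            samples.append(time.perf_counter() - t0)
        return statistics.median(samples), max(samples)

    # Placeholder workload; swap in a real PostgreSQL or MongoDB query.
    median_s, worst_s = benchmark(lambda: sum(i * i for i in range(100_000)))
    print(f"median={median_s:.4f}s worst={worst_s:.4f}s")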

33 pages, 4781 KB  
Article
Modeling Multi-Sensor Daily Fire Events in Brazil: The DescrEVE Relational Framework for Wildfire Monitoring
by Henrique Bernini, Fabiano Morelli, Fabrício Galende Marques de Carvalho, Guilherme dos Santos Benedito, William Max dos Santos Silva Silva and Samuel Lucas Vieira de Melo
Remote Sens. 2026, 18(4), 606; https://doi.org/10.3390/rs18040606 - 14 Feb 2026
Viewed by 476
Abstract
Wildfire monitoring in tropical regions requires robust frameworks capable of transforming heterogeneous satellite detections into consistent, event-level information suitable for decision support. This study presents the DescrEVE Fogo (Descrição de Eventos de Fogo) framework, a relational and scalable system that models daily fire events in Brazil by integrating Advanced Very High Resolution Radiometer (AVHRR), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Visible Infrared Imaging Radiometer Suite (VIIRS) active-fire detections within a unified Structured Query Language (SQL)/PostGIS environment. The framework formalizes a mathematical and computational model that defines and tracks fire fronts and multi-day fire events based on explicit spatio-temporal rules and geometry-based operations. Using database-native functions, DescrEVE Fogo aggregates daily fronts into events and computes intrinsic and environmental descriptors, including duration, incremental area, Fire Radiative Power (FRP), number of fronts, rainless days, and fire risk. Applied to the 2003–2025 archive of the Brazilian National Institute for Space Research (INPE) Queimadas Program, the framework reveals that the integration of VIIRS increases the fraction of multi-front events and enhances detectability of larger and longer-lived events, while the overall regime remains dominated by small, short-lived occurrences. A simple, prototype fire-type rule distinguishes new isolated fire events, possible incipient wildfires, and wildfires, indicating that fewer than 10% of events account for more than 40% of the area proxy and nearly 60% of maximum FRP. For the 2025 operational year, daily ignition counts show strong temporal coherence with the Global Fire Emissions Database version 5 (GFEDv5), albeit with a systematic positive bias reflecting differences in sensors and event definitions. A case study of the 2020 Pantanal wildfire illustrates how front-level metrics and environmental indicators can be combined to characterize persistence, spread, and climatic coupling. Overall, the database-native design provides a transparent and reproducible basis for large-scale, near-real-time wildfire analysis in Brazil, while current limitations in sensor homogeneity, typology, and validation point to clear avenues for future refinement and operational integration.
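
A sketch of the database-native style the abstract describes, clustering one day's detections into fronts entirely in SQL, may be useful. The detections table, column names, 2.5 km radius, and metric projection are assumptions for illustration, not the DescrEVE schema.

    # Group one day's active-fire detections into fronts with PostGIS,
    # in the spirit of the abstract (not the authors' actual code).
    # Assumes geom is in a metric projection, so eps=2500 means 2.5 km.
    DAILY_FRONTS_SQL = """
    SELECT ST_ClusterDBSCAN(geom, 2500, 1) OVER () AS front_id,
           acq_date,
           frp
    FROM detections
    WHERE acq_date = %(day)s;   -- psycopg2-style parameter
    """

    # A second, equally database-native pass would aggregate fronts into
    # events and compute descriptors such as total FRP and footprint.
    EVENT_METRICS_SQL = """
    SELECT front_id,
           COUNT(*)                        AS n_detections,
           SUM(frp)                        AS total_frp,
           ST_ConvexHull(ST_Collect(geom)) AS footprint
    FROM daily_fronts
    GROUP BY front_id;
    """
    print(DAILY_FRONTS_SQL)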

21 pages, 551 KB  
Article
Agentic RAG for Maritime AIoT: Natural Language Access to Structured Data
by Oxana Sachenkova, Melker Andreasson, Dongzhu Tan and Alisa Lincke
Sensors 2026, 26(4), 1227; https://doi.org/10.3390/s26041227 - 13 Feb 2026
Viewed by 480
Abstract
Maritime operations are increasingly reliant on sensor data to drive efficiency and enhance decision-making. However, despite rapid advances in large language models, including expanded context windows and stronger generative capabilities, critical industrial settings still require secure, role-constrained access to enterprise data and explicit limitation of model context. Retrieval-Augmented Generation (RAG) remains essential to enforce data minimization, preserve privacy, support verifiability, and meet regulatory obligations by retrieving only permissioned, provenance-tracked slices of information at query time. Yet current RAG solutions lack robust validation protocols for numerical accuracy in high-stakes industrial applications. This paper introduces Lighthouse Bot, a novel Agentic RAG system specifically designed to provide natural-language access to complex maritime sensor data, including time-series and relational sensor data. The system addresses a critical need for verifiable autonomous data analysis within the Artificial Intelligence of Things (AIoT) domain, which we explore through a case study on optimizing ferry operations. We present a detailed architecture that integrates a Large Language Model with a specialized database and coding agents to transform natural language into executable tasks, enabling core AIoT capabilities such as generating Python code for time-series analysis, executing complex SQL queries on relational sensor databases, and automating workflows, while keeping sensitive data outside the prompt and ensuring auditable, policy-aligned tool use. To evaluate performance, we designed a test suite of 24 questions with ground-truth answers, categorized by query complexity (simple, moderate, complex) and data interaction type (retrieval, aggregation, analysis). Our results show robust, controlled data access with high factual fidelity: the proprietary Claude 3.7 achieved close to 90% overall factual correctness, while the open-source Qwen 72B achieved 66% overall and 99% on simple retrieval and aggregation queries. These findings underscore the need for a secure limited-context RAG in maritime AIoT and the potential for cost-effective automation of routine exploratory analyses.
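
The routing of a natural-language request to either a SQL agent or a code-generating analysis agent can be caricatured in a few lines; the keyword router below is a deliberately crude stand-in for the system's LLM planner, and the tool names are invented.

    def route(question: str) -> str:
        """Toy planner: pick a tool the way an agentic system would,
        but with keywords instead of a language model."""
        q = question.lower()
        if "trend" in q or "over time" in q:
            return "python_timeseries_agent"   # generates analysis code
        if "average" in q or "total" in q or "between" in q:
            return "sql_agent"                 # runs a relational query
        return "retrieval_agent"               # falls back to document RAG

    for q in ("Total fuel consumption between ports A and B?",
              "Show the engine temperature trend over time."):
        print(q, "->", route(q))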

20 pages, 682 KB  
Article
Semantic Search for System Dynamics Models Using Vector Embeddings in a Cloud Microservices Environment
by Pavel Kyurkchiev, Anton Iliev and Nikolay Kyurkchiev
Future Internet 2026, 18(2), 86; https://doi.org/10.3390/fi18020086 - 5 Feb 2026
Viewed by 579
Abstract
Efficient retrieval of mathematical and structural similarities in System Dynamics models remains a significant challenge for traditional lexical systems, which often fail to capture the contextual dependencies of simulation processes. This paper presents an architectural approach and implementation of a semantic search module integrated into an existing cloud-based modeling and simulation system. The proposed method employs a strategy for serializing graph structures into textual descriptions, followed by the generation of vector embeddings via local ONNX inference and indexing within a vector database (Qdrant). Experimental validation, performed on a diverse corpus of complex dynamic models, compares the proposed approach against traditional information retrieval methods (Full-Text Search, Keyword Search in PostgreSQL, and Apache Lucene with Standard and BM25 scoring). The results demonstrate the distinct advantage of semantic search, achieving high precision (over 90%) within the scope of the evaluated corpus and effectively eliminating information noise. In comparison, keyword search exhibited only 24.8% precision with a significant rate of false positives, while standard full-text analysis failed to identify relevant models for complex conceptual queries (0 results). Despite a recorded increase in latency (~2 s), the study proves that the vector-based approach is a significantly more robust solution for detecting hidden semantic connections in mathematical model databases, providing a foundation for future developments toward multi-vector indexing strategies.
(This article belongs to the Special Issue Intelligent Agents and Their Application)
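
The pipeline's first step, serializing a model graph into text so it can be embedded and searched, is the part worth sketching. Below, TF-IDF vectors stand in for the ONNX embeddings and an in-memory cosine comparison stands in for Qdrant; the two toy models are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def serialize(model: dict) -> str:
        """Flatten a stock-and-flow graph into a textual description."""
        stocks = " ".join(f"stock {s}" for s in model["stocks"])
        flows = " ".join(f"flow {a} to {b}" for a, b in model["flows"])
        return stocks + " " + flows

    models = {
        "epidemic":  {"stocks": ["susceptible", "infected"],
                      "flows": [("susceptible", "infected")]},
        "inventory": {"stocks": ["warehouse", "shipped"],
                      "flows": [("warehouse", "shipped")]},
    }
    corpus = {name: serialize(m) for name, m in models.items()}

    vec = TfidfVectorizer()
    matrix = vec.fit_transform(list(corpus.values()))
    query = vec.transform(["flow from susceptible to infected"])
    for name, score in zip(corpus, cosine_similarity(query, matrix)[0]):
        print(f"{name}: {score:.3f}")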

18 pages, 1445 KB  
Article
Adaptive Thermostat Setpoint Prediction Using IoT and Machine Learning in Smart Buildings
by Fatemeh Mosleh, Ali A. Hamidi, Hamidreza Abootalebi Jahromi and Md Atiqur Rahman Ahad
Automation 2026, 7(1), 29; https://doi.org/10.3390/automation7010029 - 5 Feb 2026
Viewed by 692
Abstract
Increased global energy consumption contributes to higher operational costs in the energy sector and results in environmental deterioration. This study evaluates the effectiveness of integrating Internet of Things (IoT) sensors and machine learning techniques to predict adaptive thermostat setpoints to support behavior-aware Heating, Ventilation, and Air Conditioning (HVAC) operation in residential buildings. The dataset was collected over two years from 2080 IoT devices installed in 370 zones in two buildings in Halifax, Canada. Specific categories of real-time information, including indoor and outdoor temperature, humidity, thermostat setpoints, and window/door status, shaped the dataset of the study. Data preprocessing included retrieving data from the MySQL database and converting the data into an analytical format suitable for visualization and processing. In the machine learning phase, deep learning (DL) was employed to predict adaptive threshold settings (“from” and “to”) for the thermostats, and a gradient boosted trees (GBT) approach was used to predict heating and cooling thresholds. Standard metrics (RMSE, MAE, and R²) were used to evaluate effective prediction for adaptive thermostat setpoints. A comparative analysis between the GBT “from” and “to” models and the deep learning (DL) model was performed to assess the accuracy of prediction. Deep learning achieved the highest performance, reducing the MAE value by about 9% in comparison to the strongest GBT model (1.12 vs. 1.23) and reaching an R² value of up to 0.60, indicating improved predictive accuracy under real-world building conditions. The results indicate that IoT-driven setpoint prediction provides a practical foundation for behavior-aware thermostat modeling and future adaptive HVAC control strategies in smart buildings. This study focuses on setpoint prediction under real operational conditions and does not evaluate automated HVAC control or assess actual energy savings.
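
A minimal version of the gradient-boosted-trees stage can be shown with scikit-learn; the four feature columns and the toy rows are invented stand-ins for the two-building IoT dataset.

    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    # Columns: indoor temp, outdoor temp, humidity, window open (0/1).
    X = [[21.0, -5.0, 40, 0], [23.5, 10.0, 55, 1],
         [19.0, -12.0, 35, 0], [22.0, 5.0, 50, 0],
         [20.5, -8.0, 45, 1], [24.0, 15.0, 60, 0]]
    y = [22.0, 21.0, 23.0, 21.5, 22.5, 20.5]   # "to" setpoint targets (toy)

    model = GradientBoostingRegressor(n_estimators=50, max_depth=2,
                                      random_state=0).fit(X, y)
    pred = model.predict([[20.0, -10.0, 42, 0]])
    print(f"suggested setpoint: {pred[0]:.1f} C")
    print("train MAE:", mean_absolute_error(y, model.predict(X)))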

29 pages, 2594 KB  
Article
The Value Addition of Healthcare 4.0 Loyalty Programs: Implications for Logistics Management
by Maria João Vieira, Ana Luísa Ramos and João Amaral
Logistics 2026, 10(2), 30; https://doi.org/10.3390/logistics10020030 - 26 Jan 2026
Viewed by 559
Abstract
Background: Digital transformation is reshaping healthcare operations, with loyalty programs increasingly used to strengthen patient engagement and streamline administrative workflows. However, fragmented information systems and manual verification routines continue to create bottlenecks, inconsistencies, and extended lead times. Methods: This study applies a mixed-methods approach within the Business Process Management (BPM) lifecycle to redesign the eligibility verification process for a loyalty program at Casa de Saúde São Mateus Hospital. Quantitative time measurements were collected during peak periods, while qualitative insights from staff observations and discussions supported process discovery and bottleneck identification. The proposed solution integrates a centralized SQL database, automated verification routines, and a dedicated administrative interface synchronized with the MedicineOne system. Results: The redesigned process reduced eligibility verification time by approximately 80% and improved Flow Efficiency by around 11.7%. Manual interventions, data fragmentation, and discount-application errors decreased substantially. The centralized database improved data reliability, while automated checks enhanced consistency and reduced staff workload. The system also enabled more accurate beneficiary management and improved coordination across administrative activities. Conclusions: Integrating Healthcare 4.0 principles with BPM enhances internal logistics, reduces lead times, and improves operational reliability. The proposed model offers a replicable framework for modernizing healthcare service delivery.
(This article belongs to the Section Humanitarian and Healthcare Logistics)
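
The redesigned eligibility check amounts to a single lookup against the centralized database. A sketch of such an automated routine follows; the beneficiaries table and its columns are hypothetical, as the abstract does not give the schema.

    import sqlite3
    from datetime import date

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE beneficiaries (
        card_no TEXT PRIMARY KEY, name TEXT, valid_until TEXT)""")
    conn.execute(
        "INSERT INTO beneficiaries VALUES ('C-001', 'A. Silva', '2099-12-31')")

    def is_eligible(card_no: str) -> bool:
        """One indexed lookup replaces the manual verification routine."""
        row = conn.execute(
            "SELECT valid_until FROM beneficiaries WHERE card_no = ?",
            (card_no,)).fetchone()
        return row is not None and date.fromisoformat(row[0]) >= date.today()

    print(is_eligible("C-001"), is_eligible("C-999"))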

26 pages, 1629 KB  
Article
Performance Evaluation of MongoDB and RavenDB in IIoT-Inspired Data-Intensive Mobile and Web Applications
by Mădălina Ciumac, Cornelia Aurora Győrödi, Robert Ștefan Győrödi and Felicia Mirabela Costea
Future Internet 2026, 18(1), 57; https://doi.org/10.3390/fi18010057 - 20 Jan 2026
Viewed by 480
Abstract
The exponential growth of data generated by modern digital applications, including systems inspired by Industrial Internet of Things (IIoT) requirements, has accelerated the adoption of NoSQL databases due to their scalability, flexibility, and performance advantages over traditional relational systems. Among document-oriented solutions, MongoDB and RavenDB stand out due to their architectural features and their ability to manage dynamic, large-scale datasets. This paper presents a comparative analysis of MongoDB and RavenDB, focusing on the performance of fundamental CRUD (Create, Read, Update, Delete) operations. To ensure a controlled performance evaluation, a mobile and web application for managing product orders was implemented as a case study inspired by IIoT data characteristics, such as high data volume and frequent transactional operations, with experiments conducted on datasets ranging from 1000 to 1,000,000 records. Beyond the core CRUD evaluation, the study also investigates advanced operational scenarios, including join processing strategies (lookup versus document inclusion), bulk data ingestion techniques, aggregation performance, and full-text search capabilities. These complementary tests provide deeper insight into the systems’ architectural strengths and their behavior under more complex and data-intensive workloads. The experimental results highlight MongoDB’s consistent performance advantage in terms of response time, particularly with large data volumes, while RavenDB demonstrates competitive behavior and offers additional benefits such as built-in ACID compliance, automatic indexing, and optimized mechanisms for relational retrieval and bulk ingestion. The analysis does not propose a new benchmarking methodology but provides practical insights for selecting an appropriate document-oriented database for data-intensive mobile and web application contexts, including IIoT-inspired data characteristics, based on a controlled single-node experimental setting, while acknowledging the limitations of a single-host experimental environment.
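
One of the tested scenarios, bulk ingestion versus per-document inserts, is simple to reproduce on the MongoDB side with pymongo (the RavenDB client is omitted since its API is not described here). The connection URI and collection names are placeholders, and a local MongoDB instance is assumed.

    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # placeholder URI
    orders = client["shop"]["orders"]
    docs = [{"order_id": i, "qty": i % 5 + 1} for i in range(10_000)]

    orders.drop()
    t0 = time.perf_counter()
    for d in docs:                        # naive path: one round trip each
        orders.insert_one(dict(d))       # copy, since insert_one adds _id
    one_by_one = time.perf_counter() - t0

    orders.drop()
    t0 = time.perf_counter()
    orders.insert_many([dict(d) for d in docs])   # batched ingestion
    bulk = time.perf_counter() - t0
    print(f"insert_one loop: {one_by_one:.2f}s, insert_many: {bulk:.2f}s")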

29 pages, 2803 KB  
Article
Benchmarking SQL and NoSQL Persistence in Microservices Under Variable Workloads
by Nenad Pantelic, Ljiljana Matic, Lazar Jakovljevic, Stefan Eric, Milan Eric, Miladin Stefanović and Aleksandar Djordjevic
Future Internet 2026, 18(1), 53; https://doi.org/10.3390/fi18010053 - 15 Jan 2026
Viewed by 829
Abstract
This paper presents a controlled comparative evaluation of SQL and NoSQL persistence mechanisms in containerized microservice architectures under variable workload conditions. Three persistence configurations—SQL with indexing, SQL without indexing, and a document-oriented NoSQL database, including supplementary hybrid SQL variants used for robustness analysis—are assessed across read-dominant, write-dominant, and mixed workloads, with concurrency levels ranging from low to high contention. The experimental setup is fully containerized and executed in a single-node environment to isolate persistence-layer behavior and ensure reproducibility. System performance is evaluated using multiple metrics, including percentile-based latency (p95), throughput, CPU utilization, and memory consumption. The results reveal distinct performance trade-offs among the evaluated configurations, highlighting the sensitivity of persistence mechanisms to workload composition and concurrency intensity. In particular, indexing strategies significantly affect read-heavy scenarios, while document-oriented persistence demonstrates advantages under write-intensive workloads. The findings emphasize the importance of workload-aware persistence selection in microservice-based systems and support the adoption of polyglot persistence strategies. Rather than providing absolute performance benchmarks, the study focuses on comparative behavioral trends that can inform architectural decision-making in practical microservice deployments.
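
Since the study reports percentile-based latency, a short note on computing p95 from raw samples may help; numpy's percentile is one common convention among several.

    import numpy as np

    # Per-request latencies in milliseconds from one workload run (toy data).
    latencies_ms = np.array([12.1, 9.8, 15.3, 11.0, 48.7, 10.2, 13.9,
                             9.5, 102.4, 12.8, 11.7, 14.2])

    # p95: the value below which 95% of requests complete; unlike the mean,
    # it is dominated by tail behavior under contention.
    print("p95  =", np.percentile(latencies_ms, 95), "ms")
    print("mean =", latencies_ms.mean(), "ms")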
