Search Results (254)

Search Parameters:
Keywords = big data stores

27 pages, 3230 KB  
Article
Enhanced MQTT Protocol for Securing Big Data/Hadoop Data Management
by Ferdaous Kamoun-Abid and Amel Meddeb-Makhlouf
J. Sens. Actuator Netw. 2026, 15(1), 22; https://doi.org/10.3390/jsan15010022 - 16 Feb 2026
Viewed by 435
Abstract
Big data has significantly transformed data processing and analytics across various domains. However, ensuring security and data confidentiality in distributed platforms such as Hadoop remains a challenging task. Distributed environments face major security issues, particularly in the management and protection of large-scale data. In this article, we focus on the cost of secure information transmission, implementation complexity, and scalability. Furthermore, we address the confidentiality of information stored in Hadoop by analyzing different AES encryption modes and examining their potential to enhance Hadoop security. At the application layer, we operate within our Hadoop environment using an extended, secure, and widely used MQTT protocol for large-scale data communication. This approach is based on implementing MQTT with TLS; before a client connects, we add a hash verification of the DataNodes' identities and send a JWT. The protocol uses TCP at the transport layer for underlying transmission; TCP's reliability and small header size make it particularly suitable for big data environments. This work proposes a triple-layer protection framework. The first layer assesses the performance of existing AES encryption modes (CTR, CBC, and GCM) with different key sizes to optimize data confidentiality and processing efficiency in large-scale Hadoop deployments. The second layer evaluates the integrity of DataNodes using a novel verification mechanism that employs SHA-3-256 hashing to authenticate nodes and prevent unauthorized access during cluster initialization. The third layer ensures the integrity of data blocks within Hadoop, also using SHA-3-256. Through extensive performance testing and security validation, we demonstrate the practicality of this integration.
(This article belongs to the Section Network Security and Privacy)
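
The node-authentication step this abstract describes can be illustrated with a minimal sketch. The registry contents, node names, and identity-material format below are illustrative assumptions, not details from the paper; only the use of SHA-3-256 digests to verify DataNode identities before they join the cluster follows the abstract.

```python
import hashlib
import hmac

# Hypothetical registry of SHA-3-256 identity digests for authorized DataNodes,
# built at cluster provisioning time (names and values are illustrative).
AUTHORIZED_NODES = {
    "datanode-01": hashlib.sha3_256(b"datanode-01:9866:rack-a").hexdigest(),
}

def verify_node_identity(node_id: str, identity_material: bytes) -> bool:
    """Recompute the SHA-3-256 digest of a joining node's identity material
    and compare it, in constant time, against the provisioned value."""
    expected = AUTHORIZED_NODES.get(node_id)
    if expected is None:
        return False  # unknown node: reject before it joins the cluster
    actual = hashlib.sha3_256(identity_material).hexdigest()
    return hmac.compare_digest(expected, actual)

print(verify_node_identity("datanode-01", b"datanode-01:9866:rack-a"))  # True
print(verify_node_identity("datanode-01", b"datanode-01:9866:rack-b"))  # False
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking digest prefixes through timing, which matters when the check gates cluster membership.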

39 pages, 30009 KB  
Article
A Case Study on DNN-Based Surface Roughness QA Analysis of Hollow Metal AM Fabricated Parts in a DT-Enabled CW-GTAW Robotic Manufacturing Cell
by João Vítor A. Cabral, Alberto J. Alvares, Antonio Carlos da C. Facciolli and Guilherme C. de Carvalho
Sensors 2026, 26(1), 4; https://doi.org/10.3390/s26010004 - 19 Dec 2025
Viewed by 694
Abstract
In the context of Industry 4.0, new methods of manufacturing, monitoring, and data generation related to industrial processes have emerged. Over the last decade, Additive Manufacturing has been revolutionizing part production in various forms, including the more traditional Fused Deposition Modeling (FDM) and more innovative processes such as Laser Metal Deposition (LMD) and Wire Arc Additive Manufacturing (WAAM). New technologies for monitoring these processes are also emerging, such as Cyber-Physical Systems (CPSs) and Digital Twins (DTs), which can enable Artificial Intelligence (AI)-powered analysis of the big data they generate. However, few works have dealt with comprehensive, Digital Twin-based data analysis that studies the quality levels of manufactured parts using 3D models. With this background in mind, this project uses a Digital Twin-enabled dataflow as the basis for a proposed data analysis pipeline. The pipeline analyzes the surface roughness quality levels of metal AM-manufactured parts with a Deep Neural Network (DNN) model and enables the assessment and tuning of deposition parameters by comparing the parts' 3D representations, obtained by photogrammetry scanning, with the positional data acquired during the deposition process and stored in a cloud database. The stored and analyzed data may be further used to refine part manufacturing, sensor calibration, and the DT model itself. This work also presents a comprehensive study of experiments carried out using the CW-GTAW (Cold Wire Gas Tungsten Arc Welding) process as the means of depositing metal, resulting in hollow parts whose geometries were evaluated by means of both 3D scanned data, obtained via photogrammetry, and positional/deposition process parameters obtained from the Digital Twin architecture pipeline. Finally, an adapted PointNet DNN model was used to classify the surface roughness quality of point clouds into three classes (good, fair, and poor), obtaining an overall accuracy of 75.64% on real deposited metal parts.
(This article belongs to the Section Internet of Things)

28 pages, 1587 KB  
Article
Application of a Multi-Objective Optimization Algorithm Based on Differential Grouping to Financial Asset Allocation
by Peng Jia, Qiting Jiang, Haodong Wang, Weibin Guo, Weichao Ding and Zhe Wang
Appl. Sci. 2025, 15(21), 11341; https://doi.org/10.3390/app152111341 - 22 Oct 2025
Viewed by 1353
Abstract
In the era of big data and rapid information growth, investors encounter a complex financial environment characterized by extensive data, conflicting investment objectives, and markets that are unpredictable due to economic and policy fluctuations. Hence, asset selection is vital for both investors and researchers. Multi-objective optimization algorithms balance multiple objectives to find optimal solutions and are widely used in engineering, economics, etc. This paper proposes a multi-objective decomposition optimization algorithm integrated with differential grouping (DG-MOEA/D). Initially, the algorithm employs the recursive spectral clustering differential grouping (RDGSC) technique to identify dependencies among variables, grouping them to reduce interactions between the variables. It then uses MOEA/D-UTEA to optimize each group, with an external archive for storing and updating solutions. Experimental results on the DTLZ and LSMOP test functions show that the DG-MOEA/D algorithm greatly outperforms the other seven comparison algorithms. When used in real-world scenarios like stock and bond asset allocation, the algorithm continues to outperform other methods, demonstrating its significant advantages in practical applications.
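
The external archive mentioned above keeps only non-dominated solutions. A minimal sketch of that core operation, Pareto-dominance filtering for a minimization problem, is shown below; the candidate portfolios and their (risk, negative return) objective values are invented for illustration and are not from the paper.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset, as an external archive would keep it."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Objective vectors (risk, negative return) for four candidate portfolios.
candidates = [(0.2, -0.08), (0.3, -0.10), (0.25, -0.07), (0.2, -0.05)]
print(pareto_front(candidates))  # [(0.2, -0.08), (0.3, -0.10)]
```

Negating return turns the maximize-return objective into a minimization one, so a single dominance rule covers both objectives.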

45 pages, 1071 KB  
Article
Reducing Waste in Retail: A Mixed Strategy, Cost Optimization Model for Sustainable Dead Stock Management
by Richard Li, Rosemary Seva and Anthony Chiu
Sustainability 2025, 17(20), 9242; https://doi.org/10.3390/su17209242 - 17 Oct 2025
Viewed by 3519
Abstract
The retail sector is the most demand-sensitive echelon in the supply chain, where non-moving items accumulate and become dead stock. Existing inventory management studies focus on fast-moving products and income generation. This paper focuses on dead stock management and proposes a mixed strategy solution using a pure integer non-linear programming model that minimizes the dead stock management cost of a retail chain operator. The number of products and volume of product-related data in a retail chain system require big data analysis to ensure sustainable inventory practices that reduce waste generated from dead stock inventory. In a run on a hypothetical data set with 3 stores and 10 products, the discount percentage, the expected sales success probability of a product in a store location, and the disposition of unsold products were the main drivers of the model's decisions. The most significant cost contributors arising from these decisions were the unrecovered product cost (UPC), disposed product cost (PC), and salvage value from the successful sale of dead stock. Inventory managers must balance the effect on these cost components when they choose the strategies to use in managing dead stock.
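
The trade-off among unrecovered cost, disposal cost, and salvage value can be sketched numerically. The cost formula below is a simplified illustration under assumed parameters, not the paper's integer non-linear program; all prices, probabilities, and disposal costs are invented.

```python
def strategy_cost(unit_cost, units, discount, sell_prob, dispose_cost):
    """Expected cost of discounting dead stock: sold units recover a
    discounted salvage value, unsold units incur unrecovered product cost
    plus disposal cost (an illustrative formulation)."""
    salvage = unit_cost * (1 - discount) * units * sell_prob   # value recovered
    unrecovered = unit_cost * units * (1 - sell_prob)          # UPC component
    disposal = dispose_cost * units * (1 - sell_prob)          # disposal component
    return unrecovered + disposal - salvage

# Compare a deep discount with a high sell probability against a shallow one.
deep = strategy_cost(10.0, 100, discount=0.5, sell_prob=0.9, dispose_cost=1.0)
shallow = strategy_cost(10.0, 100, discount=0.1, sell_prob=0.3, dispose_cost=1.0)
print(deep < shallow)  # True: the deep discount is cheaper overall here
```

The example shows why discount percentage and sell probability dominate the decision: a deeper discount lowers salvage per unit but can more than pay for itself by reducing the unsold fraction.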

8 pages, 1093 KB  
Proceeding Paper
Predicting Big Mart Sales with Machine Learning
by Muhammad Husban, Azka Mir and Indra Yustiana
Eng. Proc. 2025, 107(1), 95; https://doi.org/10.3390/engproc2025107095 - 16 Sep 2025
Viewed by 2249
Abstract
Currently, supermarket-run shopping centers, known as "Big Marts," monitor sales information for every single item in order to predict potential customer demand and update inventory management. Anomalies and general trends are commonly discovered through data warehouse mining using a range of machine learning techniques, and businesses such as Big Marts can use the obtained data to forecast future sales volumes. Compared with prior studies, this work forecasts sales with higher accuracy using machine learning models including KNN (K-Nearest Neighbors), Naïve Bayes, and Random Forest. To adapt the proposed business model to anticipated outcomes, the sales forecast is based on Big Mart sales for various stores. Using different machine learning methods, the data that is produced may then be used to predict potential sales volumes for retailers such as Big Marts. The projected cost of the suggested system includes the following identifiers: price, outlet, and outlet location. In order to facilitate data-driven decision-making in retail operations and help Big Marts optimize their business models and effectively satisfy anticipated demand, this study emphasizes the importance of incorporating cutting-edge machine learning approaches.
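
One of the models named above, KNN, can be sketched as a small regression from item/outlet features to sales. The feature encoding and training rows below are invented for illustration; they are not the Big Mart dataset.

```python
def knn_predict(train, query, k=3):
    """Predict a sales value as the mean of the k nearest neighbours
    by Euclidean distance over numeric item/outlet features."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda row: dist(row[0], query))[:k]
    return sum(sales for _, sales in nearest) / k

# Hypothetical rows: (item_price, outlet_size_code) -> item sales.
train = [((150.0, 1), 3200.0), ((148.0, 1), 3100.0),
         ((60.0, 0), 900.0), ((155.0, 2), 3400.0), ((58.0, 0), 850.0)]
print(knn_predict(train, (152.0, 1)))  # mean of the 3 closest: ~3233.33
```

In practice features with different scales would be normalized first, since raw Euclidean distance otherwise lets the largest-magnitude feature dominate the neighbour choice.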

41 pages, 9508 KB  
Article
CTAARCHS: Cloud-Based Technologies for Archival Astronomical Research Contents and Handling Systems
by Stefano Gallozzi, Georgios Zacharis, Federico Fiordoliva and Fabrizio Lucarelli
Metrics 2025, 2(3), 18; https://doi.org/10.3390/metrics2030018 - 8 Sep 2025
Viewed by 1223
Abstract
This paper presents a flexible approach to a multipurpose, heterogeneous archive and data management system model that merges the robustness of legacy grid-based technologies with modern cloud and edge computing paradigms. It leverages innovations driven by big data, IoT, AI, and machine learning to create an adaptive data storage and processing framework. In today’s digital age, where data are the new intangible gold, the “gold rush” lies in managing and storing massive datasets effectively—especially when these data serve governmental or commercial purposes, raising concerns about privacy and data misuse by third-party aggregators. Astronomical data, in particular, require this same thoughtful approach. Scientific discovery increasingly depends on efficient extraction and processing of large datasets. Distributed archival models, unlike centralized warehouses, offer scalability by allowing data to be accessed and processed across locations via cloud services. Incorporating edge computing further enables real-time access with reduced latency. Major astronomical projects must also avoid common single points of failure (SPOFs), often resulting from suboptimal technological choices driven by collaboration politics or In-Kind Contributions (IKCs). These missteps can hinder innovation and long-term project success. The principal goal of this work is to outline best practices in archival and data management projects—from policy development and task planning to use-case definition and implementation. Only after these steps can a coherent selection of hardware, software, or virtual environments be made. The proposed model—CTAARCHS (Cloud-based Technologies for Astronomical Archiving Research Contents and Handling Systems)—is an open-source, multidisciplinary platform supporting big data needs in astronomy. It promotes broad institutional collaboration, offering code repositories and sample data for immediate use.

23 pages, 2699 KB  
Article
Leveraging Visual Side Information in Recommender Systems via Vision Transformer Architectures
by Arturo Álvarez-Sánchez, Diego M. Jiménez-Bravo, María N. Moreno-García, Sergio García González and David Cruz García
Electronics 2025, 14(17), 3550; https://doi.org/10.3390/electronics14173550 - 6 Sep 2025
Cited by 1 | Viewed by 1509
Abstract
Recommender systems are essential tools in the digital age, helping users discover products, content, and services across platforms like streaming services, online stores, and social networks. Traditionally, these systems have relied on methods such as collaborative filtering, content-based, and knowledge-based approaches, using data like user–item interactions and demographic details. With the rise of big data, an increasing amount of “side information”, like contextual data, social behavior, and metadata, has become available, enabling more personalized and effective recommendations. This work provides a comparative analysis of traditional recommender systems and newer models incorporating side information, particularly visual features, to determine whether integrating such data improves recommendation quality. By evaluating the benefits and limitations of using complex formats like visual content, this work aims to contribute to the development of more robust and adaptive recommender systems, offering insights for future research in the field.
(This article belongs to the Special Issue Application of Data Mining in Social Media)

29 pages, 919 KB  
Article
DDoS Defense Strategy Based on Blockchain and Unsupervised Learning Techniques in SDN
by Shengmin Peng, Jialin Tian, Xiangyu Zheng, Shuwu Chen and Zhaogang Shu
Future Internet 2025, 17(8), 367; https://doi.org/10.3390/fi17080367 - 13 Aug 2025
Cited by 1 | Viewed by 1525
Abstract
With the rapid development of technologies such as cloud computing, big data, and the Internet of Things (IoT), Software-Defined Networking (SDN) is emerging as a new network architecture for the modern Internet. SDN separates the control plane from the data plane, allowing a central controller, the SDN controller, to quickly direct the routing devices within the topology to forward data packets, thus providing flexible traffic management for communication between information sources. However, traditional Distributed Denial of Service (DDoS) attacks still significantly impact SDN systems. This paper proposes a novel dual-layer strategy capable of detecting and mitigating DDoS attacks in an SDN network environment. The first layer of the strategy enhances security by using blockchain technology to replace the SDN flow table storage container in the northbound interface of the SDN controller. Smart contracts are then used to process the stored flow table information. We employ the time window algorithm and the token bucket algorithm to construct the first-layer strategy to defend against obvious DDoS attacks. To detect and mitigate less obvious DDoS attacks, we design a second-layer strategy that uses a composite data feature correlation coefficient calculation method and the Isolation Forest algorithm from unsupervised learning to perform binary classification, thereby identifying abnormal traffic. We conduct experimental validation using the publicly available DDoS dataset CIC-DDoS2019. The results show that using this strategy in the SDN network reduces the average deviation of round-trip time (RTT) by approximately 38.86% compared with the original SDN network without this strategy. Furthermore, the accuracy of DDoS attack detection reaches 97.66%, with an F1 score of 92.2%. Compared with other similar methods, under comparable detection accuracy, the deployment of our strategy in small-scale SDN network topologies provides faster detection speeds for DDoS attacks and exhibits less fluctuation in detection time. This indicates that implementing this strategy can effectively identify DDoS attacks without affecting the stability of data transmission in the SDN network environment.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
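
The token bucket algorithm named in the first-layer strategy can be sketched in a few lines. The capacity and refill rate below are arbitrary illustration values; the paper's actual parameters and its time-window combination are not reproduced here.

```python
class TokenBucket:
    """Token-bucket rate limiter: packets are forwarded while tokens remain,
    so a flow exceeding the refill rate (as in a flooding attack) is dropped."""
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity          # maximum burst size, in tokens
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, refill_rate=2.0)
# A burst of 10 packets at t=0: only the 5-token burst capacity gets through.
passed = sum(bucket.allow(0.0) for _ in range(10))
print(passed)  # 5
```

Legitimate traffic paced under the refill rate passes indefinitely, while a flood exhausts the bucket almost immediately, which is what makes the scheme effective against the "obvious" attacks the first layer targets.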

18 pages, 6788 KB  
Review
Weather Forecasting Satellites—Past, Present, & Future
by Etai Nardi, Ohad Cohen, Yosef Pinhasi, Motti Haridim and Jacob Gavan
Information 2025, 16(8), 677; https://doi.org/10.3390/info16080677 - 8 Aug 2025
Cited by 2 | Viewed by 4434
Abstract
Climate change has made weather more erratic and unpredictable. As a result, a growing need to develop more reliable short-term weather prediction models paved the way for a new era in satellite instrumentation technology, where radar systems for meteorological applications became critically important. This paper presents a comprehensive review of the evolution of weather forecasting satellites. We trace the technological development from the early weather and climate monitoring systems of the 1960s. From the early stabilized TV camera platforms, which captured cloud cover data and stored it on magnetic tape for later readout and transmission to ground stations, satellite sensor instrument technologies took great strides in the following decades, incorporating advancements in image and signal processing into satellite imagery methodologies. As innovative as they were, these technologies still lacked the capabilities needed to allow for practical use cases other than scientific research. The paper further examines how the next phase of satellite platforms is aimed at addressing this technological gap by leveraging the advantages of low Earth orbit (LEO) based satellite constellation deployments for near-real-time tracking of atmospheric hydrometeors and precipitation profiles through innovative methods. These methods involve combining the collected data into big-data lakes on internet cloud platforms and constructing innovative AI-based multi-layered weather prediction models specifically tailored to remote sensing. Finally, we discuss how these recent advancements form the basis for new applications in aviation, severe weather readiness, energy, agriculture, and beyond.
(This article belongs to the Special Issue Sensing and Wireless Communications)

22 pages, 2702 KB  
Article
Spatial Heterogeneity of Intra-Urban E-Commerce Demand and Its Retail-Delivery Interactions: Evidence from Waybill Big Data
by Yunnan Cai, Jiangmin Chen and Shijie Li
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 190; https://doi.org/10.3390/jtaer20030190 - 1 Aug 2025
Viewed by 1950
Abstract
E-commerce growth has reshaped consumer behavior and retail services, driving parcel demand and challenging last-mile logistics. Existing research predominantly relies on survey data and global regression models that overlook intra-urban spatial heterogeneity in shopping behaviors. This study bridges this gap by analyzing e-commerce demand’s spatial distribution from a retail service perspective, identifying key drivers, and evaluating implications for omnichannel strategies and logistics. Utilizing waybill big data, spatial analysis, and multiscale geographically weighted regression, we reveal: (1) High-density e-commerce demand areas are predominantly located in central districts, whereas peripheral regions exhibit statistically lower volumes. The spatial distribution pattern of e-commerce demand aligns with the urban development spatial structure. (2) Factors such as population density and education levels significantly influence e-commerce demand. (3) Convenience stores play a dual role as retail service providers and parcel collection points, reinforcing their importance in shaping consumer accessibility and service efficiency, particularly in underserved urban areas. (4) Supermarkets exert a substitution effect on online shopping by offering immediate product availability, highlighting their role in shaping consumer purchasing preferences and retail service strategies. These findings contribute to retail and consumer services research by demonstrating how spatial e-commerce demand patterns reflect consumer shopping preferences, the role of omnichannel retail strategies, and the competitive dynamics between e-commerce and physical retail formats.
(This article belongs to the Topic Data Science and Intelligent Management)

22 pages, 1159 KB  
Article
Compaction-Aware Flash Memory Remapping for Key–Value Stores
by Jialin Wang, Zhen Yang, Yi Fan and Yajuan Du
Micromachines 2025, 16(6), 699; https://doi.org/10.3390/mi16060699 - 11 Jun 2025
Viewed by 2239
Abstract
With the rapid development of big data and artificial intelligence, the demand for memory has exploded. As a key data structure in modern databases and distributed storage systems, the Log-Structured Merge Tree (LSM-tree) has been widely employed in key–value stores (such as LevelDB, RocksDB, etc.) due to its efficient write performance. In LSM-tree-based KV stores, typically deployed on systems with DRAM-SSD storage, KV items are first organized into a MemTable, a main-memory buffer for SSTables. When the buffer size exceeds the threshold, the MemTable is flushed to the SSD and reorganized into an SSTable, which is then passed down level by level through compaction. However, compaction degrades write performance and SSD endurance due to significant write amplification. To address this issue, recent proposals have mostly focused on redesigning the structure of LSM-trees. We discover the prevalence of unchanged data blocks (UDBs) in the LSM-tree compaction process, i.e., UDBs are written back to the SSD exactly as they were read into memory, which induces extra write amplification and degrades I/O performance. In this paper, we propose a KV store design in SSD, called RemapCom, to exploit remapping on these UDBs. RemapCom first identifies UDBs with a lightweight state machine integrated into the compaction merge process. To increase the ratio of UDBs, RemapCom further designs a UDB retention method that increases the benefit of remapping. Moreover, we implement a prototype of RemapCom on LevelDB by providing two primitives for the remapping. Compared to the state of the art, the evaluation results demonstrate that RemapCom can reduce write amplification by up to 53% and improve write throughput by up to 30%.
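
The core observation, that some blocks leave a compaction merge byte-identical to how they entered, can be sketched with content hashes. This is an illustration only: RemapCom identifies UDBs with a state machine inside the merge, not by hashing, and the block contents below are invented.

```python
import hashlib

def find_unchanged_blocks(input_blocks, output_blocks):
    """Identify output data blocks that are byte-identical to some input
    block (UDBs): such blocks could be remapped in the SSD's mapping table
    instead of being rewritten to flash."""
    seen = {hashlib.sha256(b).hexdigest() for b in input_blocks}
    return [i for i, b in enumerate(output_blocks)
            if hashlib.sha256(b).hexdigest() in seen]

inputs = [b"key1:v1", b"key2:v2", b"key3:v3"]
outputs = [b"key1:v1", b"key2:v2-updated", b"key3:v3"]  # key2 changed in merge
print(find_unchanged_blocks(inputs, outputs))  # [0, 2]
```

Every index returned is a write that remapping would avoid, which is where the reported reduction in write amplification comes from.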

19 pages, 2055 KB  
Article
Design and Implementation of a Scalable Data Warehouse for Agricultural Big Data
by Asterios Theofilou, Stefanos A. Nastis, Michail Tsagris, Santiago Rodriguez-Perez and Konstadinos Mattas
Sustainability 2025, 17(8), 3727; https://doi.org/10.3390/su17083727 - 20 Apr 2025
Cited by 3 | Viewed by 3474
Abstract
The rapid growth of agricultural data necessitates the development of storage systems that are scalable and efficient in storing, retrieving and analyzing very large datasets. The traditional relational database management systems (RDBMSs) struggle to keep up with large-scale analytical queries due to the volume and complexity inherent in those data. This study presents the design and implementation of a scalable data warehouse (DWH) system for agricultural big data. The proposed solution efficiently integrates data and optimizes data ingestion, transformation, and query performance, leveraging a distributed architecture based on HDFS, Apache Hive, and Apache Spark, deployed on dockerized Ubuntu Linux environments. This paper highlights the reasons why a DWH is irreplaceable for big data processing, without disputing the strengths of traditional databases in transactional use cases. By detailing the architectural choices and implementation strategy, this study provides a practical framework for deploying robust DWH solutions that are useful in supporting agricultural research, market predictions and policy decision-making.

19 pages, 7037 KB  
Article
An Artificial Intelligence Home Monitoring System That Uses CNN and LSTM and Is Based on the Android Studio Development Platform
by Guo-Ming Sung, Sachin D. Kohale, Te-Hui Chiang and Yu-Jie Chong
Appl. Sci. 2025, 15(3), 1207; https://doi.org/10.3390/app15031207 - 24 Jan 2025
Cited by 2 | Viewed by 1628
Abstract
This paper developed an artificial intelligence home environment monitoring system using the Android Studio development platform. A database was constructed within a server to store sensor data. The proposed system comprises multiple sensors, a message queueing telemetry transport (MQTT) communication protocol, cloud data storage and computation, and end device control. A mobile application was developed using MongoDB, a document-oriented NoSQL database management system written in C++, which serves as the database for processing the big sensor data. The k-nearest neighbor (KNN) algorithm was used to impute missing data. Node-RED development software was used within the server as a data-receiving, storage, and computing environment that is convenient to manage and maintain. Data on indoor temperature, humidity, and carbon dioxide concentrations are transmitted to a mobile phone application through the MQTT communication protocol for real-time display and monitoring. The system can control a fan or warning light through the mobile application to maintain ambient temperature inside the house and to warn users of emergencies. A long short-term memory (LSTM) model and a convolutional neural network (CNN) model were used to predict indoor temperature, humidity, and carbon dioxide concentrations. Average relative errors in the predicted values of humidity and carbon dioxide concentration were approximately 0.0415% and 0.134%, respectively, for data storage using the KNN algorithm. For indoor temperature prediction, the LSTM model had a mean absolute percentage error of 0.180% and a root-mean-squared error of 0.042 °C. The CNN–LSTM model had a mean absolute percentage error of 1.370% and a root-mean-squared error of 0.117 °C.
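
The KNN imputation step mentioned above can be sketched for a single sensor channel: a missing reading is filled with the mean of its k temporally nearest non-missing readings. This 1-D formulation and the sample temperatures are illustrative assumptions, not the paper's exact method.

```python
def knn_impute(series, k=2):
    """Fill each missing reading (None) with the mean of the k nearest
    non-missing readings by time index."""
    known = [(i, v) for i, v in enumerate(series) if v is not None]
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            # Sort known readings by temporal distance and average the closest k.
            nearest = sorted(known, key=lambda kv: abs(kv[0] - i))[:k]
            out[i] = sum(val for _, val in nearest) / len(nearest)
    return out

temps = [22.0, 22.4, None, 23.0, None, 23.6]  # indoor readings in °C, with gaps
print(knn_impute(temps))
```

Gap-filling before training matters here because both the LSTM and CNN predictors consume fixed-length windows and cannot tolerate holes in the input sequence.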

13 pages, 708 KB  
Article
Enhancing Decision-Making and Data Management in Healthcare: A Hybrid Ensemble Learning and Blockchain Approach
by Geetanjali Rathee and Razi Iqbal
Technologies 2025, 13(2), 43; https://doi.org/10.3390/technologies13020043 - 23 Jan 2025
Cited by 2 | Viewed by 2159
Abstract
Currently, big data is considered one of the most significant areas of research and development. The advancement in technologies, along with the involvement of intelligent and automated devices in each field of development, leads to the generation, analysis, and recording of huge volumes of information in the network. Though a number of schemes have been proposed for providing accurate decision-making while analyzing the records, the existing methods lead to massive delays and difficulty in the management of stored information. Furthermore, the excessive delays in information processing pose a critical challenge to making accurate decisions in the context of big data. The aim of this paper is to propose an effective approach for accurate decision-making and analysis of the vast volumes of data generated by intelligent devices in the healthcare sector. The processed and managed records can be stored and accessed in a systematic and efficient manner. The proposed mechanism uses a hybrid of ensemble learning and blockchain for fast and continuous recording and surveillance of information. The recorded information is analyzed using several existing methods focusing on various measurement outcomes. The results of the proposed technique are compared with existing techniques through various experiments that demonstrate the efficiency and accuracy of this technique.
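
The blockchain half of the hybrid can be illustrated as a minimal hash-chained ledger of health records: each block binds its payload to the previous block's hash, so any later tampering is detectable. This is a generic sketch with invented record fields, not the paper's system, and it omits consensus and the ensemble-learning side entirely.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Append-only block: the hash covers both the payload and the link
    to the previous block, so editing either breaks verification."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash and check each block links to its predecessor."""
    for prev, blk in zip(chain, chain[1:]):
        payload = json.dumps({"record": blk["record"], "prev": blk["prev"]},
                             sort_keys=True)
        if blk["prev"] != prev["hash"]:
            return False
        if blk["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

genesis = make_block({"patient": "p-001", "hr": 72}, prev_hash="0" * 64)
chain = [genesis, make_block({"patient": "p-001", "hr": 75}, genesis["hash"])]
print(verify_chain(chain))  # True
chain[1]["record"]["hr"] = 40  # tamper with a stored reading
print(verify_chain(chain))  # False
```

The tamper check is what makes such a ledger suitable for continuous surveillance of records: readers can audit integrity without trusting the storage layer.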

15 pages, 6590 KB  
Article
The Analysis of Customers’ Transactions Based on POS and RFID Data Using Big Data Analytics Tools in the Retail Space of the Future
by Marina Kholod, Alberto Celani and Gianandrea Ciaramella
Appl. Sci. 2024, 14(24), 11567; https://doi.org/10.3390/app142411567 - 11 Dec 2024
Cited by 4 | Viewed by 10064
Abstract
In today’s business landscape, the volume of transaction data is rapidly increasing. This study explores the integration of Point of Sale (POS) and Radio-Frequency Identification (RFID) technologies to enhance the analysis of customer transactions using big data tools. By leveraging these technologies, businesses can extract valuable insights to improve processes, optimize inventory, and boost customer satisfaction. The research employs an object-subject management approach, which facilitates real-time decision-making by merging clients' retail transactions with their movement patterns. An experiment involving around 7000 customers demonstrates the effective collection and processing of POS and RFID data, highlighting the benefits of integrating these data streams. Key metrics, such as time spent in different store sections, provide deeper insights into consumer behavior. The findings reveal the potential of these technologies to transform retail services, offering opportunities for demand forecasting, risk management, and personalized customer experiences. The study concludes that merging POS and RFID data opens new avenues for developing management solutions aimed at enhancing customer engagement and the operational efficiency of the retailer. Future research will focus on further elaborating these solutions to maximize the benefits of integrated data analysis. 
(This article belongs to the Special Issue Applied Machine Learning for Information Retrieval)
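
The "time spent in different store sections" metric can be derived directly from ordered RFID reads. The event schema, section names, and the assumption that a customer stays in the last-seen section until the next read are illustrative choices, not details from the paper.

```python
from collections import defaultdict

def dwell_times(events):
    """Sum the time each customer spends in each store section, given
    time-ordered RFID reads of (customer_id, section, timestamp_seconds)."""
    last = {}                       # customer -> (current section, entry time)
    totals = defaultdict(float)     # (customer, section) -> seconds
    for cust, section, ts in events:
        if cust in last:
            prev_section, t0 = last[cust]
            totals[(cust, prev_section)] += ts - t0
        last[cust] = (section, ts)
    return dict(totals)

reads = [("c1", "dairy", 0), ("c1", "bakery", 120), ("c1", "checkout", 300)]
print(dwell_times(reads))  # {('c1', 'dairy'): 120.0, ('c1', 'bakery'): 180.0}
```

Joining these dwell times with POS baskets on the customer key is the kind of merge of transactions and movement patterns the study describes.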