Topic Editors

Computer Science Department, College of Engineering, Effat University, Jeddah, Saudi Arabia
Department of Economics and Economic Policy, Bucharest University of Economic Studies, 010374 Bucharest, Romania

Big Data and Artificial Intelligence, 3rd Edition

Abstract submission deadline
30 May 2026
Manuscript submission deadline
30 August 2026
Viewed by
6141

Topic Information

Dear Colleagues,

Recent advances in research on big data and artificial intelligence are reshaping almost every domain of human activity. The potential of artificial intelligence to act as a catalyst for existing business models, and the capacity of big data research to deliver sophisticated data and service ecosystems at a global scale, create a challenging context for scientific contributions and applied research. This Topic promotes scientific dialogue on the added value of novel methodological approaches and research in these areas. Our interests span the entire end-to-end spectrum of big data and artificial intelligence research, from the social sciences to computer science, including strategic frameworks, models, and best practices, as well as research aimed at radical innovation. The topics include, but are not limited to, the following indicative list:

  • Enabling Technologies for Big Data and AI Research
    • Data warehouses;
    • Business intelligence;
    • Machine learning;
    • Neural networks;
    • Natural language processing;
    • Image processing;
    • Bot technology;
    • AI agents;
    • Analytics and dashboards;
    • Distributed computing;
    • Edge computing.
  • Methodologies, Frameworks, and Models for Artificial Intelligence and Big Data Research
    • Towards sustainable development goals;
    • As responses to social problems and challenges;
    • For innovations in business, research, academia, industry, and technology;
    • For theoretical foundations and contributions to the body of knowledge of AI and Big Data research.
  • Best practices and use cases;
  • Outcomes of R&D projects;
  • Advanced data science analytics;
  • Industry–government collaboration;
  • Systems of information systems;
  • Interoperability issues;
  • Security and privacy issues;
  • Ethics of big data and AI;
  • Social impact of AI;
  • Open data.

Prof. Dr. Miltiadis D. Lytras
Prof. Dr. Andreea Claudia Serban
Topic Editors

Keywords

  • artificial intelligence
  • big data
  • machine learning
  • open data
  • decision making

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched | First Decision (median) | APC
AI | 5.0 | 6.9 | 2020 | 19.2 days | CHF 1800
Big Data and Cognitive Computing (BDCC) | 4.4 | 9.8 | 2017 | 23.1 days | CHF 1800
Future Internet | 3.6 | 8.3 | 2009 | 16.1 days | CHF 1800
Information | 2.9 | 6.5 | 2010 | 20.9 days | CHF 1800
Sustainability | 3.3 | 7.7 | 2009 | 17.9 days | CHF 2400

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (5 papers)

31 pages, 5840 KB  
Systematic Review
A Systematic Review of Ontology–AI Integration for Construction Image Recognition
by Yerim Kim, Jihyun Hwang, Seungjun Lee and Seulki Lee
Information 2026, 17(1), 48; https://doi.org/10.3390/info17010048 - 4 Jan 2026
Viewed by 579
Abstract
This study presents a systematic review of ontology–AI integration for construction image understanding, aiming to clarify how ontologies enhance semantic consistency, interpretability, and reasoning in AI-based visual analysis. Construction sites involve highly dynamic and unstructured conditions, making image-based hazard detection and situation assessment both essential and challenging. Ontology-based frameworks offer a structured semantic layer that can complement deep learning models; however, most existing studies adopt ontologies only as post-processing mechanisms rather than embedding them within model training or inference workflows. Following PRISMA 2020 guidelines, a comprehensive search of the Web of Science Core Collection (2014–2025) identified 587 publications, of which 152 met the eligibility criteria, and 16 explicitly addressed construction image data. Topic modeling revealed five functional objectives—regulatory compliance, hazard reasoning, decision support, knowledge reuse, and sustainability—and four primary data modalities: BIM, text, image, and sensor data. Two dominant integration patterns were observed: training-stage and output-stage enhancement. While quantitative performance improvements were modest, qualitative gains were consistent across studies, including reduced false positives, improved interpretability, and enhanced situational understanding. Persistent gaps were identified in standardization, scalability, and real-world validation. This review provides the first structured synthesis of ontology–AI research for construction image understanding and offers an evidence-based research agenda that links observed limitations to actionable directions for semantic AI in construction. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)

26 pages, 3290 KB  
Article
Empirical Evaluation of Big Data Stacks: Performance and Design Analysis of Hadoop, Modern, and Cloud Architectures
by Widad Elouataoui and Youssef Gahi
Big Data Cogn. Comput. 2026, 10(1), 7; https://doi.org/10.3390/bdcc10010007 - 24 Dec 2025
Viewed by 905
Abstract
The proliferation of big data applications across various industries has led to a paradigm shift in data architecture, with traditional approaches giving way to more agile and scalable frameworks. The evolution of big data architecture began with the emergence of the Hadoop-based data stack, leveraging technologies like Hadoop Distributed File System (HDFS) and Apache Spark for efficient data processing. However, recent years have seen a shift towards modern data stacks, offering flexibility and diverse toolsets tailored to specific use cases. Concurrently, cloud computing has revolutionized big data management, providing unparalleled scalability and integration capabilities. Despite their benefits, navigating these data stack paradigms can be challenging. While existing literature offers valuable insights into individual data stack paradigms, there remains a dearth of studies that offer practical, in-depth comparisons of these paradigms across the entire big data value chain. To address this gap in the field, this paper examines three main big data stack paradigms: the Hadoop data stack, modern data stack, and cloud-based data stack. Indeed, we conduct in this study an exhaustive architectural comparison of these stacks covering the entire big data value chain from data acquisition to exposition. Moreover, this study extends beyond architectural considerations to include end-to-end use case implementations for a comprehensive evaluation of each stack. Using a large dataset of Amazon reviews, different data stack scenarios are implemented and compared. Furthermore, the paper explores critical factors such as data integration, implementation costs, and ease of deployment to provide researchers and practitioners with a relevant and up-to-date reference for navigating the complex landscape of big data technologies and making informed decisions about data strategies. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)
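To make the Hadoop-stack side of this comparison concrete, the sketch below shows a minimal PySpark batch job of the kind such a benchmark might run: loading a reviews dataset and aggregating ratings per product. It is an illustrative sketch only; the HDFS path, column names (`asin`, `overall`), and Spark configuration are assumptions, not the authors' actual pipeline.

```python
# Illustrative PySpark batch job; NOT the paper's benchmark code.
# Assumes a JSON dump of Amazon reviews with columns `asin` and `overall`.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("reviews-aggregation-sketch")
    .getOrCreate()
)

# Hypothetical HDFS path; replace with your own dataset location.
reviews = spark.read.json("hdfs:///data/amazon_reviews/*.json")

# Average rating and review count per product.
summary = (
    reviews.groupBy("asin")
    .agg(
        F.avg("overall").alias("avg_rating"),
        F.count("*").alias("n_reviews"),
    )
    .orderBy(F.desc("n_reviews"))
)

summary.show(10)
spark.stop()
```

The same logical workload could be re-expressed against a modern or cloud-based stack (e.g., a warehouse SQL query), which is essentially the kind of cross-stack comparison the study performs.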

21 pages, 1384 KB  
Article
Exploring the Impact of Generative AI on Digital Inclusion: A Case Study of the E-Government Divide
by Stefan Radojičić and Dragan Vukmirović
AI 2025, 6(12), 303; https://doi.org/10.3390/ai6120303 - 25 Nov 2025
Viewed by 1823
Abstract
This paper examines how Generative AI (GenAI) reshapes digital inclusion in e-government. We develop the E-Government Divide Measurement Indicator (EGDMI) across three dimensions: D1—Breadth of the Divide (foundational access, affordability, and basic skills), D2—Sectoral/Specific Divide (actual use, experience, and trust in e-government), and D3—GenAI Gap (access, task use, and competence). The index architecture specifies indicator lists, sources, units, transformations, uniform normalization, and a documented weighting strategy with sensitivity and basic uncertainty checks. Using official statistics and qualitative evidence for Serbia, we report D1 and D2 as composite indices and treat D3 as an exploratory, non-aggregated layer given current data maturity. Results show strong foundational readiness (D1 = 73.6) but very low e-government uptake (D2 = 19.9), indicating a shift of the divide from access to meaningful use, usability, and trust. GenAI capabilities are emergent and uneven (D3 sub-dimensions: access 47.8; task use 39.4; competence/verification 43.6). Cluster analysis identifies four user profiles—from “Digitally Excluded” to “GenAI-Augmented Citizens”— that support differentiated interventions. The initial hypothesis—that GenAI can widen disparities in the short run—receives partial confirmation: GenAI may lower interaction costs but raises verification and ethics thresholds for vulnerable groups. We outline a policy roadmap prioritizing human-centered service redesign, transparency, and GenAI literacy before automation, and provide reporting templates to support comparable monitoring and cross-country learning. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)
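The "uniform normalization" and "documented weighting strategy" mentioned above follow the standard composite-indicator recipe: rescale each raw indicator onto a common range, then take a weighted average. The Python sketch below illustrates that recipe with invented indicator names, bounds, and weights; it is not the EGDMI specification from the paper.

```python
# Minimal composite-index sketch (min-max normalisation + weighted average).
# Indicator names, values, bounds, and weights are invented for illustration;
# they are NOT the EGDMI indicators or weights defined in the paper.

def min_max(value, lo, hi):
    """Normalise a raw indicator onto a 0-100 scale."""
    return 100.0 * (value - lo) / (hi - lo)

# Hypothetical indicators: (raw value, theoretical min, theoretical max, weight)
indicators = {
    "egov_use_share":   (22.0, 0.0, 100.0, 0.5),
    "egov_trust_score": (3.1, 1.0, 5.0, 0.3),
    "egov_experience":  (2.4, 1.0, 5.0, 0.2),
}

normalised = {k: min_max(v, lo, hi) for k, (v, lo, hi, _) in indicators.items()}
weights = {k: w for k, (_, _, _, w) in indicators.items()}

composite = sum(normalised[k] * weights[k] for k in indicators) / sum(weights.values())
print(f"Composite sub-index: {composite:.1f}")  # a D2-style score on a 0-100 scale
```

Sensitivity checks of the kind the abstract mentions would then re-run this aggregation under perturbed weights and compare the resulting scores.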

18 pages, 966 KB  
Article
Deep Learning Approaches for Classifying Aviation Safety Incidents: Evidence from Australian Data
by Aziida Nanyonga, Keith Francis Joiner, Ugur Turhan and Graham Wild
AI 2025, 6(10), 251; https://doi.org/10.3390/ai6100251 - 1 Oct 2025
Viewed by 1241
Abstract
Aviation safety remains a critical area of research, requiring accurate and efficient classification of incident reports to enhance risk assessment and accident prevention strategies. This study evaluates the performance of three deep learning models, BERT, Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM), for classifying incidents based on injury severity levels: Nil, Minor, Serious, and Fatal. The dataset, drawn from ATSB records covering the years 2013 to 2023, consists of 53,273 records. The models were trained using a standardized preprocessing pipeline, with hyperparameter tuning to optimize performance. Model performance was evaluated using metrics such as F1-score, accuracy, recall, and precision. Results revealed that BERT outperformed both LSTM and CNN across all metrics, achieving near-perfect scores (1.00) for precision, recall, F1-score, and accuracy in all classes. In comparison, LSTM achieved an accuracy of 99.01%, with strong performance in the “Nil” class, but less favorable results for the “Minor” class. CNN, with an accuracy of 98.99%, excelled in the “Fatal” and “Serious” classes, though it showed moderate performance in the “Minor” class. BERT’s flawless performance highlights the strengths of the transformer architecture for complex text classification problems. These findings underscore the strengths and limitations of traditional deep learning models versus transformer-based approaches, providing valuable insights for future research in aviation safety analysis. Future work will explore integrating ensemble methods, domain-specific embeddings, and model interpretability to further improve classification performance and transparency in aviation safety prediction. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)
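For orientation, the snippet below sketches what an LSTM baseline of the kind compared above typically looks like in Keras: an embedding layer, a recurrent encoder, and a four-way softmax over the injury-severity classes. Vocabulary size, sequence length, and layer widths are arbitrary assumptions, and the snippet is not the authors' implementation.

```python
# Illustrative Keras LSTM classifier for four injury-severity classes.
# Hyperparameters and preprocessing choices are placeholders, not the paper's.
import tensorflow as tf

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 200         # assumed padded report length
NUM_CLASSES = 4       # Nil, Minor, Serious, Fatal

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
# where x_train holds integer-encoded reports padded to MAX_LEN
# and y_train holds the class ids (0-3).
```

A BERT-based alternative would replace the embedding and LSTM layers with a pretrained transformer encoder fine-tuned on the same labels.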

13 pages, 9468 KB  
Article
Collaborative Fusion Attention Mechanism for Vehicle Fault Prediction
by Hong Jia, Dalin Qian, Fanghua Chen and Wei Zhou
Future Internet 2025, 17(9), 428; https://doi.org/10.3390/fi17090428 - 19 Sep 2025
Cited by 2 | Viewed by 722
Abstract
In this study, we investigate a deep learning-based vehicle fault prediction model aimed at achieving accurate prediction of vehicle faults by analyzing the correlations among different faults and the impact of critical faults on future fault development. To this end, we propose a collaborative modeling approach utilizing multiple attention mechanisms. This approach incorporates a graph attention mechanism for the fusion representation of fault correlation information and employs a novel learning method that combines a Long Short-Term Memory (LSTM) network with an attention mechanism to capture the impact of key faults. Based on experimental validation using real-world vehicle fault record data, the model significantly outperforms existing prediction models in terms of fault prediction accuracy. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)
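The "LSTM combined with an attention mechanism" component described above corresponds to a common pattern: encode the fault-event sequence with an LSTM and pool its hidden states with learned attention weights. The PyTorch sketch below shows only that generic pattern, with assumed dimensions; the paper's graph-attention fusion of fault-correlation information is not reproduced here.

```python
# Generic LSTM-with-attention encoder in PyTorch (illustrative only; this is
# not the paper's collaborative fusion model, and all dimensions are assumed).
import torch
import torch.nn as nn

class AttnLSTMEncoder(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)           # scores each time step
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_dim) sequence of per-step fault features
        h, _ = self.lstm(x)                            # (batch, time, hidden_dim)
        scores = self.attn(h).squeeze(-1)              # (batch, time)
        weights = torch.softmax(scores, dim=-1)        # attention over time steps
        context = (weights.unsqueeze(-1) * h).sum(1)   # (batch, hidden_dim)
        return self.head(context)                      # fault-class logits

# Example usage with random data (batch of 8 sequences, 12 steps, 16 features):
model = AttnLSTMEncoder(input_dim=16, hidden_dim=32, num_classes=5)
logits = model(torch.randn(8, 12, 16))
print(logits.shape)  # torch.Size([8, 5])
```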
