
Recent Advances in Deep Learning and Machine Learning in Information Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 August 2026 | Viewed by 3518

Special Issue Editors


Guest Editor
Department of Computer Science and Information Technology, University of the District of Columbia, Washington, DC 20759, USA
Interests: data analytics; information visualization; uncertainty quantification in deep learning

Guest Editor
Division of Software, Hallym University, Chuncheon 24252, Republic of Korea
Interests: big data analytics; deep learning acceleration; GPU-based design of parallel applications

Guest Editor
Department of Management and Decision Sciences, Coastal Carolina University, Conway, SC 29528, USA
Interests: data analytics; visualization; management information systems; HCI

Special Issue Information

Dear Colleagues,

This Special Issue explores the transformative impact of deep learning and machine learning technologies on information systems across diverse domains. It aims to bridge theoretical advancements with practical applications, focusing on how these technologies enhance decision-making, automate processes, and create intelligent systems that adapt to complex organizational needs. This issue welcomes research that addresses challenges in implementing ML/DL within information systems, including ethical considerations, interpretability, data quality, and integration with existing enterprise architectures. We particularly encourage submissions that demonstrate novel applications, methodological innovations, and empirical results that advance our understanding of how these technologies reshape organizational capabilities and information management paradigms.

Suggested topics:

  • Neural networks in enterprise systems;
  • Explainable AI for business intelligence;
  • Deep learning for knowledge management;
  • Natural language processing in information retrieval;
  • Computer vision in business processes;
  • Reinforcement learning for decision support;
  • Federated learning for distributed information systems;
  • Transfer learning in enterprise applications;
  • AI governance and information security;
  • Responsible AI in organizational contexts;
  • ML/DL model deployment in IS infrastructures;
  • Predictive analytics for business forecasting;
  • Generative AI for content management systems;
  • Anomaly detection in information systems;
  • Human–AI collaboration in enterprise environments;
  • Multimodal learning for information integration;
  • Sentiment analysis for customer relationship management;
  • Big data analytics with deep learning;
  • Self-supervised learning for knowledge discovery.

Dr. Dong Hyun Jeong
Prof. Dr. Jeong-Gun Lee
Dr. Bong-Keun Jeong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • enterprise neural networks
  • explainable AI
  • knowledge management AI
  • business NLP
  • corporate computer vision
  • decision support systems
  • federated learning
  • AI governance
  • responsible AI
  • IS model deployment
  • business predictive analytics
  • generative business AI
  • information anomaly detection
  • human–AI collaboration
  • multimodal information systems
  • customer sentiment analysis
  • big data analytics
  • knowledge discovery
  • intelligent information systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research


27 pages, 2289 KB  
Article
Knowledge-Injected Transformer (KIT): A Modular Encoder–Decoder Architecture for Efficient Knowledge Integration and Reliable Question Answering
by Lyudmyla Kirichenko, Daniil Maksymenko, Olena Turuta, Sergiy Yakovlev and Oleksii Turuta
Appl. Sci. 2026, 16(3), 1601; https://doi.org/10.3390/app16031601 - 5 Feb 2026
Viewed by 533
Abstract
Decoder-only language models (LMs) store factual knowledge directly in their parameters, resulting in large model sizes, costly retraining when facts change, and limited controllability in knowledge-intensive information systems. These models frequently mix stored knowledge with user-provided context, which leads to hallucinations and reduces reliability. To address these limitations, we propose KIT (Knowledge-Injected Transformer), a modular encoder–decoder architecture that separates syntactic competence from factual knowledge representation. In KIT, the decoder is pre-trained on knowledge-agnostic narrative corpora to learn language structure, while the encoder is trained independently to compress structured facts into compact latent representations. During joint training, the decoder learns to decompress these representations and generate accurate, fact-grounded responses. The modular design provides three key benefits: (1) factual knowledge can be updated by retraining only the encoder, without modifying decoder weights; (2) strict domain boundaries can be enforced: the modular design provides a structural foundation for reducing knowledge-source confusion and hallucinations, though its effectiveness remains to be validated on standard hallucination benchmarks; and (3) interpretability is improved because each generated token can be traced back to encoder activations. A real-world experimental evaluation demonstrates that KIT achieves competitive answer accuracy while offering superior controllability and substantially lower update costs compared to decoder-only baselines. These results indicate that modular encoder–decoder architectures represent a promising and reliable alternative for explainable, adaptable, and domain-specific question answering in modern information systems.
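The abstract's first benefit — updating facts by retraining only the encoder while the decoder stays frozen — can be illustrated with a minimal toy sketch. This is not the paper's actual KIT implementation; the class names, dimensions, and training loop are all hypothetical stand-ins chosen only to make the separation of concerns concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

class KnowledgeEncoder:
    """Compresses structured facts into a compact latent vector.
    Retrained whenever facts change; the decoder is untouched."""
    def __init__(self, fact_dim, latent_dim):
        self.W = rng.normal(scale=0.1, size=(latent_dim, fact_dim))

    def encode(self, facts):
        return np.tanh(self.W @ facts)

    def update_facts(self, new_facts, target_latent, lr=0.1, steps=200):
        # Toy gradient descent on the encoder alone (squared-error loss).
        for _ in range(steps):
            z = np.tanh(self.W @ new_facts)
            grad = np.outer((z - target_latent) * (1 - z ** 2), new_facts)
            self.W -= lr * grad

class Decoder:
    """Stands in for a decoder pre-trained on knowledge-agnostic text;
    its weights stay frozen during knowledge updates."""
    def __init__(self, latent_dim, vocab_size):
        self.W = rng.normal(scale=0.1, size=(vocab_size, latent_dim))

    def generate(self, latent):
        logits = self.W @ latent
        return int(np.argmax(logits))  # next-token id (toy)

fact_dim, latent_dim, vocab = 8, 4, 16
enc = KnowledgeEncoder(fact_dim, latent_dim)
dec = Decoder(latent_dim, vocab)

decoder_before = dec.W.copy()
facts = rng.normal(size=fact_dim)  # stand-in for one structured fact record
enc.update_facts(facts, target_latent=np.full(latent_dim, 0.5))
token = dec.generate(enc.encode(facts))

# The knowledge update changed only encoder parameters.
assert np.array_equal(dec.W, decoder_before)
```

The point of the sketch is the final assertion: a fact update is an encoder-only optimization, so decoder weights are bit-identical before and after, which is what makes updates cheap in such a modular design.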

18 pages, 635 KB  
Article
A Federated Deep Learning Framework for Sleep-Stage Monitoring Using the ISRUC-Sleep Dataset
by Alba Amato
Appl. Sci. 2026, 16(2), 1073; https://doi.org/10.3390/app16021073 - 21 Jan 2026
Viewed by 520
Abstract
Automatic sleep-stage classification is a key component of long-term sleep monitoring and digital health applications. Although deep learning models trained on centralized datasets have achieved strong performance, their deployment in real-world healthcare settings is constrained by privacy, data-governance, and regulatory requirements. Federated learning (FL) addresses these issues by enabling decentralized training in which raw data remain local and only model parameters are exchanged; however, its effectiveness under realistic physiological heterogeneity remains insufficiently understood. In this work, we investigate a subject-level federated deep learning framework for sleep-stage classification using polysomnography data from the ISRUC-Sleep dataset. We adopt a realistic one subject = one client setting spanning three clinically distinct subgroups and evaluate a lightweight one-dimensional convolutional neural network (1D-CNN) under four training regimes: a centralized baseline and three federated strategies (FedAvg, FedProx, and FedBN), all sharing identical architecture and preprocessing. The centralized model, trained on a cohort with regular sleep architecture, achieves stable performance (accuracy 69.65%, macro-F1 0.6537). In contrast, naive FedAvg fails to converge under subject-level non-IID data (accuracy 14.21%, macro-F1 0.0601), with minority stages such as N1 and REM largely lost. FedProx yields only marginal improvement, while FedBN—by preserving client-specific batch-normalization statistics—achieves the best federated performance (accuracy 26.04%, macro-F1 0.1732) and greater stability across clients. These findings indicate that the main limitation of FL for sleep staging lies in physiological heterogeneity rather than model capacity, highlighting the need for heterogeneity-aware strategies in privacy-preserving sleep analytics.
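The two aggregation ideas contrasted in the abstract — FedAvg's sample-weighted parameter averaging and FedBN's retention of client-local batch-normalization statistics — can be sketched in a few lines. This is an illustrative toy, not the paper's framework; the function names, client sizes, and parameter shapes are invented for the example.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: average each parameter array across clients,
    weighted by the number of local training samples per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum((m / total) * w[i] for w, m in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

def fedbn_personalize(shared_params, client_bn_stats):
    """FedBN-style personalization: clients share the averaged non-BN
    parameters but each keeps its own batch-normalization statistics."""
    return [{"shared": shared_params, "bn": bn} for bn in client_bn_stats]

# Three toy clients ("one subject = one client"), unequal data sizes.
clients = [[np.full(2, 1.0)], [np.full(2, 2.0)], [np.full(2, 4.0)]]
sizes = [10, 30, 60]
global_w = fedavg(clients, sizes)  # weights 0.1, 0.3, 0.6 -> 3.1 per entry

models = fedbn_personalize(
    global_w, [{"mean": 0.1}, {"mean": -0.2}, {"mean": 0.4}]
)
```

The design difference is visible in the output: FedAvg produces one global parameter set, while the FedBN-style merge yields one model per client, identical in shared weights but differing in normalization statistics — the mechanism the study credits for FedBN's greater stability under subject-level heterogeneity.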

Review


20 pages, 339 KB  
Review
An Overview of Recent Advances in Natural Language Processing for Information Systems
by Douglas O’Shaughnessy
Appl. Sci. 2026, 16(2), 1122; https://doi.org/10.3390/app16021122 - 22 Jan 2026
Viewed by 1840
Abstract
The crux of information systems is the efficient storage of, and access to, useful data. This paper is an overview of work that has advanced such systems in recent years, primarily through machine learning and, specifically, deep learning methods. Situating progress in terms of classical pattern recognition techniques for text, we review computational methods to process spoken and written data. Digital assistants such as Siri, Cortana, and Google Now exploit large language models and encoder-only transformer-based systems such as BERT. Practical tasks include Machine Translation, Information Retrieval, Text Summarization, Question-Answering, Sentiment Analysis, Natural Language Generation, Named Entity Recognition, and Relation Extraction. Issues covered include post-training through alignment, parsing, and Reinforcement Learning.
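The classical pattern-recognition baseline that the review situates deep learning against can be made concrete with a minimal bag-of-words retrieval sketch: TF-IDF weighting plus cosine similarity, the standard pre-neural approach to information retrieval. The corpus and all names here are illustrative only.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a small corpus."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    def vec(toks):
        tf = Counter(toks)
        # Terms occurring in every document get zero weight (idf = 0).
        return {t: tf[t] * math.log(n / df[t]) for t in tf if df[t] < n}
    return [vec(toks) for toks in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "neural machine translation systems",
    "statistical machine translation models",
    "image classification with convolutional networks",
]
vecs = tfidf_vectors(docs)
# Rank the other documents by similarity to the first one.
scores = [cosine(vecs[0], v) for v in vecs]
best = max(range(1, len(docs)), key=lambda i: scores[i])
print(docs[best])  # the statistical-MT document is the nearest neighbour
```

Modern dense retrievers replace these sparse term-weight vectors with learned embeddings (e.g. from BERT-style encoders), but the ranking machinery — vector similarity over a document collection — is the same, which is why the review frames recent advances against this baseline.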