Machine Learning: Techniques, Industry Applications, Code Sharing, and Future Trends

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: 31 October 2025

Special Issue Editors


Guest Editor
1. Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Orange, NSW 2800, Australia
2. Rural Health Research Institute, Charles Sturt University, Orange, NSW 2800, Australia
Interests: artificial intelligence; uncertainty quantification; imbalanced data

Guest Editor
School of Computer Science and Engineering, Macau University of Science and Technology, Macau 999078, China
Interests: cloud computing; networks and distributed systems; blockchain; deep learning; natural language processing

Special Issue Information

Dear Colleagues,

This Special Issue aims to highlight the importance of transparency, reproducibility, and openness in machine learning research by encouraging solutions accompanied by publicly shared code. The goal is to promote best practices in sharing code and datasets, making it easier for the research community to reproduce and build upon existing work.

Prospective authors are encouraged to submit new concepts in accordance with the submission guidelines. We also encourage researchers to share their code in public repositories and to implement it on open platforms such as Kaggle and Code Ocean. Editors and reviewers will aim to strengthen the presented concepts through constructive feedback. This Special Issue can thereby foster technological advances and an improved understanding of the concepts among everyone involved, including readers.

Scope and Topics of Interest:

We invite original research papers, reviews, and case studies that demonstrate innovative applications of machine learning and provide public access to the codebases used in the research. Topics of interest include, but are not limited to, the following:

  • Open-source machine learning frameworks and tools;
  • New machine learning models with publicly available implementations;
  • Benchmarking studies with open-access datasets and code;
  • Case studies and applications of machine learning in various domains with shared code;
  • Best practices for reproducibility in machine learning research;
  • Public repositories and tools for collaborative machine learning development;
  • Studies on the impact of code sharing in AI research;
  • Efficient data preprocessing, feature extraction, and model evaluation using shared code;
  • Reusable machine learning pipelines and workflows.
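As a minimal illustration of the reproducibility practices invited above (a sketch under assumed tooling, not a prescribed workflow; the helper names `set_global_seed` and `file_sha256` are hypothetical), fixing random seeds and publishing dataset checksums is a common starting point for making shared code repeatable:

```python
# Minimal reproducibility sketch: fix seeds and checksum the data.
# Real submissions would also pin dependency versions (e.g., via a
# requirements.txt or environment.yml shared alongside the code).
import hashlib
import os
import random

import numpy as np  # assumed available in a typical ML environment


def set_global_seed(seed: int) -> None:
    """Seed Python, NumPy, and hash-based operations for repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)


def file_sha256(path: str) -> str:
    """Checksum a dataset file so readers can verify they use the same data."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


set_global_seed(42)
print("sample draw:", np.random.rand(3))  # identical on every seeded run
```

Publishing the seed and the dataset checksum in the repository README lets reviewers and readers confirm they are reproducing the same experiment.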

Dr. Hussain Mohammed Dipu Kabir
Dr. Subrota Kumar Mondal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • open-source machine learning
  • reproducible research
  • code sharing in AI
  • machine learning frameworks
  • publicly available datasets
  • transparent machine learning
  • benchmarking in machine learning
  • collaborative machine learning
  • open science in AI
  • code-based research validation
  • machine learning algorithms with code
  • open repositories in ML
  • GitHub for machine learning
  • computational experiment reproducibility
  • best practices in code sharing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

21 pages, 2758 KiB  
Article
Enhancing Cognitive Workload Classification Using Integrated LSTM Layers and CNNs for fNIRS Data Analysis
by Mehshan Ahmed Khan, Houshyar Asadi, Mohammad Reza Chalak Qazani, Adetokunbo Arogbonlo, Siamak Pedrammehr, Adnan Anwar, Hailing Zhou, Lei Wei, Asim Bhatti, Sam Oladazimi, Burhan Khan and Saeid Nahavandi
Computers 2025, 14(2), 73; https://doi.org/10.3390/computers14020073 - 17 Feb 2025
Abstract
Functional near-infrared spectroscopy (fNIRS) is employed as a non-invasive method to monitor functional brain activation by capturing changes in the concentrations of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR). Various machine learning classification techniques have been utilized to distinguish cognitive states. However, conventional machine learning methods, although simpler to implement, undergo a complex pre-processing phase before network training and demonstrate reduced accuracy due to inadequate data preprocessing. Additionally, previous research in cognitive load assessment using fNIRS has predominantly focused on differentiating between two levels of mental workload. These studies mainly aim to classify low and high levels of cognitive load or distinguish between easy and difficult tasks. To address these limitations associated with conventional methods, this paper conducts a comprehensive exploration of the impact of Long Short-Term Memory (LSTM) layers on the effectiveness of Convolutional Neural Networks (CNNs) within deep learning models. This is to address the issues related to spatial feature overfitting and the lack of temporal dependencies in CNNs discussed in the previous studies. By integrating LSTM layers, the model can capture temporal dependencies in the fNIRS data, allowing for a more comprehensive understanding of cognitive states. The primary objective is to assess how incorporating LSTM layers enhances the performance of CNNs. The experimental results presented in this paper demonstrate that the integration of LSTM layers with convolutional layers results in an increase in the accuracy of deep learning models from 97.40% to 97.92%. Full article
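The featured article integrates LSTM layers with convolutional layers so the model captures temporal dependencies in fNIRS sequences. As a rough, framework-free sketch of that general idea (not the authors' implementation; all layer sizes are illustrative), a 1D convolution can extract local features per time step before a minimal LSTM cell aggregates them over time:

```python
# Framework-free sketch of a CNN -> LSTM hybrid on multichannel time series.
# Sizes are illustrative only; the published model's architecture differs.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1D convolution + ReLU. x: (T, C_in); kernels: (C_out, K, C_in)."""
    c_out, k, _ = kernels.shape
    t_out = x.shape[0] - k + 1
    out = np.empty((t_out, c_out))
    for t in range(t_out):
        window = x[t:t + k]                      # (K, C_in)
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)                  # ReLU

def lstm_last_hidden(x, w, u, b, hidden):
    """Run a minimal LSTM over x (T, C); return the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = w @ x[t] + u @ h + b                 # gates stacked: (4*hidden,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)               # cell state carries memory
        h = o * np.tanh(c)
    return h

# Toy fNIRS-like input: 64 time steps, 8 channels (e.g., HbO/HbR pairs).
x = rng.standard_normal((64, 8))
feats = conv1d(x, rng.standard_normal((16, 5, 8)) * 0.1)   # (60, 16)
hidden = 32
w = rng.standard_normal((4 * hidden, 16)) * 0.1
u = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
h_final = lstm_last_hidden(feats, w, u, b, hidden)          # (32,)
print(feats.shape, h_final.shape)
```

The convolution summarizes short local windows of the signal, while the recurrent state lets later predictions depend on the whole sequence, which is the temporal-dependency role the abstract attributes to the added LSTM layers.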
