Special Issue "Feature Papers for AI"

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 31 December 2022

Special Issue Editors

Prof. Dr. Kenji Suzuki
Guest Editor
Artificial Intelligence in Biomedical Imaging Lab (AIBI Lab), Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Tokyo 152-8550, Japan
Interests: machine learning; deep learning; artificial intelligence; medical image analysis; medical imaging; computer-aided diagnosis; signal and image processing; computer vision
Dr. José Manuel Ferreira Machado
Guest Editor
Computer Science and Technology, ALGORITMI Research Centre, University of Minho, 4710-057 Braga, Portugal
Interests: biomedical informatics; electronic health records; interoperability; databases; business intelligence; applied artificial intelligence

Special Issue Information

Dear Colleagues,

This Special Issue aims to collect high-quality reviews and original papers from across the artificial intelligence (AI) research fields. We encourage researchers working within the journal’s scope (https://www.mdpi.com/journal/ai/about) to contribute papers highlighting the latest developments in their research field, or to invite relevant experts and colleagues to do so. The topics of this Special Issue include, but are not limited to, the following:

  • machine and deep learning;
  • knowledge reasoning and discovery;
  • automated planning and scheduling;
  • natural language processing and recognition;
  • computer vision;
  • intelligent robotics;
  • artificial neural networks;
  • artificial general intelligence;
  • applications of AI.

You are welcome to send short proposals for feature paper submissions to the Editorial Office ([email protected]) before formally submitting your manuscript. Selected planned papers can be published in full open access form, free of charge, if they are accepted after blind peer review.

Prof. Dr. Kenji Suzuki
Prof. Dr. José Machado
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research


Article
Robust and Lightweight System for Gait-Based Gender Classification toward Viewing Angle Variations
AI 2022, 3(2), 538-553; https://doi.org/10.3390/ai3020031 - 14 Jun 2022
Abstract
In computer vision applications, gait-based gender classification is a challenging task, as a person may walk at various angles with respect to the camera viewpoint. At some viewing angles, the person’s limb movement can be occluded from the camera, preventing the perception of gait-based features. To solve this problem, this study proposes a robust and lightweight system for gait-based gender classification. It uses a gait energy image (GEI) to represent the gait of an individual. A discrete cosine transform (DCT) is applied to the GEI to generate a gait-based feature vector, and this DCT feature vector is then fed to an XGBoost classifier to perform gender classification. To improve the classification results, the XGBoost parameters are tuned. Finally, the results are compared with other state-of-the-art approaches. The performance of the proposed system is evaluated on the OU-MVLP dataset. The experimental results show a mean CCR (correct classification rate) of 95.33% for gender classification. The results obtained from the various viewpoints of OU-MVLP illustrate the robustness of the proposed system for gait-based gender classification.
(This article belongs to the Special Issue Feature Papers for AI)
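
As a concrete reading of the pipeline above, here is a minimal, hypothetical Python sketch of the GEI → DCT → XGBoost chain; the random stand-in data, the number of retained DCT coefficients, and the XGBoost parameters are assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.fft import dct
from xgboost import XGBClassifier

def dct_features(gei, k=15):
    """Apply a 2-D DCT to a gait energy image and keep the k x k
    low-frequency block of coefficients as the feature vector."""
    coeffs = dct(dct(gei, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

rng = np.random.default_rng(0)
geis = rng.random((100, 88, 88))        # stand-in GEIs (e.g., 88 x 88 pixels)
labels = rng.integers(0, 2, size=100)   # stand-in binary gender labels

X = np.stack([dct_features(g) for g in geis])
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, labels)
print("predictions:", clf.predict(X[:5]))
```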

Article
Can Interpretable Reinforcement Learning Manage Prosperity Your Way?
AI 2022, 3(2), 526-537; https://doi.org/10.3390/ai3020030 - 13 Jun 2022
Abstract
Personalisation of products and services is fast becoming the driver of success in banking and commerce. Machine learning holds the promise of gaining a deeper understanding of, and tailoring to, customers’ needs and preferences. Whereas traditional solutions to financial decision problems frequently rely on model assumptions, reinforcement learning is able to exploit large amounts of data to improve customer modelling and decision-making in complex financial environments with fewer assumptions. Model explainability and interpretability present challenges from a regulatory perspective, which demands transparency for acceptance; they also offer the opportunity for improved insight into and understanding of customers. Post hoc approaches are typically used for explaining pretrained reinforcement learning models. Building on our previous modelling of customer spending behaviour, we adapt our recent reinforcement learning algorithm, which intrinsically characterizes desirable behaviours, and transition to the problem of prosperity management. We train inherently interpretable reinforcement learning agents to give investment advice aligned with prototype financial personality traits, which are combined to make a final recommendation. We observe that the trained agents’ advice adheres to their intended characteristics, that they learn the value of compound growth and, without any explicit reference, the notion of risk, and that they show improved policy convergence.
(This article belongs to the Special Issue Feature Papers for AI)
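
To make the combination step tangible, the toy sketch below blends per-trait agents' portfolio advice by a customer's trait profile; the asset classes, allocations, and weights are invented, and the paper's agents are of course trained by reinforcement learning rather than hard-coded.

```python
import numpy as np

assets = ["cash", "bonds", "equities"]

# Allocation each prototype-trait agent might recommend (rows sum to 1).
agent_advice = {
    "cautious":    np.array([0.50, 0.40, 0.10]),
    "balanced":    np.array([0.20, 0.40, 0.40]),
    "adventurous": np.array([0.05, 0.15, 0.80]),
}

# Hypothetical customer affinity to each prototype trait (sums to 1).
trait_profile = {"cautious": 0.2, "balanced": 0.5, "adventurous": 0.3}

# Final recommendation: convex combination of the agents' advice.
final = sum(trait_profile[t] * advice for t, advice in agent_advice.items())
for asset, weight in zip(assets, final):
    print(f"{asset}: {weight:.2f}")
```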

Article
Navigation Map-Based Artificial Intelligence
AI 2022, 3(2), 434-464; https://doi.org/10.3390/ai3020026 - 12 May 2022
Abstract
A biologically inspired cognitive architecture is described which uses navigation maps (i.e., spatial locations of objects) as its main data elements. The navigation maps are also used to represent higher-level concepts as well as to direct operations to perform on other navigation maps. Incoming sensory information is mapped to local sensory navigation maps, which are in turn matched against stored multisensory maps and mapped onto the best-matching multisensory navigation map. Enhancements of the biologically inspired feedback pathways allow the intermediate results of operations performed on the best-matched multisensory navigation map to be fed back, temporarily stored, and re-processed in the next cognitive cycle. This allows the exploration and generation of cause-and-effect behavior. In re-processing these intermediate results, navigation maps can, through core analogical mechanisms, lead to other navigation maps which offer an improved solution to many routine problems the architecture is exposed to. Given that the architecture is brain-inspired, analogical processing may also form a key mechanism in the human brain, consistent with psychological evidence. Similarly, for conventional artificial intelligence systems, analogical processing as a core mechanism may allow enhanced performance.
(This article belongs to the Special Issue Feature Papers for AI)
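
As a rough, hypothetical illustration of the architecture's central data element, the sketch below models a navigation map as a set of object locations and matches an incoming sensory map to the closest stored multisensory map; the dissimilarity measure and map contents are assumptions for demonstration only.

```python
import numpy as np

class NavigationMap:
    """A navigation map as a set of labelled object locations."""
    def __init__(self, name, objects):
        self.name = name          # e.g., "kitchen"
        self.objects = objects    # {label: (x, y, z)}

    def distance(self, other):
        """Crude dissimilarity: location gaps on shared objects plus a
        fixed penalty for each object present in only one map."""
        shared = set(self.objects) & set(other.objects)
        unshared = len(set(self.objects) ^ set(other.objects))
        gap = sum(np.linalg.norm(np.subtract(self.objects[o], other.objects[o]))
                  for o in shared)
        return gap + 10.0 * unshared

def best_match(sensory_map, multisensory_maps):
    """Return the stored multisensory map closest to the sensory input."""
    return min(multisensory_maps, key=sensory_map.distance)

kitchen = NavigationMap("kitchen", {"cup": (1, 2, 0), "table": (0, 0, 0)})
office = NavigationMap("office", {"desk": (0, 0, 0), "lamp": (2, 1, 1)})
seen = NavigationMap("input", {"cup": (1.1, 2.0, 0), "table": (0, 0.2, 0)})
print(best_match(seen, [kitchen, office]).name)  # -> kitchen
```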

Article
Distributed Big Data Analytics Method for the Early Prediction of the Neonatal 5-Minute Apgar Score before or during Birth and Ranking the Risk Factors from a National Dataset
AI 2022, 3(2), 371-389; https://doi.org/10.3390/ai3020023 - 21 Apr 2022
Cited by 1
Abstract
One-minute and five-minute Apgar scores are good measures for assessing the health status of newborns. A five-minute Apgar score can predict the risk of some disorders such as asphyxia, encephalopathy, cerebral palsy, and ADHD. Early prediction of the Apgar score before or during birth, together with ranking of the risk factors, can help manage and reduce the probability of births producing low Apgar scores. Therefore, the main aim of this study is the early prediction of the neonatal 5-min Apgar score before or during birth and the ranking of its risk factors in a large national dataset using big data analytics methods. A large dataset including 60 features describing birth cases registered in the Iranian maternal and neonatal (IMAN) registry from 1 April 2016 to 1 January 2017 was collected. A distributed big data analytics method for the early prediction of the neonatal Apgar score and a distributed big data feature-ranking method for ranking its predictors are proposed, providing the ability to identify birth cases with low Apgar scores by analyzing features that describe prenatal properties before or during birth. The top 14 features were identified and used for training the classifiers. Our proposed stacked ensemble outperforms the compared classifiers, with an accuracy of 99.37 ± 1.06, precision of 99.37 ± 1.06, recall of 99.50 ± 0.61, and F-score of 99.41 ± 0.70 (95% confidence interval) in predicting low, moderate, and high 5-min Apgar scores. Among the top predictors, fetal height around the baby’s head and fetal weight denote fetal growth status; fetal growth restrictions can lead to a low or moderate 5-min Apgar score. Moreover, hospital type and medical science university are healthcare-system-related factors that can be managed by improving the quality of healthcare services across the country.
(This article belongs to the Special Issue Feature Papers for AI)
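
The stacked-ensemble idea can be illustrated with a short scikit-learn sketch on synthetic stand-ins for the 14 selected features; the base learners and meta-learner here are assumptions and not the authors' exact distributed configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 14 selected prenatal features and the
# three Apgar classes (low / moderate / high).
X, y = make_classification(n_samples=2000, n_features=14, n_informative=10,
                           n_classes=3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeClassifier(max_depth=8, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```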

Article
Performance Evaluation of Deep Neural Network Model for Coherent X-ray Imaging
AI 2022, 3(2), 318-330; https://doi.org/10.3390/ai3020020 - 18 Apr 2022
Abstract
We present a supervised deep neural network model for phase retrieval in coherent X-ray imaging and evaluate its performance. A supervised deep-learning-based approach requires a large amount of pre-training data. In most proposed models, the various experimental uncertainties are not considered when the input dataset, corresponding to the diffraction image in reciprocal space, is generated. We explore the performance of a deep neural network model trained on ideal-quality data when it faces realistically corrupted diffraction images. We focus on three aspects of data quality: detection dynamic range, degree of coherence, and noise level. The investigation shows that the deep neural network model is robust to a limited dynamic range and to partially coherent X-ray illumination in comparison with traditional phase retrieval, although it is more sensitive to noise than iteration-based methods. This study suggests a baseline capability of the supervised deep neural network model for coherent X-ray imaging in preparation for deployment to laboratories where diffraction images are acquired.
(This article belongs to the Special Issue Feature Papers for AI)
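
A hypothetical sketch of the kind of corruption study described here: degrade an ideal diffraction pattern with a finite photon budget, shot noise, and detector saturation before scoring a trained network. The flux and saturation values are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(pattern, flux=1e5, saturation=4095):
    """Degrade an ideal diffraction pattern with a finite photon budget,
    Poisson (shot) noise, and 12-bit detector saturation."""
    counts = pattern / pattern.sum() * flux      # scale to the photon budget
    counts = rng.poisson(counts).astype(float)   # shot noise
    return np.minimum(counts, saturation)        # limited dynamic range

# Toy "ideal" pattern: intensity of the Fourier transform of a random object.
ideal = np.abs(np.fft.fftshift(np.fft.fft2(rng.random((64, 64))))) ** 2
degraded = corrupt(ideal)
# model.predict(degraded[None, ..., None])  # then score the trained network
```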

Article
Detection in Adverse Weather Conditions for Autonomous Vehicles via Deep Learning
AI 2022, 3(2), 303-317; https://doi.org/10.3390/ai3020019 - 18 Apr 2022
Abstract
Weather detection systems (WDS) have an indispensable role in supporting the decisions of autonomous vehicles, especially in severe and adverse circumstances. With deep learning techniques, autonomous vehicles can effectively identify outdoor weather conditions and thus make appropriate decisions to adapt easily to new conditions and environments. This paper proposes a deep learning (DL)-based detection framework to categorize weather conditions for autonomous vehicles in adverse or normal situations. The proposed framework leverages transfer learning techniques along with a powerful Nvidia GPU to characterize the performance of three deep convolutional neural networks (CNNs): SqueezeNet, ResNet-50, and EfficientNet. The developed models were evaluated on two up-to-date weather imaging datasets, DAWN2020 and MCWRD2018. The combined dataset provides six weather classes: cloudy, rainy, snowy, sandy, shine, and sunrise. Experimentally, all models demonstrated superior classification capacity, with the best performance metrics recorded for the ResNet-50-based weather detection model: 98.48%, 98.51%, and 98.41% for detection accuracy, precision, and sensitivity, respectively. In addition, a short detection time was noted for the ResNet-50-based model, averaging 5 ms per inference step on the GPU. Finally, comparison with other related state-of-the-art models showed the superiority of our model, which improved classification accuracy for the six weather conditions by 0.5–21%. Consequently, the proposed framework can be effectively implemented in real-time environments to provide decisions on demand for autonomous vehicles with quick, precise detection capacity.
(This article belongs to the Special Issue Feature Papers for AI)
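
A minimal transfer-learning sketch in the spirit of this framework: load an ImageNet-pretrained ResNet-50 from torchvision, freeze the backbone, and attach a six-way weather classification head. The training loop, data pipeline, and hyperparameters are omitted here and would need to be supplied.

```python
import torch
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13

classes = ["cloudy", "rainy", "snowy", "sandy", "shine", "sunrise"]

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():              # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(classes))  # new 6-way head

x = torch.randn(1, 3, 224, 224)           # stand-in for a weather image
logits = model(x)
print("predicted class:", classes[logits.argmax(dim=1).item()])
```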

Article
A Technology Acceptance Model Survey of the Metaverse Prospects
AI 2022, 3(2), 285-302; https://doi.org/10.3390/ai3020018 - 11 Apr 2022
Abstract
The technology acceptance model is a widely used model for investigating whether users will accept or refuse a new technology. The Metaverse is a 3D world based on virtual reality simulation to express real life, and it can be considered the next generation of the internet. In this paper, we investigate variables that may affect users’ acceptance of Metaverse technology and the relationships between those variables by applying an extended technology acceptance model covering several factors (namely self-efficacy, social norm, perceived curiosity, perceived pleasure, and price). The goal of understanding these factors is to learn how Metaverse developers might enhance this technology to meet users’ expectations and help users interact with it better. To this end, a sample of 302 educated participants of different ages was chosen to answer an online Likert-scale survey ranging from 1 (strongly disagree) to 5 (strongly agree). The study found that, first, self-efficacy, perceived curiosity, and perceived pleasure positively influence perceived ease of use. Second, social norms, perceived pleasure, and perceived ease of use positively influence perceived usefulness. Third, perceived ease of use and perceived usefulness positively influence attitude towards Metaverse technology use, which in turn influences behavioral intention. Fourth, the relationship between price and behavioral intention was significant and negative. Finally, the study found that participants younger than 20 years were the most accepting of Metaverse technology.
(This article belongs to the Special Issue Feature Papers for AI)
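
One TAM path from this study, such as predicting perceived ease of use from self-efficacy, curiosity, and pleasure, could be tested with ordinary least squares as sketched below; the survey data here are synthetic, and the abstract does not specify the study's actual analysis method.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 302  # sample size reported in the abstract

df = pd.DataFrame({
    "self_efficacy": rng.integers(1, 6, n),   # 1-5 Likert construct scores
    "curiosity":     rng.integers(1, 6, n),
    "pleasure":      rng.integers(1, 6, n),
})
# Synthetic outcome loosely tied to the predictors, for illustration only.
df["ease_of_use"] = (0.4 * df["self_efficacy"] + 0.3 * df["curiosity"]
                     + 0.2 * df["pleasure"] + rng.normal(0, 1, n))

X = sm.add_constant(df[["self_efficacy", "curiosity", "pleasure"]])
print(sm.OLS(df["ease_of_use"], X).fit().summary())
```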

Article
Enhancement of Partially Coherent Diffractive Images Using Generative Adversarial Network
AI 2022, 3(2), 274-284; https://doi.org/10.3390/ai3020017 - 11 Apr 2022
Abstract
We present a deep-learning-based generative model for the enhancement of partially coherent diffractive images. In lensless coherent diffractive imaging, highly coherent X-ray illumination is required to image an object at high resolution. Non-ideal experimental conditions result in partially coherent X-ray illumination, lead to imperfections in the coherent diffractive images recorded on a detector, and ultimately limit the capability of lensless coherent diffractive imaging. Previous approaches, which rely on the coherence properties of the illumination, require preliminary experiments or expensive computations. In this article, we propose a generative adversarial network (GAN) model to enhance the visibility of fringes in partially coherent diffractive images. Unlike previous approaches, the model is trained to restore the latent sharp features from blurred input images without determining the coherence properties of the illumination. We demonstrate that the GAN model performs well with both coherent diffractive imaging and ptychography, and it can be applied to a wide range of imaging techniques relying on the phase retrieval of coherent diffraction patterns.
(This article belongs to the Special Issue Feature Papers for AI)
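
Below is a heavily simplified, pix2pix-style training step for the kind of GAN described, mapping blurred (partially coherent) patterns to sharp targets with an adversarial plus L1 objective; the toy architectures and loss weighting are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))             # toy generator
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 64 * 64, 1))   # toy critic
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

blurred = torch.rand(4, 1, 64, 64)  # partially coherent (input) patterns
sharp = torch.rand(4, 1, 64, 64)    # fully coherent (target) patterns
fake = G(blurred)

# Discriminator step: real (input, target) pairs vs. generated pairs.
d_loss = (bce(D(torch.cat([blurred, sharp], 1)), torch.ones(4, 1)) +
          bce(D(torch.cat([blurred, fake.detach()], 1)), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the critic while staying close to the target (L1).
g_loss = (bce(D(torch.cat([blurred, fake], 1)), torch.ones(4, 1)) +
          100 * nn.functional.l1_loss(fake, sharp))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```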

Article
Evolution towards Smart and Software-Defined Internet of Things
AI 2022, 3(1), 100-123; https://doi.org/10.3390/ai3010007 - 21 Feb 2022
Cited by 1
Abstract
The Internet of Things (IoT) is a mesh network of interconnected objects with unique identifiers that can transmit data and communicate with one another without the need for human intervention. The IoT has brought the future closer to us. It has opened up new and vast domains for connecting not only people but also all kinds of simple objects and phenomena all around us. With billions of heterogeneous devices connected to the Internet, the network architecture must evolve to accommodate the expected increase in data generation while also improving the security and efficiency of connectivity. Traditional IoT architectures are primitive and incapable of extending functionality and productivity to the desired levels of the IoT infrastructure. Software-Defined Networking (SDN) and virtualization are two promising technologies for cost-effectively handling the scale and versatility required by the IoT. In this paper, we discuss traditional IoT networks and the need for SDN and Network Function Virtualization (NFV), followed by an analysis of SDN and NFV solutions for implementing the IoT in various ways.
(This article belongs to the Special Issue Feature Papers for AI)
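
The SDN separation of control and data planes discussed in this paper can be caricatured in a few lines: a controller installs match-to-action flow rules, and switches only consult that table, punting unmatched traffic back to the controller. Real SDN stacks (e.g., OpenFlow-based) are far richer; this is conceptual only.

```python
class Controller:
    """Control plane: decides how traffic is handled."""
    def __init__(self):
        self.flow_table = {}  # (src, dst) -> action

    def install_rule(self, src, dst, action):
        self.flow_table[(src, dst)] = action

class Switch:
    """Data plane: looks rules up; unmatched traffic goes to the controller."""
    def __init__(self, controller):
        self.controller = controller

    def forward(self, src, dst):
        return self.controller.flow_table.get((src, dst), "send_to_controller")

ctrl = Controller()
ctrl.install_rule("sensor-1", "gateway", "forward:port-2")
sw = Switch(ctrl)
print(sw.forward("sensor-1", "gateway"))  # forward:port-2
print(sw.forward("sensor-9", "gateway"))  # send_to_controller (table miss)
```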

Review


Review
A Review of the Potential of Artificial Intelligence Approaches to Forecasting COVID-19 Spreading
AI 2022, 3(2), 493-511; https://doi.org/10.3390/ai3020028 - 19 May 2022
Abstract
The spread of SARS-CoV-2 can be considered one of the most complicated patterns, with a large number of uncertainties and nonlinearities. Therefore, analysis and prediction of the distribution of this virus are among the most challenging problems, affecting the planning and management of its impacts. Although different vaccines and drugs have been approved, produced, and distributed one after another, several new fast-spreading SARS-CoV-2 variants have been detected. This is why numerous techniques based on artificial intelligence (AI) have recently been designed or redeveloped to forecast these variants more effectively. The focus of such methods is on deep learning (DL) and machine learning (ML), which can appropriately forecast nonlinear trends in epidemiological issues. This short review aims to summarize and evaluate the trustworthiness and performance of some important AI-empowered approaches used for predicting the spread of COVID-19. Sixty-five preprints, peer-reviewed papers, conference proceedings, and book chapters published in 2020 were reviewed. Our criterion for including or excluding references was the performance of the methods as reported in those documents. The results revealed that although the methods discussed in this review have suitable potential to predict the spread of COVID-19, there are still weaknesses and drawbacks that fall in the domain of future research and scientific endeavors.
(This article belongs to the Special Issue Feature Papers for AI)
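
As a generic, hypothetical example of the family of forecasters this review surveys, the sketch below trains a sliding-window neural regressor to predict the next day's case count from the previous 14 days; the data are synthetic and the model is not taken from any reviewed paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
cases = np.abs(np.cumsum(rng.normal(20, 5, 200)))  # synthetic daily cases

window = 14
X = np.array([cases[i:i + window] for i in range(len(cases) - window)])
y = cases[window:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-20], y[:-20])               # hold out the last 20 days
mae = np.abs(model.predict(X[-20:]) - y[-20:]).mean()
print(f"held-out MAE: {mae:.1f}")
```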

Review
Hybrid Deep Learning Techniques for Predicting Complex Phenomena: A Review on COVID-19
AI 2022, 3(2), 416-433; https://doi.org/10.3390/ai3020025 - 6 May 2022
Cited by 1
Abstract
Complex phenomena share some common characteristics, such as nonlinearity, complexity, and uncertainty. In these phenomena, components typically interact with each other, and one part of the system may affect other parts, or vice versa. Accordingly, the human brain, the Earth’s global climate, the spreading of viruses, economic organizations, and some engineering systems such as transportation systems and power grids can be categorized as such phenomena. Since analytical approaches and AI methods each have specific strengths in solving complex problems, a combination of these techniques can lead to new hybrid methods with considerable performance. This is why several studies have recently been conducted to benefit from such combinations to predict the spreading of COVID-19 and its dynamic behavior. In this review, 80 peer-reviewed articles, book chapters, conference proceedings, and preprints published in 2020 that employ hybrid methods for forecasting the spreading of COVID-19 have been aggregated and reviewed. These documents were extracted from Google Scholar, and many of them are indexed in the Web of Science. Since there were many publications on this topic, the most relevant and effective techniques, including statistical models and deep learning (DL) or machine learning (ML) approaches, have been surveyed. The main aim of this research is to describe, summarize, and categorize these techniques, noting their restrictions, so that they can serve as trustworthy references for scientists, researchers, and readers making an informed choice of the best possible method for their academic needs. Nevertheless, considering that many of these techniques have been used for the first time and need further evaluation, we do not recommend any of them as an ideal, universally applicable method. Our study has shown that these hybrid methods can combine the robustness and reliability of statistical methods with the computational power of DL ones.
(This article belongs to the Special Issue Feature Papers for AI)
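
One hybrid pattern covered by such surveys combines a statistical model with an ML residual corrector; the sketch below fits an ARIMA model to a synthetic case-count series and a gradient-boosting regressor to its residuals, summing the two for the final forecast. The orders, lags, and data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
cases = np.cumsum(rng.poisson(50, 120)).astype(float)  # synthetic case counts

# Statistical stage: ARIMA captures the linear trend.
arima = ARIMA(cases, order=(2, 1, 0)).fit()
resid = arima.resid

# ML stage: predict today's residual from the previous 7 residuals.
lag = 7
X = np.array([resid[i - lag:i] for i in range(lag, len(resid))])
y = resid[lag:]
ml = GradientBoostingRegressor(random_state=0).fit(X, y)

# Hybrid forecast: statistical forecast plus predicted residual.
hybrid_next = arima.forecast(1)[0] + ml.predict(resid[-lag:][None])[0]
print(f"hybrid one-step forecast: {hybrid_next:.1f}")
```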
