Special Issue "Feature Papers for AI"

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 31 December 2023 | Viewed by 100603

Special Issue Editors

Artificial Intelligence in Biomedical Imaging Lab (AIBI Lab), Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Tokyo 152-8550, Japan
Interests: machine learning; deep learning; artificial intelligence; medical image analysis; medical imaging; computer-aided diagnosis; signal and image processing; computer vision

Special Issue Information

Dear Colleagues,

This Special Issue aims to collect high-quality reviews and original papers in the relevant artificial intelligence (AI) research fields. We encourage researchers from various fields within the journal’s scope (https://www.mdpi.com/journal/ai/about) to contribute papers that highlight the latest developments in their research field, or to invite relevant experts and colleagues to do so. The topics of this Special Issue include, but are not limited to, the following:

  • machine and deep learning;
  • knowledge reasoning and discovery;
  • automated planning and scheduling;
  • natural language processing and recognition;
  • computer vision;
  • intelligent robotics;
  • artificial neural networks;
  • artificial general intelligence;
  • applications of AI.

You are welcome to send short proposals for feature paper submissions to the Editorial Office ([email protected]) before the formal submission of your manuscript. Selected planned papers can be published in full open access form, free of charge, if they are accepted after blind peer review.

Prof. Dr. Kenji Suzuki
Dr. José Machado
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (30 papers)


Research


Article
Applying Few-Shot Learning for In-the-Wild Camera-Trap Species Classification
AI 2023, 4(3), 574-597; https://doi.org/10.3390/ai4030031 - 31 Jul 2023
Viewed by 653
Abstract
Few-shot learning (FSL) describes the challenge of learning a new task using a minimum amount of labeled data, and we have observed significant progress made in this area. In this paper, we explore the effectiveness of the FSL theory by considering a real-world problem where labels are hard to obtain. To assist a large study on chimpanzee hunting activities, we aim to classify various animal species that appear in our in-the-wild camera traps located in Senegal. Using the philosophy of FSL, we aim to train an FSL network to learn to separate animal species using large public datasets and implement the network on our data with its novel species/classes and unseen environments, needing only to label a few images per new species. Here, we first discuss constraints and challenges caused by having in-the-wild uncurated data, which are often not addressed in benchmark FSL datasets. Considering these new challenges, we create two experiments and corresponding evaluation metrics to determine a network’s usefulness in a real-world implementation scenario. We then compare results from various FSL networks, and describe how factors may affect a network’s potential real-world usefulness. We consider network design factors such as distance metrics or extra pre-training, and examine their roles in a real-world implementation setting. We also consider additional factors such as support set selection and ease of implementation, which are usually ignored when a benchmark dataset has been established. Full article
(This article belongs to the Special Issue Feature Papers for AI)
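The metric-based networks compared in work like this typically average the few labeled support embeddings into one prototype per class and assign a query to the nearest prototype. The following is a minimal, hedged sketch in that spirit (toy 2-D vectors stand in for a trained feature extractor; `prototypes`, `classify`, and the species names are illustrative, not the authors' code):

```python
from statistics import mean

def prototypes(support):
    """Average the embedding vectors of each class's labeled support examples."""
    return {label: [mean(dim) for dim in zip(*vecs)] for label, vecs in support.items()}

def classify(query, protos):
    """Assign the query embedding to the nearest class prototype (squared Euclidean)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: dist(query, protos[label]))

# Toy support set: two labeled "embeddings" per novel species.
support = {
    "chimpanzee": [[0.9, 0.1], [1.1, 0.0]],
    "baboon": [[0.1, 0.9], [0.0, 1.1]],
}
protos = prototypes(support)
print(classify([1.0, 0.2], protos))  # nearest to the chimpanzee prototype
```

Adding a new species then only requires embedding and averaging a handful of newly labeled images, which is why a few labels per class can suffice.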

Article
Improving Alzheimer’s Disease and Brain Tumor Detection Using Deep Learning with Particle Swarm Optimization
AI 2023, 4(3), 551-573; https://doi.org/10.3390/ai4030030 - 28 Jul 2023
Viewed by 1523
Abstract
Convolutional Neural Networks (CNNs) have exhibited remarkable potential in effectively tackling the intricate task of classifying MRI images, specifically in Alzheimer’s disease detection and brain tumor identification. While CNNs optimize their parameters automatically through training processes, finding the optimal values for these parameters can still be a challenging task due to the complexity of the search space and the potential for suboptimal results. Consequently, researchers often encounter difficulties determining the ideal parameter settings for CNNs. This challenge necessitates using trial-and-error methods or expert judgment, as the search for the best combination of parameters involves exploring a vast space of possibilities. Despite the automatic optimization during training, the process does not guarantee finding the globally-optimal parameter values. Hence, researchers often rely on iterative experimentation and expert knowledge to fine-tune these parameters and maximize CNN performance. This poses a significant obstacle in developing real-world applications that leverage CNNs for MRI image analysis. This paper presents a new hybrid model that combines the Particle Swarm Optimization (PSO) algorithm with CNNs to enhance detection and classification capabilities. Our method utilizes the PSO algorithm to determine the optimal configuration of CNN hyper-parameters. Subsequently, these optimized parameters are applied to the CNN architectures for classification. As a result, our hybrid model exhibits improved prediction accuracy for brain diseases while reducing the loss of function value. To evaluate the performance of our proposed model, we conducted experiments using three benchmark datasets. Two datasets were utilized for Alzheimer’s disease: the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and an international dataset from Kaggle. The third dataset focused on brain tumors. 
The experimental assessment demonstrated the superiority of our proposed model, achieving unprecedented accuracy rates of 98.50%, 98.83%, and 97.12% for the datasets mentioned earlier, respectively. Full article
(This article belongs to the Special Issue Feature Papers for AI)
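The PSO-over-hyperparameters loop described above can be illustrated independently of any CNN. Below is a hedged, minimal particle swarm minimizing a toy quadratic that stands in for validation loss over two made-up hyperparameters (learning rate, dropout); the function name `pso` and all constants are illustrative, not the paper's configuration:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.5, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm: each particle tracks its personal best point, and
    the swarm shares a global best that steers every velocity update."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate back into its hyper-parameter range.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: pretend validation loss is minimized at lr=0.01, dropout=0.3.
loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.3) ** 2
best, val = pso(loss, [(0.0001, 0.1), (0.0, 0.9)])
```

In the paper's setting, `objective` would train a CNN with the candidate hyperparameters and return a validation loss, making each evaluation expensive; that cost is the usual argument for small swarms and few iterations.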

Article
Federated Learning for IoT Intrusion Detection
AI 2023, 4(3), 509-530; https://doi.org/10.3390/ai4030028 - 24 Jul 2023
Viewed by 1234
Abstract
The number of Internet of Things (IoT) devices has increased considerably in the past few years, resulting in a large growth of cyber attacks on IoT infrastructure. As part of a defense in depth approach to cybersecurity, intrusion detection systems (IDSs) have acquired a key role in attempting to detect malicious activities efficiently. Most modern approaches to IDS in IoT are based on machine learning (ML) techniques. The majority of these are centralized, which implies the sharing of data from source devices to a central server for classification. This presents potentially crucial issues related to privacy of user data as well as challenges in data transfers due to their volumes. In this article, we evaluate the use of federated learning (FL) as a method to implement intrusion detection in IoT environments. FL is an alternative, distributed method to centralized ML models, which has seen a surge of interest in IoT intrusion detection recently. In our implementation, we evaluate FL using a shallow artificial neural network (ANN) as the shared model and federated averaging (FedAvg) as the aggregation algorithm. The experiments are completed on the ToN_IoT and CICIDS2017 datasets in binary and multiclass classification. Classification is performed by the distributed devices using their own data. No sharing of data occurs among participants, maintaining data privacy. When compared against a centralized approach, results have shown that a collaborative FL IDS can be an efficient alternative, in terms of accuracy, precision, recall and F1-score, making it a viable option as an IoT IDS. Additionally, with these results as baseline, we have evaluated alternative aggregation algorithms, namely FedAvgM, FedAdam and FedAdagrad, in the same setting by using the Flower FL framework. The results from the evaluation show that, in our scenario, FedAvg and FedAvgM tend to perform better compared to the two adaptive algorithms, FedAdam and FedAdagrad. Full article
(This article belongs to the Special Issue Feature Papers for AI)
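The FedAvg aggregation step used as the baseline above can be stated in a few lines: the server replaces the shared model with the example-count-weighted mean of the clients' locally trained parameters. A minimal sketch (flat parameter lists stand in for real ANN weight tensors):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameter vector by its share
    of the total training examples, then sum."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
            for d in range(dim)]

# Two clients holding 100 and 300 samples; the larger client dominates the update.
updated = fedavg([[1.0, 0.0], [2.0, 4.0]], [100, 300])
print(updated)  # [1.75, 3.0]
```

The alternative strategies evaluated later (FedAvgM, FedAdam, FedAdagrad) keep this weighted mean but apply server-side momentum or adaptive updates on top of it.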

Article
Training Artificial Neural Networks Using a Global Optimization Method That Utilizes Neural Networks
AI 2023, 4(3), 491-508; https://doi.org/10.3390/ai4030027 - 20 Jul 2023
Viewed by 965
Abstract
Perhaps one of the best-known machine learning models is the artificial neural network, in which a number of parameters must be adjusted to learn a wide range of practical problems from areas such as physics, chemistry, and medicine. Such problems can be reduced to pattern recognition problems and then modeled with artificial neural networks, whether they are classification or regression problems. To achieve their goal, neural networks must be trained by appropriately adjusting their parameters using global optimization methods. In this work, the application of a recent global minimization technique is suggested for the adjustment of neural network parameters. In this technique, an approximation of the objective function to be minimized is created using artificial neural networks, and sampling is then performed from the approximation function rather than the original one. In the present work, therefore, the parameters of artificial neural networks are learned using other neural networks. The new training method was tested on a series of well-known problems, and a comparative study against other neural network parameter-tuning techniques produced more than promising results: the proposed technique showed a significant performance improvement, from roughly 30% on classification datasets up to 50% on regression problems. However, because the proposed technique presupposes the use of global optimization techniques involving artificial neural networks, it may require significantly more execution time than other techniques. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
CAA-PPI: A Computational Feature Design to Predict Protein–Protein Interactions Using Different Encoding Strategies
AI 2023, 4(2), 385-400; https://doi.org/10.3390/ai4020020 - 28 Apr 2023
Viewed by 1531
Abstract
Protein–protein interactions (PPIs) are involved in an extensive variety of biological processes, including cell-to-cell interactions and metabolic and developmental control. PPIs are becoming one of the most important subjects of systems biology. PPIs play a fundamental part in predicting the function of a target protein and the druggability of molecules. An abundance of work has been performed to develop methods to computationally predict PPIs, as this supplements laboratory trials and offers a cost-effective way of predicting the most likely set of interactions at the entire proteome scale. This article presents an innovative feature representation method (CAA-PPI) to extract features from protein sequences using two different encoding strategies, followed by an ensemble learning method. The random forest method was used as a classifier for PPI prediction. CAA-PPI considers the role of the trigram and bond of a given amino acid with its nearby ones. The proposed PPI model achieved more than a 98% prediction accuracy with one encoding scheme and more than a 95% prediction accuracy with the other for the two diverse PPI datasets, i.e., H. pylori and Yeast. Further, investigations were performed to compare the CAA-PPI approach with existing sequence-based methods and revealed the proficiency of the proposed method with both encoding strategies. To further assess its practical prediction competence, a blind test was implemented on five other species’ datasets independent of the training set, and the obtained results ascertained the productivity of CAA-PPI with both encoding schemes. Full article
(This article belongs to the Special Issue Feature Papers for AI)
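The exact CAA-PPI encoding is not reproduced here, but the trigram idea it builds on can be sketched: slide a length-3 window over the sequence and count how often each possible amino-acid trigram occurs. A hedged, generic illustration (the 20-letter alphabet and the normalization choice are assumptions, not the paper's scheme):

```python
from collections import Counter
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def trigram_vector(seq, alphabet=AMINO):
    """Frequency of every length-3 window over the amino-acid alphabet,
    normalized by the number of windows in the sequence."""
    counts = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    n = max(len(seq) - 2, 1)
    return [counts[''.join(t)] / n for t in product(alphabet, repeat=3)]

vec = trigram_vector("ACDACD")
print(len(vec))  # 20**3 = 8000 dimensions, regardless of sequence length
print(sum(vec))  # windows ACD, CDA, DAC, ACD -> normalized frequencies sum to 1.0
```

Such fixed-length vectors are what a random forest or other ensemble classifier can then consume, regardless of the original sequence lengths.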

Article
A General Hybrid Modeling Framework for Systems Biology Applications: Combining Mechanistic Knowledge with Deep Neural Networks under the SBML Standard
AI 2023, 4(1), 303-318; https://doi.org/10.3390/ai4010014 - 01 Mar 2023
Cited by 1 | Viewed by 2098
Abstract
In this paper, a computational framework is proposed that merges mechanistic modeling with deep neural networks obeying the Systems Biology Markup Language (SBML) standard. Over the last 20 years, the systems biology community has developed a large number of mechanistic models that are currently stored in public databases in SBML. With the proposed framework, existing SBML models may be redesigned into hybrid systems through the incorporation of deep neural networks into the model core, using a freely available Python tool. The so-formed hybrid mechanistic/neural network models are trained with a deep learning algorithm based on the adaptive moment estimation method (ADAM), stochastic regularization and semidirect sensitivity equations. The trained hybrid models are encoded in SBML and uploaded to model databases, where they may be further analyzed as regular SBML models. This approach is illustrated with three well-known case studies: the Escherichia coli threonine synthesis model, the P58IPK signal transduction model, and the Yeast glycolytic oscillations model. The proposed framework is expected to greatly facilitate the widespread use of hybrid modeling techniques for systems biology applications. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
Artificial Intelligence-Enhanced UUV Actuator Control
AI 2023, 4(1), 270-288; https://doi.org/10.3390/ai4010012 - 16 Feb 2023
Cited by 2 | Viewed by 2088
Abstract
This manuscript compares deterministic artificial intelligence to a model-following control applied to DC motor control, including an evaluation of the threshold computation rate to let unmanned underwater vehicles correctly follow the challenging discontinuous square wave command signal. The approaches presented in the main text are validated by simulations in MATLAB®, where the motor process is discretized at multiple step sizes, each inversely proportional to the computation rate. Performance is compared to canonical benchmarks that are evaluated by the error mean and standard deviation. With a large step size, discrete deterministic artificial intelligence shows a larger error mean than the model-following self-tuning regulator approach (the selected benchmark). However, the performance improves with a decreasing step size. The error mean is close to the continuous deterministic artificial intelligence when the step size is reduced to 0.2 s, which means that the computation rate and the sampling period restrict discrete deterministic artificial intelligence. In that case, continuous deterministic artificial intelligence is the most feasible and reliable selection for future applications on unmanned underwater vehicles, since it is superior to all the approaches investigated at multiple computation rates. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures
AI 2023, 4(1), 16-27; https://doi.org/10.3390/ai4010002 - 27 Dec 2022
Viewed by 1288
Abstract
In this paper we propose a procedure to enable the training of several independent Multilayer Perceptron Neural Networks with a different number of neurons and activation functions in parallel (ParallelMLPs) by exploiting the principle of locality and the parallelization capabilities of modern CPUs and GPUs. The core idea of this technique is to represent several sub-networks as a single large network and use a Modified Matrix Multiplication that replaces an ordinary matrix multiplication with two simple matrix operations that allow separate and independent paths for gradient flow. We have assessed our algorithm on simulated datasets varying the number of samples, features and batches using 10,000 different models, as well as on the MNIST dataset. We achieved a training speedup of 1 to 4 orders of magnitude compared to the sequential approach. The code is available online. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
Estimation of Clinch Joint Characteristics Based on Limited Input Data Using Pre-Trained Metamodels
AI 2022, 3(4), 990-1006; https://doi.org/10.3390/ai3040059 - 08 Dec 2022
Viewed by 1528
Abstract
Given strict emission targets and legal requirements, especially in the automotive industry, environmentally friendly and simultaneously versatile production technologies are gaining importance. In this regard, the use of mechanical joining processes, such as clinching, enables the assembly of sheet metals with strength properties similar to those of established thermal joining technologies. However, to guarantee a high reliability of the generated joint connection, the selection of a best-fitting joining technology as well as the meaningful description of individual joint properties is essential. In the context of clinching, few contributions have to date investigated the metamodel-based estimation and optimization of joint characteristics, such as neck or interlock thickness, by applying machine learning and genetic algorithms. Therefore, several regression models have been trained on varying databases and amounts of input parameters. However, if product engineers can only provide limited data for a new joining task, such as incomplete information on applied joining tool dimensions, previously trained metamodels often reach their limits. This often results in a significant loss of prediction quality and leads to increasing uncertainties and inaccuracies within the metamodel-based design of a clinch joint connection. Motivated by this, the presented contribution investigates different machine learning algorithms regarding their ability to achieve a satisfying estimation accuracy on limited input data by applying a statistically based feature selection method. Through this, it is possible to identify which regression models are suitable for predicting clinch joint characteristics considering only a minimum set of required input features.
Thus, in addition to the opportunity to decrease the training effort as well as the model complexity, the subsequent formulation of design equations can pave the way to a more versatile application and reuse of pretrained metamodels on varying tool configurations for a given clinch joining task. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
Gamma Ray Source Localization for Time Projection Chamber Telescopes Using Convolutional Neural Networks
AI 2022, 3(4), 975-989; https://doi.org/10.3390/ai3040058 - 30 Nov 2022
Viewed by 1516
Abstract
Diverse phenomena such as positron annihilation in the Milky Way, merging binary neutron stars, and dark matter can be better understood by studying their gamma ray emission. Despite their importance, MeV gamma rays have been poorly explored at sensitivities that would allow for deeper insight into the nature of the gamma emitting objects. In response, a liquid argon time projection chamber (TPC) gamma ray instrument concept called GammaTPC has been proposed and promises exploration of the entire sky with a large field of view, large effective area, and high polarization sensitivity. Optimizing the pointing capability of this instrument is crucial and can be accomplished by leveraging convolutional neural networks to reconstruct electron recoil paths from Compton scattering events within the detector. In this investigation, we develop a machine learning model architecture to accommodate a large data set of high fidelity simulated electron tracks and reconstruct paths. We create two model architectures: one to predict the electron recoil track origin and one for the initial scattering direction. We find that these models predict the true origin and direction with extremely high accuracy, thereby optimizing the observatory’s estimates of the sky location of gamma ray sources. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
A Patient-Specific Algorithm for Lung Segmentation in Chest Radiographs
AI 2022, 3(4), 931-947; https://doi.org/10.3390/ai3040055 - 18 Nov 2022
Viewed by 2124
Abstract
Lung segmentation plays an important role in computer-aided detection and diagnosis using chest radiographs (CRs). Currently, the U-Net and DeepLabv3+ convolutional neural network architectures are widely used to perform CR lung segmentation. To boost performance, ensemble methods are often used, whereby probability map outputs from several networks operating on the same input image are averaged. However, not all networks perform adequately for any specific patient image, even if the average network performance is good. To address this, we present a novel multi-network ensemble method that employs a selector network. The selector network evaluates the segmentation outputs from several networks; on a case-by-case basis, it selects which outputs are fused to form the final segmentation for that patient. Our candidate lung segmentation networks include U-Net, with five different encoder depths, and DeepLabv3+, with two different backbone networks (ResNet50 and ResNet18). Our selector network is a ResNet18 image classifier. We perform all training using the publicly available Shenzhen CR dataset. Performance testing is carried out with two independent publicly available CR datasets, namely, Montgomery County (MC) and Japanese Society of Radiological Technology (JSRT). Intersection-over-Union scores for the proposed approach are 13% higher than the standard averaging ensemble method on MC and 5% better on JSRT. Full article
(This article belongs to the Special Issue Feature Papers for AI)
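The selector idea can be separated from the networks themselves: given candidate segmentations and per-candidate scores from a selector, keep only the trusted candidates and fuse those. A hedged toy sketch with 2×2 binary masks (in the paper the scores come from a ResNet18 classifier, not from a threshold on given numbers; `select_and_fuse` and the threshold are illustrative):

```python
def select_and_fuse(masks, scores, threshold=0.5):
    """Selector-style ensembling: keep only candidate segmentations the selector
    scored above threshold, then average them pixel-wise and binarize.
    Falls back to the single best-scored mask if none pass."""
    kept = [m for m, s in zip(masks, scores) if s >= threshold]
    if not kept:
        kept = [max(zip(masks, scores), key=lambda p: p[1])[0]]
    h, w = len(kept[0]), len(kept[0][0])
    fused = [[sum(m[r][c] for m in kept) / len(kept) for c in range(w)] for r in range(h)]
    return [[1 if v >= 0.5 else 0 for v in row] for row in fused]

# Three toy 2x2 candidate masks; the selector trusts only the first two.
masks = [[[1, 1], [0, 0]], [[1, 0], [0, 0]], [[0, 0], [1, 1]]]
fused = select_and_fuse(masks, [0.9, 0.8, 0.1])
print(fused)  # [[1, 1], [0, 0]] -- the low-scored outlier mask is excluded
```

This differs from a standard averaging ensemble precisely in the per-case `kept` subset: a network that fails on one patient's image is simply excluded for that patient.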

Article
Structural Model Based on Genetic Algorithm for Inhibiting Fatty Acid Amide Hydrolase
AI 2022, 3(4), 863-870; https://doi.org/10.3390/ai3040052 - 13 Oct 2022
Viewed by 1352
Abstract
The fatty acid amide hydrolase (FAAH) is an enzyme responsible for the degradation of anandamide, an endocannabinoid. Pharmacologically blocking this target can lead to anxiolytic effects; therefore, new inhibitors can improve therapy in this field. In order to speed up the process of drug discovery, various in silico methods can be used, such as molecular docking, quantitative structure–activity relationship models (QSAR), and artificial intelligence (AI) classification algorithms. Besides architecture, one important factor for an AI model with high accuracy is the dataset quality. This issue can be solved by a genetic algorithm that can select optimal features for the prediction. The objective of the current study is to use this feature selection method in order to identify the most relevant molecular descriptors that can be used as independent variables, thus improving the efficacy of AI algorithms that can predict FAAH inhibitors. The model that used features chosen by the genetic algorithm had better accuracy than the model that used all molecular descriptors generated by the CDK descriptor calculator 1.4.6 software. Hence, carefully selecting the input data used by AI classification algorithms by using a GA is a promising strategy in drug development. Full article
(This article belongs to the Special Issue Feature Papers for AI)
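The GA-based feature selection applied above can be sketched generically: individuals are bitmasks over the descriptor set, and fitness rewards masks that keep informative descriptors while penalizing subset size. A hedged toy version (the fitness function is a stand-in for cross-validated classifier accuracy; all constants and names are illustrative, not the authors' setup):

```python
import random

def ga_select(n_features, fitness, pop_size=30, gens=40, mut=0.05, seed=1):
    """Toy genetic algorithm over feature bitmasks: tournament selection,
    one-point crossover, bit-flip mutation; returns the fittest mask found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([1 - g if rng.random() < mut else g for g in child])
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: descriptors 0, 3 and 7 are "informative"; extra ones cost accuracy.
informative = {0, 3, 7}
fitness = lambda mask: sum(mask[i] for i in informative) - 0.2 * sum(mask)
best = ga_select(10, fitness)
```

The surviving mask then defines which molecular descriptors are fed to the downstream classifier, which is the "carefully selecting the input data" step the abstract describes.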
Article
Learning Functions and Classes Using Rules
AI 2022, 3(3), 751-763; https://doi.org/10.3390/ai3030044 - 05 Sep 2022
Viewed by 1374
Abstract
In the current work, a novel method is presented for generating rules for data classification as well as for regression problems. The proposed method generates simple rules in a high-level programming language with the help of grammatical evolution. The method does not depend on any prior knowledge of the dataset; the memory it requires for its execution is constant regardless of the objective problem, and it can be used to detect any hidden dependencies between the features of the input problem as well. The proposed method was tested on an extensive range of problems from the relevant literature, and comparative results against other machine learning techniques are presented in this manuscript. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Article
The Effect of Appearance of Virtual Agents in Human-Agent Negotiation
AI 2022, 3(3), 683-701; https://doi.org/10.3390/ai3030039 - 16 Aug 2022
Viewed by 1618
Abstract
Artificial Intelligence (AI) has changed our world in various ways, and people now interact with a variety of intelligent systems frequently. As the interaction between humans and AI systems increases day by day, the factors influencing their communication have become more and more important, especially in the field of human-agent negotiation. In this study, our aim is to investigate the effect of knowing one's negotiation partner (i.e., opponent) with limited knowledge, particularly the effect of familiarity with the opponent during human-agent negotiation, so that we can design more effective negotiation systems. As far as we are aware, this is the first study investigating this research question in human-agent negotiation settings. Accordingly, we present a human-agent negotiation framework and conduct a user experiment in which participants negotiate with an avatar whose appearance and voice are a replica of a celebrity of their choice and with an avatar whose appearance and voice are not familiar. The results of the within-subject design experiment show that human participants tend to be more collaborative when their opponent is a celebrity avatar towards whom they have a positive feeling rather than a non-celebrity avatar. Full article

Article
Robust and Lightweight System for Gait-Based Gender Classification toward Viewing Angle Variations
AI 2022, 3(2), 538-553; https://doi.org/10.3390/ai3020031 - 14 Jun 2022
Cited by 4 | Viewed by 1994
Abstract
In computer vision applications, gait-based gender classification is a challenging task, as a person may walk at various angles with respect to the camera viewpoint. At some viewing angles, the person’s limb movement can be occluded from the camera, preventing the perception of gait-based features. To solve this problem, this study proposes a robust and lightweight system for gait-based gender classification. It uses a gait energy image (GEI) to represent the gait of an individual. A discrete cosine transform (DCT) is applied to the GEI to generate a gait-based feature vector. This DCT feature vector is then fed to an XGBoost classifier for gender classification, and the XGBoost parameters are tuned to improve the classification results. Finally, the results are compared with other state-of-the-art approaches. The performance of the proposed system is evaluated on the OU-MVLP dataset. The experimental results show a mean CCR (correct classification rate) of 95.33% for gender classification. The results obtained from the various viewpoints of OU-MVLP illustrate the robustness of the proposed system for gait-based gender classification.
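The GEI-plus-DCT feature extraction can be sketched as follows: average the binary silhouettes of a gait cycle into a GEI, apply a 2-D DCT, and keep a block of low-frequency coefficients as the feature vector. The image size, block size, and random silhouettes are illustrative assumptions; in the paper the resulting vector feeds an XGBoost classifier, which is omitted here.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from explicit basis matrices."""
    def dct_matrix(n):
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0] *= 1 / np.sqrt(2)            # DC row scaling for orthonormality
        return m * np.sqrt(2 / n)
    Dh = dct_matrix(block.shape[0])
    Dw = dct_matrix(block.shape[1])
    return Dh @ block @ Dw.T

# Toy gait energy image: mean of binary silhouettes over one gait cycle.
rng = np.random.default_rng(0)
silhouettes = rng.integers(0, 2, size=(30, 64, 44))
gei = silhouettes.mean(axis=0)

coeffs = dct2(gei)
feature_vector = coeffs[:8, :8].flatten()  # low-frequency 8x8 block -> 64-dim
```

Keeping only the low-frequency block compacts the GEI into a short, energy-concentrated descriptor, which is what makes the downstream classifier lightweight.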

Article
Can Interpretable Reinforcement Learning Manage Prosperity Your Way?
AI 2022, 3(2), 526-537; https://doi.org/10.3390/ai3020030 - 13 Jun 2022
Cited by 2 | Viewed by 2247
Abstract
Personalisation of products and services is fast becoming the driver of success in banking and commerce. Machine learning holds the promise of gaining a deeper understanding of, and tailoring to, customers’ needs and preferences. Whereas traditional solutions to financial decision problems frequently rely on model assumptions, reinforcement learning is able to exploit large amounts of data to improve customer modelling and decision-making in complex financial environments with fewer assumptions. Model explainability and interpretability present challenges from a regulatory perspective, which demands transparency for acceptance; they also offer the opportunity for improved insight into and understanding of customers. Post-hoc approaches are typically used for explaining pretrained reinforcement learning models. Building on our previous modelling of customer spending behaviour, we adapt our recent reinforcement learning algorithm, which intrinsically characterises desirable behaviours, and transition to the problem of prosperity management. We train inherently interpretable reinforcement learning agents to give investment advice that is aligned with prototype financial personality traits, which are combined to make a final recommendation. We observe that the trained agents’ advice adheres to their intended characteristics, that they learn the value of compound growth and, without any explicit reference, the notion of risk, and that policy convergence is improved.
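How an agent can "learn the value of compound growth" without being told about it can be illustrated with a toy tabular Q-learning setup (this is a stand-in illustration, not the paper's algorithm): using log-wealth rewards makes multiplicative returns additive, so a standard Q-learner discovers that repeated investing compounds.

```python
import numpy as np

# Toy MDP: at each of T periods the agent either holds cash (0% growth)
# or invests (5% growth). Reward is the log-return, so value accumulates
# additively across periods -- the Q-values encode compound growth.
T, ACTIONS = 10, 2                 # action 0 = hold, action 1 = invest
GROWTH = [1.00, 1.05]
Q = np.zeros((T + 1, ACTIONS))     # Q[T] is the zero terminal row
alpha, gamma, eps = 0.1, 1.0, 0.1
rng = np.random.default_rng(0)

for episode in range(2000):
    for t in range(T):
        a = rng.integers(ACTIONS) if rng.random() < eps else int(Q[t].argmax())
        reward = np.log(GROWTH[a])                       # additive log-return
        Q[t, a] += alpha * (reward + gamma * Q[t + 1].max() - Q[t, a])

policy = Q[:T].argmax(axis=1)      # learned action for each period
```

After training, the greedy policy invests in every period, and Q[0, 1] approximates T times the per-period log-return, i.e., the log of the compounded terminal wealth.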

Article
Navigation Map-Based Artificial Intelligence
AI 2022, 3(2), 434-464; https://doi.org/10.3390/ai3020026 - 12 May 2022
Cited by 4 | Viewed by 3960
Abstract
A biologically inspired cognitive architecture is described which uses navigation maps (i.e., spatial locations of objects) as its main data elements. The navigation maps are also used to represent higher-level concepts and to direct operations to perform on other navigation maps. Incoming sensory information is mapped to local sensory navigation maps, which are then matched against the stored multisensory maps and mapped onto the best-matched multisensory navigation map. Enhancements of the biologically inspired feedback pathways allow the intermediate results of operations performed on the best-matched multisensory navigation map to be fed back, temporarily stored, and re-processed in the next cognitive cycle. This allows the exploration and generation of cause-and-effect behavior. In the re-processing of these intermediate results, navigation maps can, by core analogical mechanisms, lead to other navigation maps which offer an improved solution to many routine problems the architecture is exposed to. Given that the architecture is brain-inspired, analogical processing may also form a key mechanism in the human brain, consistent with psychological evidence. Similarly, analogical processing as a core mechanism may allow enhanced performance in conventional artificial intelligence systems as well.
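The matching step, mapping a sensory navigation map onto the best-matched stored multisensory map, can be sketched with a simple cell-agreement score (the grid representation, map names, and scoring rule are illustrative assumptions, not the architecture's actual mechanism):

```python
import numpy as np

# Each navigation map is a small grid of object labels (0 = empty).
# The best match maximizes the fraction of agreeing cells.
def best_match(sensory_map, stored_maps):
    scores = [(name, float((sensory_map == m).mean()))
              for name, m in stored_maps.items()]
    return max(scores, key=lambda s: s[1])

kitchen = np.array([[1, 0], [2, 3]])
office = np.array([[4, 0], [5, 0]])
observed = np.array([[1, 0], [2, 0]])   # partial view; one cell differs

name, score = best_match(observed, {"kitchen": kitchen, "office": office})
# name == "kitchen", score == 0.75
```

A real implementation would tolerate translations and use multisensory features per cell, but the principle, nearest-map retrieval by spatial agreement, is the same.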

Article
Distributed Big Data Analytics Method for the Early Prediction of the Neonatal 5-Minute Apgar Score before or during Birth and Ranking the Risk Factors from a National Dataset
AI 2022, 3(2), 371-389; https://doi.org/10.3390/ai3020023 - 21 Apr 2022
Cited by 2 | Viewed by 2188
Abstract
One-minute and five-minute Apgar scores are good measures for assessing the health status of newborns. The five-minute Apgar score can predict the risk of disorders such as asphyxia, encephalopathy, cerebral palsy and ADHD. Early prediction of the Apgar score before or during birth, together with ranking of the risk factors, can help manage and reduce the probability of births producing low Apgar scores. Therefore, the main aim of this study is the early prediction of the neonatal 5-min Apgar score before or during birth and the ranking of its risk factors on a large national dataset using big data analytics methods. A big dataset including 60 features describing birth cases registered in the Iranian maternal and neonatal (IMAN) registry from 1 April 2016 to 1 January 2017 was collected. We propose a distributed big data analytics method for the early prediction of the neonatal Apgar score and a distributed big data feature ranking method for ranking its predictors, providing the ability to identify birth cases with low Apgar scores by analyzing prenatal features available before or during birth. The top 14 features were identified and used for training the classifiers. Our proposed stack ensemble outperforms the compared classifiers with an accuracy of 99.37 ± 1.06, precision of 99.37 ± 1.06, recall of 99.50 ± 0.61 and F-score of 99.41 ± 0.70 (for a confidence interval of 95%) in predicting low, moderate and high 5-min Apgar scores. Among the top predictors, fetal height around the baby’s head and fetal weight denote fetal growth status; fetal growth restrictions can lead to a low or moderate 5-min Apgar score. Moreover, hospital type and medical science university are healthcare-system-related factors that can be managed by improving the quality of healthcare services across the country.
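The idea behind a stacked ensemble, base classifiers whose predictions are combined by a meta-level rule, can be sketched on toy data (the decision stumps, the accuracy-weighted vote standing in for the meta-learner, and the synthetic labels are all illustrative assumptions, not the paper's classifiers):

```python
import numpy as np

# Two base "classifiers" (decision stumps on different features) produce
# predictions; a meta-level accuracy-weighted vote makes the final call.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy ground truth

def stump(col):
    return lambda X_: (X_[:, col] > 0).astype(int)

bases = [stump(0), stump(1)]
base_preds = np.stack([b(X) for b in bases])          # (n_bases, n_samples)
weights = np.array([(p == y).mean() for p in base_preds])

def stacked_predict(X_):
    preds = np.stack([b(X_) for b in bases])
    return (weights @ preds / weights.sum() > 0.5).astype(int)

acc = (stacked_predict(X) == y).mean()
```

A full stacking setup would train a learned meta-classifier on out-of-fold base predictions; weighting by training accuracy is the simplest version of that idea.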

Article
Performance Evaluation of Deep Neural Network Model for Coherent X-ray Imaging
AI 2022, 3(2), 318-330; https://doi.org/10.3390/ai3020020 - 18 Apr 2022
Viewed by 3984
Abstract
We present a supervised deep neural network model for phase retrieval in coherent X-ray imaging and evaluate its performance. A supervised deep-learning-based approach requires a large amount of pre-training data. In most proposed models, the various experimental uncertainties are not considered when the input dataset, corresponding to the diffraction image in reciprocal space, is generated. We explore the performance of a deep neural network model trained on an ideal-quality dataset when it faces realistically corrupted diffraction images. We focus on three aspects of data quality: detection dynamic range, degree of coherence, and noise level. The investigation shows that the deep neural network model is robust to a limited dynamic range and partially coherent X-ray illumination in comparison to traditional phase retrieval, although it is more sensitive to noise than the iteration-based method. This study suggests a baseline capability of the supervised deep neural network model for coherent X-ray imaging in preparation for deployment to the laboratory, where diffraction images are acquired.
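Two of the corruptions studied, photon shot noise and a limited detector dynamic range, are easy to simulate on an ideal diffraction intensity. A minimal sketch (the stand-in pattern, photon budget, and saturation level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in diffraction pattern: squared modulus of a 2-D Fourier transform.
ideal = np.abs(np.fft.fft2(rng.random((64, 64)))) ** 2

def corrupt(intensity, photons=1e5, saturation=None):
    scaled = intensity / intensity.sum() * photons    # expected photon counts
    noisy = rng.poisson(scaled).astype(float)         # shot (Poisson) noise
    if saturation is not None:                        # limited dynamic range
        noisy = np.clip(noisy, 0, saturation)
    return noisy

measured = corrupt(ideal, photons=1e5, saturation=500)
```

Varying `photons` and `saturation` produces the noise-level and dynamic-range sweeps on which a pretrained network's robustness can be probed.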

Article
Detection in Adverse Weather Conditions for Autonomous Vehicles via Deep Learning
AI 2022, 3(2), 303-317; https://doi.org/10.3390/ai3020019 - 18 Apr 2022
Cited by 11 | Viewed by 6638
Abstract
Weather detection systems (WDS) have an indispensable role in supporting the decisions of autonomous vehicles, especially in severe and adverse circumstances. With deep learning techniques, autonomous vehicles can effectively identify outdoor weather conditions and thus make appropriate decisions to adapt easily to new conditions and environments. This paper proposes a deep learning (DL)-based detection framework to categorize weather conditions for autonomous vehicles in adverse or normal situations. The proposed framework leverages transfer learning techniques along with a powerful Nvidia GPU to characterize the performance of three deep convolutional neural networks (CNNs): SqueezeNet, ResNet-50, and EfficientNet. The developed models have been evaluated on two up-to-date weather imaging datasets, DAWN2020 and MCWRD2018. The combined dataset provides six weather classes: cloudy, rainy, snowy, sandy, shine, and sunrise. Experimentally, all models demonstrated superior classification capacity, with the best performance metrics recorded for the ResNet-50-based weather detection model: 98.48%, 98.51%, and 98.41% for detection accuracy, precision, and sensitivity, respectively. In addition, a short detection time was noted for this model, averaging 5 ms per inference step on the GPU. Finally, comparison with other related state-of-the-art models showed the superiority of our model, which improved the classification accuracy for the six weather classes by 0.5–21%. Consequently, the proposed framework can be effectively implemented in real-time environments to provide decisions on demand for autonomous vehicles with quick, precise detection capacity.
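The three reported metrics, accuracy, precision, and sensitivity, follow directly from a confusion matrix; for six classes they are typically macro-averaged over classes. A minimal sketch with a hypothetical two-class matrix (the numbers are illustrative, not the paper's results):

```python
import numpy as np

# Accuracy, macro precision, and macro sensitivity from a confusion matrix
# whose rows are true classes and columns are predicted classes.
def metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    accuracy = tp.sum() / cm.sum()
    precision = np.mean(tp / cm.sum(axis=0))    # per predicted class
    sensitivity = np.mean(tp / cm.sum(axis=1))  # per true class (recall)
    return accuracy, precision, sensitivity

cm = [[50, 0], [10, 40]]   # toy 2-class example
acc, prec, sens = metrics(cm)
# acc = 0.9, sens = 0.9, prec = 11/12
```

The same function applies unchanged to a 6x6 matrix for the six weather classes.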

Article
A Technology Acceptance Model Survey of the Metaverse Prospects
AI 2022, 3(2), 285-302; https://doi.org/10.3390/ai3020018 - 11 Apr 2022
Cited by 26 | Viewed by 19096
Abstract
The technology acceptance model is a widely used model for investigating whether users will accept or refuse a new technology. The Metaverse is a 3D world based on virtual reality simulation that expresses real life, and it can be considered the next generation of the internet. In this paper, we investigate variables that may affect users’ acceptance of Metaverse technology and the relationships between those variables by applying an extended technology acceptance model covering several factors (namely self-efficacy, social norms, perceived curiosity, perceived pleasure, and price). The goal of understanding these factors is to learn how Metaverse developers might enhance this technology to meet users’ expectations and help users interact with it better. To this end, a sample of 302 educated participants of different ages was chosen to answer an online survey with Likert-scale items ranging from 1 (strongly disagree) to 5 (strongly agree). The study found that, first, self-efficacy, perceived curiosity, and perceived pleasure positively influence perceived ease of use. Second, social norms, perceived pleasure, and perceived ease of use positively influence perceived usefulness. Third, perceived ease of use and perceived usefulness positively influence attitude towards Metaverse technology use, which in turn influences behavioral intention. Fourth, the relationship between price and behavioral intention was significant and negative. Finally, the study found that participants younger than 20 years were the most positively accepting of Metaverse technology.
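The first analysis step in such a survey is usually to average each construct's Likert items into a score and screen construct-level relationships with correlations. A minimal sketch on synthetic responses (the item counts and the generated data are hypothetical; the actual study would use structural equation modeling or regression on real responses):

```python
import numpy as np

# Synthetic 1-5 Likert responses for 302 participants, 4 items per construct.
rng = np.random.default_rng(42)
ease = rng.integers(1, 6, size=(302, 4))          # perceived-ease-of-use items
# Fabricate "usefulness" items correlated with ease, plus item-level noise.
useful = np.clip(ease.mean(axis=1, keepdims=True).round()
                 + rng.integers(-1, 2, size=(302, 4)), 1, 5)

ease_score = ease.mean(axis=1)                    # construct scores
useful_score = useful.mean(axis=1)
r = np.corrcoef(ease_score, useful_score)[0, 1]   # screening correlation
```

A positive `r` here corresponds to the kind of "ease of use positively influences usefulness" relationship the study tests formally.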

Article
Enhancement of Partially Coherent Diffractive Images Using Generative Adversarial Network
AI 2022, 3(2), 274-284; https://doi.org/10.3390/ai3020017 - 11 Apr 2022
Cited by 2 | Viewed by 2132
Abstract
We present a deep-learning-based generative model for the enhancement of partially coherent diffractive images. In lensless coherent diffractive imaging, highly coherent X-ray illumination is required to image an object at high resolution. Non-ideal experimental conditions result in partially coherent X-ray illumination, leading to imperfections in the coherent diffractive images recorded on a detector and ultimately limiting the capability of lensless coherent diffractive imaging. Previous approaches, which rely on the coherence properties of the illumination, require preliminary experiments or expensive computations. In this article, we propose a generative adversarial network (GAN) model to enhance the visibility of fringes in partially coherent diffractive images. Unlike previous approaches, the model is trained to restore the latent sharp features from blurred input images without determining the coherence properties of the illumination. We demonstrate that the GAN model performs well with both coherent diffractive imaging and ptychography, and it can be applied to a wide range of imaging techniques relying on phase retrieval of coherent diffraction patterns.

Article
Evolution towards Smart and Software-Defined Internet of Things
AI 2022, 3(1), 100-123; https://doi.org/10.3390/ai3010007 - 21 Feb 2022
Cited by 10 | Viewed by 4248
Abstract
The Internet of Things (IoT) is a mesh network of interconnected objects with unique identifiers that can transmit data and communicate with one another without the need for human intervention. The IoT has brought the future closer to us, opening up new and vast domains for connecting not only people but also all kinds of simple objects and phenomena around us. With billions of heterogeneous devices connected to the Internet, the network architecture must evolve to accommodate the expected increase in data generation while also improving the security and efficiency of connectivity. Traditional IoT architectures are primitive and incapable of extending functionality and productivity to the desired levels of the IoT infrastructure. Software-Defined Networking (SDN) and virtualization are two promising technologies for cost-effectively handling the scale and versatility required for the IoT. In this paper, we discuss traditional IoT networks and the need for SDN and Network Function Virtualization (NFV), followed by an analysis of SDN and NFV solutions for implementing the IoT in various ways.

Review


Review
AI-Based Computer Vision Techniques and Expert Systems
AI 2023, 4(1), 289-302; https://doi.org/10.3390/ai4010013 - 23 Feb 2023
Cited by 1 | Viewed by 4145
Abstract
Computer vision is a branch of computer science that studies how computers can ‘see’. It is a field that provides significant value for advancements in academia and artificial intelligence by processing images captured with a camera. In other words, the purpose of computer vision is to impart to computers the functions of human eyes and realise ‘vision’ in computers. Deep learning is a method of realising computer vision using image recognition and object detection technologies. Since its emergence, computer vision has evolved rapidly with the development of deep learning and has significantly improved image recognition accuracy. Moreover, an expert system can imitate and reproduce the flow of reasoning and decision-making executed in human experts’ brains to derive optimal solutions. Machine learning, including deep learning, has made it possible to ‘acquire the tacit knowledge of experts’, which was not previously achievable with conventional expert systems. Machine learning ‘systematises tacit knowledge’ based on big data and measures phenomena from multiple angles and in large quantities. In this review, we discuss some knowledge-based computer vision techniques that employ deep learning.
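The reasoning flow of a classical expert system can be illustrated with forward chaining: rules fire whenever all their premises are in working memory, adding conclusions until a fixed point is reached. A minimal sketch (the vision-flavoured facts and rules are hypothetical examples):

```python
# Minimal forward-chaining expert system: (premises, conclusion) rules
# fire against a working memory of facts until nothing new can be derived.
RULES = [
    ({"edges_detected", "circular_shape"}, "wheel"),
    ({"wheel", "metal_texture"}, "vehicle_part"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"edges_detected", "circular_shape", "metal_texture"}, RULES)
# result now also contains "wheel" and "vehicle_part"
```

Knowledge-based computer vision pairs this kind of explicit rule layer with deep learning models that supply the low-level facts (detected edges, shapes, textures) from images.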
Review
Recent Advances in Infrared Face Analysis and Recognition with Deep Learning
AI 2023, 4(1), 199-233; https://doi.org/10.3390/ai4010009 - 7 Feb 2023
Viewed by 4272
Abstract
Despite the many advances made in the fields of face detection and recognition, face recognition applied to visible images (VIS-FR) has received increasing interest in recent years, especially in the fields of communication, identity authentication and public safety, and in addressing the risks of terrorism and crime. These systems, however, encounter important problems in the presence of variations in pose, expression, age, occlusion, disguise, and lighting, as these factors significantly reduce recognition accuracy. To avoid the problems of the visible spectrum, several researchers have recommended the use of infrared images. This paper provides an updated overview of deep infrared (IR) approaches to face recognition (FR) and analysis. First, we present the most widely used databases, both public and private, and the various metrics and loss functions that have been proposed and used in deep infrared techniques. We then review deep face analysis and recognition/identification methods proposed in recent years. We show that infrared techniques have given interesting results for face recognition, solving some of the problems encountered with visible-spectrum techniques. Finally, we identify weaknesses of current infrared FR approaches as well as future research directions to address their limitations.

Review
Augmented Behavioral Annotation Tools, with Application to Multimodal Datasets and Models: A Systematic Review
AI 2023, 4(1), 128-171; https://doi.org/10.3390/ai4010007 - 28 Jan 2023
Viewed by 4298
Abstract
Annotation tools are an essential component in the creation of datasets for machine learning purposes. Annotation tools have evolved greatly since the turn of the century and now commonly include collaborative features to divide labor efficiently, as well as automation employed to amplify human efforts. Recent developments in machine learning models, such as Transformers, allow for training upon very large and sophisticated multimodal datasets and enable generalization across domains of knowledge. These models also herald an increasing emphasis on prompt engineering to provide qualitative fine-tuning of the model itself, adding a novel emerging layer of direct machine learning annotation. These capabilities enable machine intelligence to recognize, predict, and emulate human behavior with much greater accuracy and nuance, a noted shortfall that has contributed to algorithmic injustice in previous techniques. However, the scale and complexity of training data required for multimodal models present engineering challenges, and best practices for conducting annotation for large multimodal models in the safest, most ethical, yet efficient manner have not been established. This paper presents a systematic literature review of crowd- and machine-learning-augmented behavioral annotation methods to distill practices that may have value in multimodal implementations, cross-correlated across disciplines. Research questions were defined to provide an overview of the evolution of augmented behavioral annotation tools, from past approaches to the present state of the art. (Contains five figures and four tables).

Review
End-to-End Transformer-Based Models in Textual-Based NLP
AI 2023, 4(1), 54-110; https://doi.org/10.3390/ai4010004 - 5 Jan 2023
Cited by 5 | Viewed by 8029
Abstract
Transformer architectures are highly expressive because they use self-attention mechanisms to encode long-range dependencies in input sequences. In this paper, we present a literature review of Transformer-based (TB) models, providing a detailed overview of each model in comparison to the Transformer’s standard architecture. This survey focuses on TB models used in the field of Natural Language Processing (NLP) for textual tasks. We begin with an overview of the fundamental concepts at the heart of the success of these models. Then, we classify them based on their architecture and training mode. We compare the advantages and disadvantages of popular techniques in terms of architectural design and experimental value. Finally, we discuss open research directions and potential future work to help solve current TB application challenges in NLP.
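The self-attention mechanism behind this expressiveness is short enough to sketch directly: every position attends to every other in a single step, which is how long-range dependencies are encoded. A minimal single-head version (token count and dimensions are arbitrary; real models add multiple heads, masking, and learned projections per layer):

```python
import numpy as np

# Scaled dot-product self-attention for one head, without masking.
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # mix values by attention

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 tokens, model dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                  # one vector per token
```

Because the score matrix is all-pairs, token 0 can draw on token 4 without any recurrence, which is the property the survey's TB models all inherit.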

Review
A Review of the Potential of Artificial Intelligence Approaches to Forecasting COVID-19 Spreading
AI 2022, 3(2), 493-511; https://doi.org/10.3390/ai3020028 - 19 May 2022
Cited by 10 | Viewed by 3373
Abstract
The spread of SARS-CoV-2 can be considered one of the most complicated patterns, with a large number of uncertainties and nonlinearities. Therefore, analysis and prediction of the distribution of this virus is one of the most challenging problems, affecting the planning and management of its impacts. Although different vaccines and drugs have been approved, produced, and distributed one after another, several new fast-spreading SARS-CoV-2 variants have been detected. This is why numerous techniques based on artificial intelligence (AI) have recently been designed or redeveloped to forecast these variants more effectively. These methods focus on deep learning (DL) and machine learning (ML), which can appropriately forecast nonlinear trends in epidemiological issues. This short review aims to summarize and evaluate the trustworthiness and performance of some important AI-empowered approaches used for predicting the spread of COVID-19. Sixty-five preprints, peer-reviewed papers, conference proceedings, and book chapters published in 2020 were reviewed; our criterion for including or excluding references was the performance of the methods reported in those documents. The results revealed that, although the methods discussed in this review have suitable potential to predict the spread of COVID-19, there are still weaknesses and drawbacks that fall within the domain of future research and scientific endeavors.

Review
Hybrid Deep Learning Techniques for Predicting Complex Phenomena: A Review on COVID-19
AI 2022, 3(2), 416-433; https://doi.org/10.3390/ai3020025 - 6 May 2022
Cited by 11 | Viewed by 3609
Abstract
Complex phenomena share some common characteristics, such as nonlinearity, complexity, and uncertainty. In these phenomena, components typically interact with each other, and one part of the system may affect other parts or vice versa; accordingly, the human brain, the Earth’s global climate, the spreading of viruses, economic organizations, and engineering systems such as transportation networks and power grids can all be categorized as such phenomena. Since analytical approaches and AI methods each have specific strengths in solving complex problems, a combination of these techniques can lead to new hybrid methods with considerable performance. This is why several studies have recently been conducted to benefit from such combinations to predict the spreading of COVID-19 and its dynamic behavior. In this review, 80 peer-reviewed articles, book chapters, conference proceedings, and preprints published in 2020 that focus on employing hybrid methods for forecasting the spreading of COVID-19 have been aggregated and reviewed. These documents were extracted from Google Scholar, and many of them are indexed in the Web of Science. Since there were many publications on this topic, the most relevant and effective techniques, including statistical models and deep learning (DL) or machine learning (ML) approaches, have been surveyed in this research. The main aim of this research is to describe, summarize, and categorize these techniques, noting their restrictions, so that they can serve as trustworthy references for scientists, researchers, and readers making an informed choice of the best possible method for their academic needs. Nevertheless, considering that many of these techniques have been used for the first time and need further evaluation, we do not recommend any single one of them as an ideal approach for every project. Our study has shown that these methods can combine the robustness and reliability of statistical methods with the computational power of DL ones.

Other


Brief Report
A Pilot Study on the Use of Generative Adversarial Networks for Data Augmentation of Time Series
AI 2022, 3(4), 789-795; https://doi.org/10.3390/ai3040047 - 26 Sep 2022
Viewed by 1527
Abstract
Data augmentation is needed to apply deep learning methods to the typically small time-series datasets. There is limited literature evaluating the performance of Generative Adversarial Networks for time-series data augmentation. We describe and discuss the results of a pilot study that extends a recent evaluation study of two families of data augmentation methods for time series (i.e., transformation-based methods and pattern-mixing methods), and provide recommendations for future work in this important area of research.
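The two augmentation families the study compares against can be sketched in a few lines: transformation-based methods perturb a series directly, while pattern mixing combines series from the same class. The specific transforms and parameters below are common illustrative choices, not necessarily those evaluated in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.03):
    """Transformation-based: add small Gaussian noise to every sample."""
    return x + rng.normal(0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Transformation-based: multiply the series by a random magnitude."""
    return x * rng.normal(1.0, sigma)

def mix(x1, x2, lam=0.7):
    """Pattern mixing: convex combination of two same-class series."""
    return lam * x1 + (1 - lam) * x2

t = np.linspace(0, 2 * np.pi, 100)
series = np.sin(t)
augmented = [jitter(series), scale(series), mix(series, np.sin(t + 0.2))]
```

A GAN-based alternative would instead learn the class distribution and sample new series from it, which is the comparison the pilot study targets.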
