Journal Description
AI is an international, peer-reviewed, open access journal on artificial intelligence (AI), including broad aspects of cognition and reasoning, perception and planning, machine learning, intelligent robotics, and applications of AI, published quarterly online by MDPI.
- Open Access: free to download, share, and reuse content. Authors receive recognition for their contributions when their papers are reused.
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 21.7 days after submission; acceptance to publication takes 3.3 days (median values for papers published in this journal in the second half of 2021).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Benefits of Publishing: our ambition is to achieve an Impact Factor in the range of 3.643-4.89, inclusion in Scopus by 2022, and a first-quartile ranking in Computer Science and Artificial Intelligence by 2025.
Latest Articles
A Review of the Potential of Artificial Intelligence Approaches to Forecasting COVID-19 Spreading
AI 2022, 3(2), 493-511; https://doi.org/10.3390/ai3020028 - 19 May 2022
Abstract
The spread of SARS-CoV-2 can be considered one of the most complicated patterns, with a large number of uncertainties and nonlinearities. Analysis and prediction of the distribution of this virus are therefore among the most challenging problems, affecting the planning and management of its impacts. Although different vaccines and drugs have been approved, produced, and distributed one after another, several new fast-spreading SARS-CoV-2 variants have been detected. This is why numerous techniques based on artificial intelligence (AI) have recently been designed or redeveloped to forecast these variants more effectively. Such methods focus on deep learning (DL) and machine learning (ML), which can appropriately forecast nonlinear trends in epidemiological problems. This short review aims to summarize and evaluate the trustworthiness and performance of some important AI-empowered approaches used for predicting the spread of COVID-19. Sixty-five preprints, peer-reviewed papers, conference proceedings, and book chapters published in 2020 were reviewed. Our criterion for including or excluding references was the performance of these methods as reported in the documents. The results revealed that although the methods discussed in this review have suitable potential to predict the spread of COVID-19, weaknesses and drawbacks remain that fall in the domain of future research and scientific endeavors.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Review
Cybernetic Hive Minds: A Review
AI 2022, 3(2), 465-492; https://doi.org/10.3390/ai3020027 - 16 May 2022
Abstract
Insect swarms and migratory birds are known to exhibit what is called a hive mind, collective consciousness, or herd mentality. This has inspired a whole new stream of robotics known as swarm intelligence, in which small robots perform tasks in coordination. The social media and smartphone revolution has helped people work together collectively and organize in their day-to-day jobs or activism. This revolution has also led to the massive spread of disinformation, amplified during the COVID-19 pandemic by alt-right neo-Nazi cults such as QAnon and their counterparts across the globe, causing increases in the spread of infection and deaths. This paper presents the case for a theoretical cybernetic hive mind to explain how existing cults like QAnon weaponize groupthink and carry out crimes using social media-based alternate reality games. We also present a framework for how cybernetic hive minds have come into existence and how they might evolve in the future. Finally, we discuss the implications of these hive minds for the future of free will, examine how different malfeasant entities have utilized these technologies to inflict harm through various forms of cybercrime, and predict how these crimes may evolve.
Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
Open Access Article
Navigation Map-Based Artificial Intelligence
AI 2022, 3(2), 434-464; https://doi.org/10.3390/ai3020026 - 12 May 2022
Abstract
A biologically inspired cognitive architecture is described which uses navigation maps (i.e., spatial locations of objects) as its main data elements. The navigation maps are also used to represent higher-level concepts as well as to direct operations to perform on other navigation maps. Incoming sensory information is mapped to local sensory navigation maps, which are then matched against stored multisensory maps and mapped onto the best-matching multisensory navigation map. Enhancements of the biologically inspired feedback pathways allow the intermediate results of operations performed on the best-matched multisensory navigation map to be fed back, temporarily stored, and re-processed in the next cognitive cycle. This allows the exploration and generation of cause-and-effect behavior. In the re-processing of these intermediate results, core analogical mechanisms can lead from one navigation map to others that offer an improved solution to many routine problems the architecture is exposed to. Given that the architecture is brain-inspired, analogical processing may also form a key mechanism in the human brain, consistent with psychological evidence. Similarly, analogical processing as a core mechanism may enhance the performance of conventional artificial intelligence systems.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Review
Hybrid Deep Learning Techniques for Predicting Complex Phenomena: A Review on COVID-19
AI 2022, 3(2), 416-433; https://doi.org/10.3390/ai3020025 - 06 May 2022
Abstract
Complex phenomena share some common characteristics, such as nonlinearity, complexity, and uncertainty. In these phenomena, components typically interact with each other, and one part of the system may affect other parts or vice versa. Accordingly, the human brain, the Earth’s global climate, the spreading of viruses, economic organizations, and engineering systems such as transportation networks and power grids can be categorized as such phenomena. Since both analytical approaches and AI methods have specific strengths in solving complex problems, combining these techniques can lead to new hybrid methods with considerable performance. This is why several studies have recently been conducted to benefit from such combinations in predicting the spread of COVID-19 and its dynamic behavior. In this review, 80 peer-reviewed articles, book chapters, conference proceedings, and preprints published in 2020 that employ hybrid methods for forecasting the spread of COVID-19 are aggregated and reviewed. These documents were extracted from Google Scholar, and many of them are indexed in the Web of Science. Since there were many publications on this topic, the most relevant and effective techniques, including statistical models and deep learning (DL) or machine learning (ML) approaches, were surveyed. The main aim of this research is to describe, summarize, and categorize these techniques, noting their restrictions, so that they may serve as trustworthy references for scientists, researchers, and readers making an intelligent choice of the best possible method for their academic needs. Nevertheless, considering that many of these techniques have been used for the first time and need further evaluation, we do not recommend any of them as an ideal choice for every project. Our study has shown that these methods can combine the robustness and reliability of statistical methods with the computational power of DL ones.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Article
A Particle Swarm Optimization Backtracking Technique Inspired by Science-Fiction Time Travel
AI 2022, 3(2), 390-415; https://doi.org/10.3390/ai3020024 - 01 May 2022
Abstract
Artificial intelligence techniques, such as particle swarm optimization, are used to solve problems throughout society. Optimization, in particular, seeks to identify the best possible decision within a search space. Problematically, particle swarm optimization will sometimes have particles that become trapped in local minima, preventing them from identifying a global optimal solution. As a solution to this issue, this paper proposes a science-fiction-inspired enhancement of particle swarm optimization in which an impactful iteration is identified and the algorithm is rerun from that point with a change made to the swarm. The proposed technique is tested using multiple variations of several functions representing optimization problems, as well as several standard test functions used to evaluate particle swarm optimization techniques.
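The backtracking idea can be sketched in a few lines: run standard PSO while checkpointing the swarm, pick the last iteration at which the global best improved as the "impactful" one, restore that checkpoint with a perturbation, and rerun. This is an illustrative reconstruction, not the authors' implementation; the sphere test function, the impactful-iteration heuristic, and all parameter values are assumptions.

```python
import copy
import random

def sphere(x):
    # Standard benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

class Swarm:
    def __init__(self, f, dim, n, rng):
        self.f, self.rng = f, rng
        self.pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        self.vel = [[0.0] * dim for _ in range(n)]
        self.pbest = [p[:] for p in self.pos]
        self.gbest = min(self.pbest, key=f)[:]

    def step(self, w=0.7, c1=1.5, c2=1.5):
        for i, p in enumerate(self.pos):
            for d in range(len(p)):
                r1, r2 = self.rng.random(), self.rng.random()
                self.vel[i][d] = (w * self.vel[i][d]
                                  + c1 * r1 * (self.pbest[i][d] - p[d])
                                  + c2 * r2 * (self.gbest[d] - p[d]))
                p[d] += self.vel[i][d]
            if self.f(p) < self.f(self.pbest[i]):
                self.pbest[i] = p[:]
                if self.f(p) < self.f(self.gbest):
                    self.gbest = p[:]

def pso_with_backtracking(f, dim=2, n=15, iters=40, seed=1):
    rng = random.Random(seed)
    swarm = Swarm(f, dim, n, rng)
    history, best_curve = [], []
    for _ in range(iters):
        history.append(copy.deepcopy(swarm))   # checkpoint before each iteration
        swarm.step()
        best_curve.append(f(swarm.gbest))
    # Heuristic "impactful iteration": the last one that improved the global best.
    impactful = 0
    for t in range(1, iters):
        if best_curve[t] < best_curve[t - 1]:
            impactful = t
    # "Time travel": restore that checkpoint, perturb the swarm, and rerun.
    swarm = history[impactful]
    for p in swarm.pos:
        for d in range(len(p)):
            p[d] += rng.gauss(0.0, 0.5)
    for _ in range(iters - impactful):
        swarm.step()
    return min(f(swarm.gbest), best_curve[-1])
```

Keeping the better of the original run and the rerun guarantees the backtracking step never degrades the final result.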
Full article
Open Access Article
Distributed Big Data Analytics Method for the Early Prediction of the Neonatal 5-Minute Apgar Score before or during Birth and Ranking the Risk Factors from a National Dataset
AI 2022, 3(2), 371-389; https://doi.org/10.3390/ai3020023 - 21 Apr 2022
Abstract
One-minute and five-minute Apgar scores are good measures for assessing the health status of newborns. A five-minute Apgar score can predict the risk of disorders such as asphyxia, encephalopathy, cerebral palsy, and ADHD. Early prediction of the Apgar score before or during birth, together with ranking of the risk factors, can help manage and reduce the probability of births producing low Apgar scores. Therefore, the main aim of this study is the early prediction of the neonatal 5-min Apgar score before or during birth and the ranking of its risk factors in a large national dataset using big data analytics methods. A big dataset including 60 features describing birth cases registered in the Iranian maternal and neonatal (IMAN) registry from 1 April 2016 to 1 January 2017 was collected. A distributed big data analytics method for the early prediction of the neonatal Apgar score and a distributed big data feature ranking method for ranking its predictors are proposed. This approach makes it possible to identify birth cases at risk of low Apgar scores by analyzing features that describe prenatal properties before or during birth. The top 14 features were identified and used for training the classifiers. Our proposed stack ensemble outperforms the compared classifiers with an accuracy of 99.37 ± 1.06, precision of 99.37 ± 1.06, recall of 99.50 ± 0.61, and F-score of 99.41 ± 0.70 (at a 95% confidence interval) in predicting low, moderate, and high 5-min Apgar scores. Among the top predictors, fetal height around the baby’s head and fetal weight denote fetal growth status; fetal growth restrictions can lead to a low or moderate 5-min Apgar score. Moreover, hospital type and medical science university are healthcare-system-related factors that can be managed by improving the quality of healthcare services across the country.
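A stack ensemble of the kind named above combines base classifiers through a meta-learner trained on their predictions. The sketch below is a generic, minimal illustration on synthetic data, not the authors' pipeline; the stump base learners, the pocket-perceptron meta-learner, and the toy features are all invented for demonstration (production stacking would also use out-of-fold base predictions to avoid leakage).

```python
import random

def make_data(n, rng):
    # Toy stand-in for the top-ranked prenatal features: two numeric
    # features and a binary "low-score risk" label.
    xs, ys = [], []
    for _ in range(n):
        a, b = rng.uniform(0, 1), rng.uniform(0, 1)
        xs.append((a, b))
        ys.append(1 if a + b > 1.0 else 0)
    return xs, ys

class Stump:
    # Base learner: threshold test on a single feature.
    def __init__(self, feat):
        self.feat, self.thr = feat, 0.5

    def fit(self, xs, ys):
        best = -1
        for t in [i / 20 for i in range(21)]:
            acc = sum((x[self.feat] > t) == bool(y) for x, y in zip(xs, ys))
            if acc > best:
                best, self.thr = acc, t

    def predict(self, x):
        return 1 if x[self.feat] > self.thr else 0

class Stack:
    # Meta-learner: pocket perceptron over the base learners' predictions.
    def __init__(self, bases):
        self.bases = bases
        self.w = [0.0] * (len(bases) + 1)

    def _meta(self, x):
        return [float(b.predict(x)) for b in self.bases] + [1.0]

    def _raw(self, w, x):
        return sum(wi * zi for wi, zi in zip(w, self._meta(x)))

    def fit(self, xs, ys, epochs=20, lr=0.1):
        for b in self.bases:
            b.fit(xs, ys)
        best_w, best_acc = self.w[:], -1
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                pred = 1 if self._raw(self.w, x) > 0 else 0
                z = self._meta(x)
                for i in range(len(self.w)):
                    self.w[i] += lr * (y - pred) * z[i]
            acc = sum((1 if self._raw(self.w, x) > 0 else 0) == y
                      for x, y in zip(xs, ys))
            if acc > best_acc:               # "pocket": keep the best epoch's weights
                best_acc, best_w = acc, self.w[:]
        self.w = best_w

    def predict(self, x):
        return 1 if self._raw(self.w, x) > 0 else 0
```

The meta-learner sees only the base predictions, so it learns how to weight and combine the base models rather than refitting the raw features.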
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Article
The Form in Formal Thought Disorder: A Model of Dyssyntax in Semantic Networking
AI 2022, 3(2), 353-370; https://doi.org/10.3390/ai3020022 - 20 Apr 2022
Abstract
Formal thought disorder (FTD) is a clinical mental condition that is typically diagnosable from the speech productions of patients. However, it has been a vexing condition for the clinical community, as it is not at all easy to determine what “formal” means amid the plethora of symptoms exhibited. We present a logic-based model of the syntax–semantics interface in semantic networking that can not only explain, but also diagnose, FTD. Our model is based on description logic (DL), which is well known for its adequacy in modeling terminological knowledge. More specifically, we show how faulty logical form, as defined in the DL-based Conception Language (CL), impacts the semantic content of linguistic productions characteristic of FTD. We accordingly call this the dyssyntax model.
Full article
(This article belongs to the Special Issue Conceptualization and Semantic Knowledge)
Open Access Article
Shifting Perspectives on AI Evaluation: The Increasing Role of Ethics in Cooperation
AI 2022, 3(2), 331-352; https://doi.org/10.3390/ai3020021 - 19 Apr 2022
Abstract
Evaluating AI is a challenging task, as it requires an operative definition of intelligence and metrics to quantify it, including, among other factors, economic drivers that depend on specific domains. From the viewpoint of basic AI research, the ability to play a game against a human has historically been adopted as a criterion of evaluation, as competition can be characterized algorithmically. Starting from the end of the 1990s, the deployment of sophisticated hardware brought a significant improvement in the ability of machines to play and win popular games. In spite of the spectacular victory of IBM’s Deep Blue over Garry Kasparov, many objections remain: it is not clear how this result can be applied to solving real-world problems, to simulating human abilities such as common sense, or to exhibiting a form of generalized AI. An evaluation based solely on the capacity to play games, even when enriched by the capability of learning complex rules without human supervision, is bound to be unsatisfactory. As the internet has dramatically changed the cultural habits and social interactions of users, who continuously exchange information with intelligent agents, it is natural to consider cooperation as the next step in AI software evaluation. Although this concept has already been explored in the scientific literature in economics and mathematics, its consideration in AI is relatively recent and generally covers the study of cooperation between agents. This paper focuses on more complex problems involving heterogeneity (specifically, cooperation between humans and software agents, or even robots), investigated by taking into account the ethical issues that occur during attempts to achieve a common goal shared by both parties, with a possible result of either conflict or stalemate. The contribution of this research consists in identifying the factors (trust, autonomy, and cooperative learning) on which to base ethical guidelines in agent software programming, making cooperation a more suitable benchmark for AI applications.
Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
Open Access Article
Performance Evaluation of Deep Neural Network Model for Coherent X-ray Imaging
AI 2022, 3(2), 318-330; https://doi.org/10.3390/ai3020020 - 18 Apr 2022
Abstract
We present a supervised deep neural network model for phase retrieval in coherent X-ray imaging and evaluate its performance. A supervised deep-learning-based approach requires a large amount of pre-training data. In most proposed models, the various experimental uncertainties are not considered when the input dataset, corresponding to the diffraction image in reciprocal space, is generated. We explore how a deep neural network model trained on an ideal-quality dataset performs when it faces realistically corrupted diffraction images. We focus on three aspects of data quality: detection dynamic range, degree of coherence, and noise level. The investigation shows that the deep neural network model is robust to a limited dynamic range and to partially coherent X-ray illumination in comparison with traditional phase retrieval, although it is more sensitive to noise than the iteration-based method. This study suggests a baseline capability of the supervised deep neural network model for coherent X-ray imaging in preparation for deployment in laboratories where diffraction images are acquired.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Article
Detection in Adverse Weather Conditions for Autonomous Vehicles via Deep Learning
AI 2022, 3(2), 303-317; https://doi.org/10.3390/ai3020019 - 18 Apr 2022
Abstract
Weather detection systems (WDS) have an indispensable role in supporting the decisions of autonomous vehicles, especially in severe and adverse circumstances. With deep learning techniques, autonomous vehicles can effectively identify outdoor weather conditions and thus make appropriate decisions to adapt easily to new conditions and environments. This paper proposes a deep learning (DL)-based detection framework to categorize weather conditions for autonomous vehicles in adverse or normal situations. The proposed framework leverages transfer learning techniques along with a powerful Nvidia GPU to characterize the performance of three deep convolutional neural networks (CNNs): SqueezeNet, ResNet-50, and EfficientNet. The developed models were evaluated on two up-to-date weather imaging datasets, DAWN2020 and MCWRD2018. The combined dataset was used to provide six weather classes: cloudy, rainy, snowy, sandy, shine, and sunrise. Experimentally, all models demonstrated superior classification capacity, with the best performance metrics recorded for the ResNet-50-based weather detection model: 98.48%, 98.51%, and 98.41% for detection accuracy, precision, and sensitivity, respectively. A short detection time was also noted for this model, averaging 5 ms per inference step on the GPU. Finally, comparison with related state-of-the-art models showed the superiority of our model, which improved classification accuracy for the six weather condition classes by 0.5–21%. Consequently, the proposed framework can be effectively implemented in real-time environments to provide on-demand decisions for autonomous vehicles with quick, precise detection capacity.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Feature Paper Article
A Technology Acceptance Model Survey of the Metaverse Prospects
AI 2022, 3(2), 285-302; https://doi.org/10.3390/ai3020018 - 11 Apr 2022
Abstract
The technology acceptance model is a widely used model for investigating whether users will accept or refuse a new technology. The Metaverse is a 3D world based on virtual reality simulation to express real life, and it can be considered the next generation of internet use. In this paper, we investigate variables that may affect users’ acceptance of Metaverse technology and the relationships between those variables by applying an extended technology acceptance model covering several factors (namely self-efficiency, social norms, perceived curiosity, perceived pleasure, and price). The goal of understanding these factors is to learn how Metaverse developers might enhance the technology to meet users’ expectations and help users interact with it better. To this end, a sample of 302 educated participants of different ages answered an online Likert-scale survey ranging from 1 (strongly disagree) to 5 (strongly agree). The study found that, first, self-efficiency, perceived curiosity, and perceived pleasure positively influence perceived ease of use. Second, social norms, perceived pleasure, and perceived ease of use positively influence perceived usefulness. Third, perceived ease of use and perceived usefulness positively influence attitude towards using Metaverse technology, which in turn influences behavioral intention. Fourth, the relationship between price and behavioral intention was significant and negative. Finally, participants younger than 20 years were the most positively accepting of Metaverse technology.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Article
Enhancement of Partially Coherent Diffractive Images Using Generative Adversarial Network
AI 2022, 3(2), 274-284; https://doi.org/10.3390/ai3020017 - 11 Apr 2022
Abstract
We present a deep learning-based generative model for the enhancement of partially coherent diffractive images. In lensless coherent diffractive imaging, highly coherent X-ray illumination is required to image an object at high resolution. Non-ideal experimental conditions result in partially coherent X-ray illumination, leading to imperfections in the coherent diffractive images recorded on a detector and ultimately limiting the capability of lensless coherent diffractive imaging. Previous approaches, which rely on the coherence properties of the illumination, require preliminary experiments or expensive computations. In this article, we propose a generative adversarial network (GAN) model to enhance the visibility of fringes in partially coherent diffractive images. Unlike previous approaches, the model is trained to restore latent sharp features from blurred input images without determining the coherence properties of the illumination. We demonstrate that the GAN model performs well with both coherent diffractive imaging and ptychography, and that it can be applied to a wide range of imaging techniques relying on phase retrieval of coherent diffraction patterns.
Full article
(This article belongs to the Special Issue Feature Papers for AI)
Open Access Article
Distinguishing Malicious Drones Using Vision Transformer
AI 2022, 3(2), 260-273; https://doi.org/10.3390/ai3020016 - 31 Mar 2022
Cited by 2
Abstract
Drones are commonly used in numerous applications, such as surveillance, navigation, pesticide spraying in autonomous agricultural systems, and various military services, due to their variable sizes and workloads. However, malicious drones carrying harmful objects are often used to intrude into restricted areas and attack critical public places; timely detection of malicious drones can thus prevent potential harm. This article proposes a vision transformer (ViT)-based framework to distinguish between benign and malicious drones. In the proposed ViT-based model, drone images are split into fixed-size patches; linear embeddings and position embeddings are then applied, and the resulting sequence of vectors is fed to a standard ViT encoder. During classification, an additional learnable classification token associated with the sequence is used. The proposed framework is compared with several handcrafted and deep convolutional neural network (D-CNN) models, revealing that it achieves an accuracy of 98.3%, outperforming the alternatives. Additionally, the superiority of the proposed model is illustrated by comparison with existing state-of-the-art drone-detection methods.
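The ViT front end described in the abstract (fixed-size patches, linear embeddings, position embeddings, and a prepended classification token) can be sketched as follows. This is a generic illustration with random projection weights, not the paper's trained model; the image size, patch size, and embedding dimension are arbitrary choices.

```python
import random

def split_into_patches(img, patch):
    # img: H x W grayscale image (nested lists); returns flattened patches.
    patches = []
    for r in range(0, len(img), patch):
        for c in range(0, len(img[0]), patch):
            patches.append([img[r + i][c + j]
                            for i in range(patch) for j in range(patch)])
    return patches

def embed(patches, dim, rng):
    # Linear projection of each patch (random weights stand in for learned
    # ones), a prepended [CLS] token, and additive position embeddings.
    in_dim = len(patches[0])
    W = [[rng.gauss(0, 0.02) for _ in range(dim)] for _ in range(in_dim)]
    tokens = [[0.0] * dim]                      # learnable [CLS] token (zeros here)
    for p in patches:
        tokens.append([sum(p[i] * W[i][d] for i in range(in_dim))
                       for d in range(dim)])
    pos = [[rng.gauss(0, 0.02) for _ in range(dim)] for _ in range(len(tokens))]
    return [[t[d] + pos[k][d] for d in range(dim)]
            for k, t in enumerate(tokens)]

# An 8x8 "drone image" split into four 4x4 patches gives 4 + 1 = 5 tokens.
img = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
tokens = embed(split_into_patches(img, 4), dim=16, rng=random.Random(0))
```

The resulting token sequence is what a standard ViT encoder consumes; after the encoder, the [CLS] token's output is read off for classification.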
Full article
(This article belongs to the Special Issue Emerging Trends of Deep Learning in AI: Challenges and Methodologies)
Open Access Article
Reinforcement Learning Your Way: Agent Characterization through Policy Regularization
AI 2022, 3(2), 250-259; https://doi.org/10.3390/ai3020015 - 24 Mar 2022
Abstract
The increased complexity of state-of-the-art reinforcement learning (RL) algorithms has resulted in an opacity that inhibits explainability and understanding. This has led to the development of several post hoc explainability methods that aim to extract information from learned policies, thus aiding explainability. These methods rely on empirical observations of the policy, and thus aim to generalize a characterization of agents’ behaviour. In this study, we have instead developed a method to imbue agents’ policies with a characteristic behaviour through regularization of their objective functions. Our method guides the agents’ behaviour during learning, which results in an intrinsic characterization; it connects the learning process with model explanation. We provide a formal argument and empirical evidence for the viability of our method. In future work, we intend to employ it to develop agents that optimize individual financial customers’ investment portfolios based on their spending personalities.
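Regularizing an objective function to imbue a characteristic behaviour can be illustrated on a toy bandit: the agent maximizes expected reward minus a penalty for diverging from a "characteristic" prior policy. This is a minimal sketch under our own assumptions (a KL-divergence regularizer, invented rewards and prior), not the method of the paper.

```python
import math

REWARDS = [1.0, 0.9, 0.1]        # invented expected reward per action
PRIOR   = [0.1, 0.8, 0.1]        # invented "characteristic" behaviour

def softmax(logits):
    m = max(logits)
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    return [v / s for v in e]

def objective(logits, lam):
    # Expected reward minus lam * KL(pi || prior): reward is traded off
    # against staying close to the characteristic behaviour.
    pi = softmax(logits)
    reward = sum(p * r for p, r in zip(pi, REWARDS))
    kl = sum(p * math.log(max(p, 1e-12) / q) for p, q in zip(pi, PRIOR))
    return reward - lam * kl

def optimize(lam, steps=2000, lr=0.1, eps=1e-5):
    # Finite-difference gradient ascent on the regularized objective.
    logits = [0.0, 0.0, 0.0]
    for _ in range(steps):
        base = objective(logits, lam)
        grad = []
        for i in range(3):
            bumped = logits[:]
            bumped[i] += eps
            grad.append((objective(bumped, lam) - base) / eps)
        logits = [l + lr * g for l, g in zip(logits, grad)]
    return softmax(logits)
```

With no regularization the policy concentrates on the highest-reward action; with a strong regularizer it instead favours the action the prior characterizes, making the agent's learned behaviour interpretable by construction.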
Full article
(This article belongs to the Section AI Systems: Theory and Applications)
Open Access Review
Systematic Review of Computer Vision Semantic Analysis in Socially Assistive Robotics
AI 2022, 3(1), 229-249; https://doi.org/10.3390/ai3010014 - 17 Mar 2022
Abstract
The simultaneous surges in research on socially assistive robotics and on computer vision can be seen as a result of the shifting and increasing necessities of our global population, especially regarding social care for the expanding population in need of assistance. The merging of these fields creates demand for more complex and autonomous solutions, which often struggle both with hardware limitations and with the lack of the contextual task understanding that semantic analysis can provide. Solving these issues can provide more comfortable and safer environments for the individuals most in need. This work aimed to understand the current scope of science in the merging fields of computer vision and semantic analysis in lightweight models for robotic assistance. We therefore present a systematic review of visual semantics work concerned with assistive robotics, and we discuss the trends and possible research gaps in those fields. We detail our research protocol, present the state of the art and future trends, and answer five pertinent research questions. Out of 459 articles, 22 works matching the defined scope were selected, rated against eight quality criteria relevant to our search, and discussed in depth. Our results point to an emerging field of research with challenging gaps to be explored by the academic community. Data on study collection by database and year of publication are presented, along with a discussion of methods and datasets. We observe two main trends in current visual semantic analysis methods: contextual data are abstracted to enable automated understanding of tasks, and model compaction metrics are being formalized more clearly.
Full article
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)
Open Access Article
Rule-Enhanced Active Learning for Semi-Automated Weak Supervision
AI 2022, 3(1), 211-228; https://doi.org/10.3390/ai3010013 - 16 Mar 2022
Abstract
A major bottleneck preventing the extension of deep learning systems to new domains is the prohibitive cost of acquiring sufficient training labels. Alternatives such as weak supervision, active learning, and fine-tuning of pretrained models reduce this burden but require substantial human input to select a highly informative subset of instances or to curate labeling functions. REGAL (Rule-Enhanced Generative Active Learning) is an improved framework for weakly supervised text classification that performs active learning over labeling functions rather than individual instances. REGAL interactively creates high-quality labeling patterns from raw text, enabling a single annotator to accurately label an entire dataset after initialization with three keywords per class. Experiments demonstrate that REGAL extracts up to 3 times as many high-accuracy labeling functions from text as current state-of-the-art methods for interactive weak supervision, dramatically reducing the annotation burden of writing labeling functions for weak supervision. Statistical analysis reveals that REGAL performs as well as or significantly better than interactive weak supervision on five of six commonly used natural language processing (NLP) baseline datasets.
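The labeling-function idea at the core of this abstract can be illustrated with a minimal sketch. All names, keywords, and the majority-vote aggregation below are hypothetical illustrations of generic weak supervision, not REGAL's actual implementation:

```python
# Minimal sketch of keyword-based labeling functions for weak supervision.
# Each function maps a document to a class label or ABSTAIN; votes from
# many such functions are aggregated into a training label.

ABSTAIN = -1

def make_keyword_lf(keywords, label):
    """Build a labeling function that fires when any keyword appears."""
    def lf(text):
        lowered = text.lower()
        return label if any(k in lowered for k in keywords) else ABSTAIN
    return lf

# Hypothetical seed keywords (three per class, mirroring the
# three-keyword initialization described in the abstract).
lf_sports = make_keyword_lf(["game", "score", "team"], label=0)
lf_tech = make_keyword_lf(["software", "chip", "startup"], label=1)

def majority_vote(text, lfs):
    """Aggregate non-abstaining votes by simple majority."""
    votes = [lf(text) for lf in lfs if lf(text) != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

print(majority_vote("The team tied the game late", [lf_sports, lf_tech]))  # → 0
```

In practice the vote aggregation is usually a learned label model rather than a simple majority, but the interface is the same: functions over raw text, not labels over individual instances.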
Full article
(This article belongs to the Topic Methods for Data Labelling for Intelligent Systems)
Open Access Article
Abstract Reservoir Computing
AI 2022, 3(1), 194-210; https://doi.org/10.3390/ai3010012 - 10 Mar 2022
Abstract
Noise of any kind can be an issue when translating results from simulations to the real world: we suddenly have to deal with building tolerances, faulty sensors, or simply noisy sensor readings. This is especially evident in systems with many free parameters, such as those used in physical reservoir computing. By abstracting away these noise sources using intervals, we derive a regularized training regime for reservoir computing that operates on sets of possible reservoir states. Numerical simulations demonstrate the effectiveness of our approach against different sources of error that can appear in real-world scenarios and compare it with standard approaches. Our results support the application of interval arithmetic to improve the robustness of mass-spring networks trained in simulations.
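The interval abstraction mentioned in the abstract can be illustrated with a toy sketch (a minimal hand-rolled interval type, not the paper's implementation): a noisy reading x ± e becomes the interval [x − e, x + e], and propagating it through a linear readout yields bounds that cover every noise realization inside the interval.

```python
# Toy interval arithmetic: represent a noisy reading x ± e as [x-e, x+e]
# and propagate it through a linear readout y = w*x + b.

def interval(x, e):
    return (x - e, x + e)

def scale(iv, w):
    lo, hi = (w * iv[0], w * iv[1])
    return (min(lo, hi), max(lo, hi))  # negative weights flip the bounds

def shift(iv, b):
    return (iv[0] + b, iv[1] + b)

# A sensor reading of 2.0 with ±0.5 noise, through y = -3x + 1:
iv = shift(scale(interval(2.0, 0.5), -3.0), 1.0)
print(iv)  # → (-6.5, -3.5)
```

Training against such sets of states, rather than single noisy samples, is what gives the regularizing effect the abstract describes.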
Full article
(This article belongs to the Section AI Systems: Theory and Applications)
Open Access Article
Weight-Quantized SqueezeNet for Resource-Constrained Robot Vacuums for Indoor Obstacle Classification
AI 2022, 3(1), 180-193; https://doi.org/10.3390/ai3010011 - 09 Mar 2022
Abstract
With the rapid development of artificial intelligence (AI) theory, particularly deep learning neural networks, robot vacuums equipped with AI can automatically clean indoor floors by using intelligent programming and vacuuming services. To date, several deep AI models have been proposed to distinguish indoor objects between cleanable litter and noncleanable hazardous obstacles. Unfortunately, these existing deep AI models focus entirely on enhancing classification accuracy, and little effort has been made to minimize their memory size and implementation cost. As a result, they require far more memory space than a typical robot vacuum can provide. To address this shortcoming, this paper aims to find an efficient deep AI model that achieves a good balance between classification accuracy and memory usage (i.e., implementation cost). In this work, we propose a weight-quantized SqueezeNet model for robot vacuums. This model distinguishes indoor cleanable litter from noncleanable hazardous obstacles based on images or video captured by robot vacuums. Furthermore, we collected videos and pictures captured by the built-in cameras of robot vacuums and used them to construct a diverse dataset. The dataset contains 20,000 images with a ground-view perspective of dining rooms, kitchens, and living rooms of various houses under different lighting conditions. Experimental results show that the proposed deep AI model achieves comparable object classification accuracy of around 93% while reducing memory usage by at least 22.5 times. More importantly, the memory footprint required by our AI model is only 0.8 MB, indicating that this model can run smoothly on resource-constrained robot vacuums, where low-end processors or microcontrollers are dedicated to running AI algorithms.
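The memory saving from weight quantization can be illustrated with a minimal sketch of symmetric 8-bit post-training quantization. This is a generic technique for illustration only; the paper's exact quantization scheme may differ:

```python
# Symmetric 8-bit post-training quantization of a weight tensor:
# store int8 codes plus a single float scale instead of 32-bit floats,
# cutting weight memory roughly 4x (further savings in the paper come
# from the compact SqueezeNet architecture itself).

def quantize(weights, bits=8):
    """Map floats to signed integer codes. Assumes a nonzero max weight."""
    qmax = 2 ** (bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from codes and scale."""
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.88]
q, s = quantize(w)
w_hat = dequantize(q, s)
print(q)      # integer codes, one byte each instead of four
print(w_hat)  # reconstruction; rounding error is at most scale/2 per weight
```

Because each code fits in one byte and only the scale is stored in floating point, the stored model shrinks while the forward pass can dequantize on the fly.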
Full article

Open Access Article
DeepSleep 2.0: Automated Sleep Arousal Segmentation via Deep Learning
AI 2022, 3(1), 164-179; https://doi.org/10.3390/ai3010010 - 01 Mar 2022
Abstract
DeepSleep 2.0 is a compact version of DeepSleep, a state-of-the-art, U-Net-inspired, fully convolutional deep neural network, which achieved the highest unofficial score in the 2018 PhysioNet Computing Challenge. The proposed network architecture has a compact encoder/decoder structure containing only 740,551 trainable parameters. The input to the network is a full-length multichannel polysomnographic recording signal. The network has been designed and optimized to efficiently predict nonapnea sleep arousals on held-out test data at a 5 ms resolution level without compromising prediction accuracy. Compared to DeepSleep, the experimental results in terms of gross area under the precision–recall curve (AUPRC) and gross area under the receiver operating characteristic curve (AUROC) suggest that a lightweight architecture achieving similar prediction performance at a lower computational cost is realizable.
Full article
(This article belongs to the Section Medical & Healthcare AI)
Open Access Article
An Artificial Neural Network-Based Approach for Predicting the COVID-19 Daily Effective Reproduction Number Rt in Italy
AI 2022, 3(1), 146-163; https://doi.org/10.3390/ai3010009 - 26 Feb 2022
Abstract
Since December 2019, the novel coronavirus disease (COVID-19) has had a considerable impact on the health and socio-economic fabric of Italy. The effective reproduction number Rt is one of the most representative indicators of the contagion status as it reports the number of new infections caused by an infected subject in a partially immunized population. The task of predicting Rt values forward in time is challenging and, historically, it has been addressed by exploiting compartmental models or statistical frameworks. The present study proposes an Artificial Neural Networks-based approach to predict the Rt temporal trend at a daily resolution. For each Italian region and autonomous province, 21 daily COVID-19 indicators were exploited for the 7-day ahead prediction of the Rt trend by means of different neural network architectures, i.e., Feed Forward, Mono-Dimensional Convolutional, and Long Short-Term Memory. Focusing on Lombardy, which is one of the most affected regions, the predictions proved to be very accurate, with a minimum Root Mean Squared Error (RMSE) ranging from 0.035 at day t + 1 to 0.106 at day t + 7. Overall, the results show that it is possible to obtain accurate forecasts in Italy at a daily temporal resolution instead of the weekly resolution characterizing the official Rt data.
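The supervised setup behind a 7-day-ahead forecast like this can be sketched as a sliding-window transformation of the daily series. The window length and the toy series below are hypothetical; the paper uses 21 daily indicators per region rather than the single series shown here:

```python
# Turn a daily Rt series into supervised (window, horizon) pairs:
# each sample uses the past `window` days to predict the next `horizon`
# days, the input/output shape consumed by feed-forward, convolutional,
# or LSTM forecasters alike.

def make_windows(series, window=14, horizon=7):
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])          # past observations
        y.append(series[t + window:t + window + horizon])  # days t+1..t+7
    return X, y

rt = [1.0 + 0.01 * d for d in range(30)]   # toy monotone Rt series
X, y = make_windows(rt)
print(len(X), len(X[0]), len(y[0]))  # → 10 14 7
```

Each target vector holds the seven daily Rt values to predict, so per-day errors such as the RMSE at t + 1 versus t + 7 reported in the abstract fall out naturally from comparing positions within y.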
Full article
(This article belongs to the Section Medical & Healthcare AI)
News
29 April 2022
Meet Us at the 30th Mediterranean Conference on Control and Automation (MED 2022), 28 June–1 July 2022, Athens, Greece

3 December 2021
The 2nd International Electronic Conference on Healthcare (IECH2022)—Submissions Deadline Extension

Topics
Topic in Sustainability, Systems, AI, Digital, IoT
Toward the New Era of Sustainable Design, Manufacturing and Management
Topic Editors: Yoshiki Shimomura, Shigeru Hosono; Deadline: 30 June 2022

Topic in AI, Cancers, Current Oncology, Diagnostics, Onco
Artificial Intelligence in Cancer Diagnosis and Therapy
Topic Editors: Hamid Khayyam, Ali Madani, Rahele Kafieh, Ali Hekmatnia; Deadline: 20 December 2022

Topic in AI, Alloys, Applied Sciences, Materials, Metals
Hybrid Computational Methods in Materials Engineering
Topic Editors: Wojciech Sitek, Jacek Trzaska, Imre Felde; Deadline: 30 January 2023

Topic in Entropy, Applied Sciences, Healthcare, J. Imaging, Computers, BDCC, AI
Recent Trends in Image Processing and Pattern Recognition
Topic Editors: KC Santosh, Ayush Goyal, Djamila Aouada, Aaisha Makkar, Yao-Yi Chiang, Satish Kumar Singh; Deadline: 22 April 2023

Conferences
Special Issues
Special Issue in AI
Emerging Trends of Deep Learning in AI: Challenges and Methodologies
Guest Editors: Arunabha Mohan Roy, Jayabrata Bhaduri; Deadline: 31 July 2022

Special Issue in AI
Computer-Aided Diagnosis
Guest Editors: José Manuel Ferreira Machado, Hugo Peixoto; Deadline: 31 August 2022

Special Issue in AI
Developments in Transfer Learning
Guest Editor: Sourav Sen; Deadline: 5 October 2022

Special Issue in AI
Applied Artificial Intelligence in Cyber Security: Theory, Practices and Applications
Guest Editors: Tushar Bhardwaj, Upadhyay Himanshu, Alexander Perez-Pons; Deadline: 31 October 2022