AI, Volume 1, Issue 2 (June 2020) – 15 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
17 pages, 506 KiB  
Article
Detecting and Classifying Pests in Crops Using Proximal Images and Machine Learning: A Review
by Jayme Garcia Arnal Barbedo
AI 2020, 1(2), 312-328; https://doi.org/10.3390/ai1020021 - 24 Jun 2020
Cited by 39 | Viewed by 11172
Abstract
Pest management is among the most important activities on a farm. Monitoring all the different species visually may not be effective, especially on large properties. Accordingly, considerable research effort has been spent on the development of effective ways to remotely monitor potential infestations. A growing number of solutions combine proximal digital images with machine learning techniques, but since the species and conditions associated with each study vary considerably, it is difficult to draw a realistic picture of the actual state of the art on the subject. In this context, the objectives of this article are (1) to briefly describe some of the most relevant investigations on the subject of automatic pest detection using proximal digital images and machine learning; (2) to provide a unified overview of the research carried out so far, with special emphasis on the research gaps that still linger; and (3) to propose some possible targets for future research.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

13 pages, 1379 KiB  
Article
Enhanced Hyperbox Classifier Model for Nanomaterial Discovery
by Jose Isagani B. Janairo, Kathleen B. Aviso, Michael Angelo B. Promentilla and Raymond R. Tan
AI 2020, 1(2), 299-311; https://doi.org/10.3390/ai1020020 - 17 Jun 2020
Cited by 8 | Viewed by 3846
Abstract
Machine learning tools can be applied to peptide-mediated biomineralization, which is an emerging biomimetic technique of creating functional nanomaterials. In particular, they can be used for the discovery of biomineralization peptides, which currently relies on combinatorial enumeration approaches. In this work, an enhanced hyperbox classifier is developed which can predict if a given peptide sequence has a strong or weak binding affinity towards a gold surface. A mixed-integer linear program is formulated to generate the rule-based classification model. The classifier is optimized to account for false positives and false negatives, and clearly articulates how the classification decision is made. This feature makes the decision-making process transparent, and the results easy to interpret for decision support. The method developed can help accelerate the discovery of more biomineralization peptide sequences, which may expand the utility of peptide-mediated biomineralization as a means for nanomaterial synthesis.
(This article belongs to the Section Chemical Artificial Intelligence)
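
To make the classification rule concrete, the sketch below shows how a single learned hyperbox can be applied to a peptide's descriptor vector: a sequence is labelled a strong binder only when every descriptor falls inside the box. This is a minimal illustration only; the paper's contribution is learning the box bounds with a mixed-integer linear program, which is not reproduced here, and the feature bounds and descriptor values below are hypothetical.

```python
import numpy as np

# Minimal sketch of hyperbox classification (illustrative bounds, not the
# paper's MILP-learned model): a sample is labelled "strong binder" when
# every descriptor falls inside the box, otherwise "weak binder".
lower = np.array([0.2, 1.5, 0.0])   # hypothetical per-feature lower bounds
upper = np.array([0.8, 3.0, 0.6])   # hypothetical per-feature upper bounds

def classify(x, lo=lower, hi=upper):
    """Return 'strong' if x lies inside the hyperbox [lo, hi], else 'weak'."""
    inside = np.all((x >= lo) & (x <= hi))
    return "strong" if inside else "weak"

peptide_descriptors = np.array([0.5, 2.1, 0.3])  # placeholder descriptor vector
print(classify(peptide_descriptors))             # -> "strong"
```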

13 pages, 8936 KiB  
Article
Just Don’t Fall: An AI Agent’s Learning Journey Towards Posture Stabilisation
by Mohammed Hossny and Julie Iskander
AI 2020, 1(2), 286-298; https://doi.org/10.3390/ai1020019 - 15 Jun 2020
Cited by 5 | Viewed by 6648
Abstract
Learning to maintain postural balance while standing requires significant, fine coordination between the neuromuscular system and the sensory system. It is one of the key contributing factors towards fall prevention, especially in the older population. Using artificial intelligence (AI), we can similarly teach an agent to maintain a standing posture, and thus teach the agent not to fall. In this paper, we investigate the learning progress of an AI agent and how it maintains a stable standing posture through reinforcement learning. We used the Deep Deterministic Policy Gradient (DDPG) method and the OpenSim musculoskeletal simulation environment based on OpenAI Gym. During training, the AI agent learnt three policies. First, it learnt to maintain the Centre-of-Gravity and Zero-Moment-Point in front of the body. Then, it learnt to shift the load of the entire body onto one leg while using the other leg for fine-tuning the balancing action. Finally, it started to learn the coordination between the two pre-trained policies. This study shows the potential of using deep reinforcement learning in human movement studies. The learnt AI behaviour also exhibited attempts to achieve an unplanned goal because it correlated with the set goal (e.g., walking in order to prevent falling). The failed attempts to maintain a standing posture are an interesting by-product which can enrich fall detection and prevention research efforts.
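
As a rough sketch of the training setup described above, the snippet below trains a DDPG agent with stable-baselines3 on a standard continuous-control Gym task. The paper's OpenSim musculoskeletal environment (built on OpenAI Gym) is swapped for Pendulum-v1 so the example is self-contained, and the hyperparameters are illustrative rather than the authors' settings.

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import DDPG
from stable_baselines3.common.noise import NormalActionNoise

# Sketch of a DDPG training loop. The paper's OpenSim musculoskeletal
# environment is replaced here by Pendulum-v1 so the example is self-contained;
# hyperparameters are illustrative, not taken from the paper.
env = gym.make("Pendulum-v1")
n_actions = env.action_space.shape[0]
action_noise = NormalActionNoise(mean=np.zeros(n_actions),
                                 sigma=0.1 * np.ones(n_actions))

model = DDPG("MlpPolicy", env, action_noise=action_noise, verbose=0)
model.learn(total_timesteps=10_000)  # a balancing task would need far more steps

# Roll out the learnt policy for one episode.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```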

10 pages, 1286 KiB  
Article
Artificial Intelligence Algorithms for Discovering New Active Compounds Targeting TRPA1 Pain Receptors
by Dragos Paul Mihai, Cosmin Trif, Gheorghe Stancov, Denise Radulescu and George Mihai Nitulescu
AI 2020, 1(2), 276-285; https://doi.org/10.3390/ai1020018 - 11 Jun 2020
Cited by 5 | Viewed by 3539
Abstract
Transient receptor potential ankyrin 1 (TRPA1) is a ligand-gated calcium channel activated by cold temperatures and by a plethora of electrophilic environmental irritants (allicin, acrolein, mustard oil) and endogenously oxidized lipids (15-deoxy-Δ12,14-prostaglandin J2 and 5,6-epoxyeicosatrienoic acid). These oxidized lipids work as agonists, making TRPA1 a key player in inflammatory and neuropathic pain. TRPA1 antagonists acting as non-central pain blockers are a promising choice for the future treatment of pain-related conditions, having advantages over current therapeutic choices. A large variety of in silico methods have been used in drug design to speed up the development of new active compounds, such as molecular docking, quantitative structure-activity relationship (QSAR) models, and machine learning classification algorithms. Artificial intelligence methods can significantly improve the drug discovery process and represent an attractive field that can bring together computer scientists and experts in drug development. In our paper, we aimed to develop three machine learning algorithms frequently used in drug discovery research: feedforward neural networks (FFNN), random forests (RF), and support vector machines (SVM), for discovering novel TRPA1 antagonists. All three machine learning methods used the same class of independent variables (multilevel neighborhoods of atoms descriptors) as the prediction of activity spectra for substances (PASS) software. The model with the highest accuracy and the best performance metrics was the random forest algorithm, showing 99% accuracy and 0.9936 ROC AUC. Thus, our study emphasized that simpler and more robust machine learning algorithms such as random forests perform better in correctly classifying TRPA1 antagonists, since the dimension of the dependent-variable dataset is relatively modest.
(This article belongs to the Special Issue AI in Drug Design)
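
A minimal sketch of the best-performing branch of the workflow, assuming a scikit-learn random forest: synthetic features stand in for the multilevel neighborhoods of atoms (MNA) descriptors, so the accuracy and ROC AUC printed here will not match the paper's 99% and 0.9936.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic features stand in for the MNA descriptors used in the paper;
# labels mark active vs. inactive TRPA1 antagonists. Values are illustrative.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=20,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, proba))
```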

13 pages, 3357 KiB  
Article
Improving Daily Peak Flow Forecasts Using Hybrid Fourier-Series Autoregressive Integrated Moving Average and Recurrent Artificial Neural Network Models
by Mohammad Ebrahim Banihabib, Reihaneh Bandari and Mohammad Valipour
AI 2020, 1(2), 263-275; https://doi.org/10.3390/ai1020017 - 7 Jun 2020
Cited by 12 | Viewed by 3827
Abstract
In multi-purpose reservoirs, achieving optimal operation requires sophisticated models that forecast reservoir inflow over both short and long horizons with acceptable accuracy, particularly for peak flows. In this study, an auto-regressive hybrid model is proposed for long-horizon forecasting of daily reservoir inflow. The model is examined for one-year-horizon forecasting of a highly oscillating daily flow time series. First, a Fourier-Series Filtered Autoregressive Integrated Moving Average (FSF-ARIMA) model is applied to forecast the linear behavior of the daily flow time series. Second, a Recurrent Artificial Neural Network (RANN) model is utilized to forecast the FSF-ARIMA model's residuals. The hybrid model follows the details of the observed flow variation and forecasts peak flows more accurately than previous models. The proposed model enhances the ability to forecast reservoir inflow, especially peak flows, compared to previous linear and nonlinear auto-regressive models. The hybrid model has the potential to decrease the maximum and average forecasting errors by 81% and 80%, respectively. The results of this investigation are useful for stakeholders and water resources managers in scheduling the optimum operation of multi-purpose reservoirs for controlling floods and generating hydropower.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
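
The hybrid idea, sketched under stated substitutions: fit a linear time-series model to the daily flow, fit a nonlinear learner to its residuals, and sum the two forecasts. Below, a plain statsmodels ARIMA stands in for the FSF-ARIMA (the Fourier-series filtering step is omitted) and an MLP on lagged residuals stands in for the recurrent ANN; the flow series is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily-flow-like series (seasonal signal plus noise) as a placeholder.
rng = np.random.default_rng(0)
t = np.arange(730)
flow = 50 + 20 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 3, t.size)

# Step 1: linear component. A plain ARIMA stands in for the paper's FSF-ARIMA
# (the Fourier-series filtering step is omitted here).
arima = ARIMA(flow, order=(2, 1, 2)).fit()
resid = arima.resid

# Step 2: nonlinear component fitted to lagged residuals. An MLP stands in for
# the recurrent ANN used in the paper.
lag = 7
X = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)])
y = resid[lag:]
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# Hybrid one-step-ahead forecast = linear forecast + predicted residual.
linear_fc = arima.forecast(steps=1)[0]
resid_fc = mlp.predict(resid[-lag:].reshape(1, -1))[0]
print("hybrid forecast:", linear_fc + resid_fc)
```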

21 pages, 3244 KiB  
Article
Sieve: An Ensemble Algorithm Using Global Consensus for Binary Classification
by Chongya Song, Alexander Pons and Kang Yen
AI 2020, 1(2), 242-262; https://doi.org/10.3390/ai1020016 - 26 May 2020
Cited by 1 | Viewed by 3739
Abstract
In the field of machine learning, an ensemble approach is often utilized as an effective means of improving the accuracy of multiple weak base classifiers. A concern associated with these ensemble algorithms is that they can suffer from the Curse of Conflict, where a classifier’s true prediction is negated by another classifier’s false prediction during the consensus period. Another concern with the ensemble technique is that it cannot effectively mitigate the problem of Imbalanced Classification, where an ensemble classifier usually presents a similar magnitude of bias towards the same class as its imbalanced base classifiers. We proposed an improved ensemble algorithm called “Sieve” that overcomes the aforementioned shortcomings through the establishment of the novel concept of Global Consensus. The proposed Sieve ensemble approach was benchmarked against various ensemble classifiers trained using different ensemble algorithms with the same base classifiers. The results demonstrate that better accuracy and stability were achieved.
(This article belongs to the Section AI Systems: Theory and Applications)
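
To illustrate the consensus step that the Curse of Conflict concerns, the sketch below shows a generic thresholded-agreement rule for a binary voting ensemble: the majority label is accepted only when agreement is strong, otherwise the most reliable base classifier decides. This is not the Sieve algorithm or its Global Consensus mechanism, only a simplified illustration of the problem setting.

```python
import numpy as np

# Generic illustration of a consensus step in a binary voting ensemble
# (NOT the Sieve algorithm itself): accept the majority label only when the
# agreement level clears a threshold; otherwise defer to the base classifier
# with the best validation accuracy.
def consensus_predict(base_preds, reliabilities, threshold=0.75):
    """base_preds: (n_classifiers, n_samples) array of 0/1 predictions."""
    votes = base_preds.mean(axis=0)                 # fraction voting for class 1
    agreement = np.maximum(votes, 1 - votes)        # strength of the majority
    majority = (votes >= 0.5).astype(int)
    fallback = base_preds[np.argmax(reliabilities)] # most reliable classifier
    return np.where(agreement >= threshold, majority, fallback)

preds = np.array([[1, 0, 1, 1],       # classifier A
                  [1, 1, 0, 1],       # classifier B
                  [0, 0, 1, 1]])      # classifier C
print(consensus_predict(preds, reliabilities=[0.80, 0.92, 0.75]))
```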

13 pages, 3749 KiB  
Article
Carrot Yield Mapping: A Precision Agriculture Approach Based on Machine Learning
by Marcelo Chan Fu Wei, Leonardo Felipe Maldaner, Pedro Medeiros Netto Ottoni and José Paulo Molin
AI 2020, 1(2), 229-241; https://doi.org/10.3390/ai1020015 - 23 May 2020
Cited by 40 | Viewed by 7372
Abstract
Carrot yield maps are an essential tool in supporting decision makers in improving their agricultural practices, but they are unconventional and not easy to obtain. The objective was to develop a method to generate a carrot yield map applying a random forest (RF) regression algorithm on a database composed of satellite spectral data and carrot ground-truth yield sampling. Georeferenced carrot yield sampling was carried out and satellite imagery was obtained during crop development. The entire dataset was split into training and test sets. The Gini index was used to find the five most important predictor variables of the model. Statistical parameters used to evaluate model performance were the root mean squared error (RMSE), coefficient of determination (R²) and mean absolute error (MAE). The five most important predictor variables were the near-infrared spectral band at 92 and 79 days after sowing (DAS), green spectral band at 50 DAS and blue spectral band at 92 and 81 DAS. The RF algorithm applied to the entire dataset presented R², RMSE and MAE values of 0.82, 2.64 Mg ha⁻¹ and 1.74 Mg ha⁻¹, respectively. The method based on RF regression applied to a database composed of spectral bands proved to be accurate and suitable to predict carrot yield.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
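
A minimal scikit-learn sketch of the described workflow: random-forest regression on spectral-band predictors, impurity-based importances in place of the Gini ranking, and RMSE, R² and MAE on a held-out set. The data below are synthetic stand-ins for the satellite bands and ground-truth yields, so the scores will not match the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the spectral-band predictors (columns = band/date
# combinations) and the ground-truth carrot yield in Mg ha^-1.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = 40 + 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(0, 2, 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

pred = rf.predict(X_test)
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("R2  :", r2_score(y_test, pred))
print("MAE :", mean_absolute_error(y_test, pred))

# Impurity-based importances play the role of the Gini ranking used in the
# paper to select the most important band/date predictors.
top5 = np.argsort(rf.feature_importances_)[::-1][:5]
print("top-5 predictor columns:", top5)
```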

20 pages, 4741 KiB  
Article
Image Collection Summarization Method Based on Semantic Hierarchies
by Zahra Riahi Samani and Mohsen Ebrahimi Moghaddam
AI 2020, 1(2), 209-228; https://doi.org/10.3390/ai1020014 - 18 May 2020
Cited by 3 | Viewed by 4640
Abstract
The size of internet image collections is increasing drastically. As a result, new techniques are required to facilitate users in browsing, navigating, and summarizing these large-volume collections. Image collection summarization methods present users with a set of exemplar images as the most representative ones from the initial image collection. In this study, an image collection summarization technique was introduced based on the semantic hierarchies among the images. In the proposed approach, a semantic hierarchical classifier was used to map images to the nodes of a pre-defined domain ontology. We made a compromise between the degree of freedom of the classifier and the quality of the summarization method. The summarization was done using a group of high-level features that provide a semantic measurement of the information in images. Experimental outcomes indicated that the introduced image collection summarization method outperformed recent techniques for the summarization of image collections.
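
The ontology mapping and semantic hierarchical classifier are the paper's contribution and are not reproduced here; the sketch below only illustrates the generic exemplar-selection step with a greedy k-center heuristic over image feature vectors, one simple way to pick a representative subset.

```python
import numpy as np

def greedy_k_center(features, k):
    """Pick k exemplar indices so every image is close to some exemplar
    (greedy k-center heuristic on feature vectors)."""
    exemplars = [0]                                  # seed with the first image
    dist = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                   # farthest image so far
        exemplars.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return exemplars

# Random vectors stand in for per-image semantic features; in the paper these
# would come from the ontology-based hierarchical classifier.
rng = np.random.default_rng(0)
image_features = rng.normal(size=(500, 64))
print(greedy_k_center(image_features, k=10))
```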

11 pages, 4496 KiB  
Article
A Study on CNN-Based Detection of Psyllids in Sticky Traps Using Multiple Image Data Sources
by Jayme Garcia Arnal Barbedo and Guilherme Barros Castro
AI 2020, 1(2), 198-208; https://doi.org/10.3390/ai1020013 - 18 May 2020
Cited by 8 | Viewed by 3929
Abstract
Deep learning architectures like Convolutional Neural Networks (CNNs) are quickly becoming the standard for detecting and counting objects in digital images. However, most of the experiments found in the literature train and test the neural networks using data from a single image source, making it difficult to infer how the trained models would perform in more diverse contexts. The objective of this study was to assess the robustness of models trained using data from a varying number of sources. Nine different devices were used to acquire images of yellow sticky traps containing psyllids and a wide variety of other objects, with each model being trained and tested using different data combinations. The results from the experiments were used to draw several conclusions about how the training process should be conducted and how the robustness of the trained models is influenced by data quantity and variety.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
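
A sketch of the cross-source evaluation pattern behind the study, under heavy simplification: train on images from all but one acquisition device and test on the held-out device. Synthetic feature vectors and a random forest stand in for the sticky-trap images and CNNs actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: feature vectors and labels grouped by acquisition device.
rng = np.random.default_rng(0)
n_devices, n_per_device = 9, 200
X = rng.normal(size=(n_devices * n_per_device, 32))
y = rng.integers(0, 2, size=n_devices * n_per_device)      # psyllid / not psyllid
device = np.repeat(np.arange(n_devices), n_per_device)

# Leave-one-source-out: train on 8 devices, test on the held-out one. A random
# forest stands in for the CNNs used in the study.
for held_out in range(n_devices):
    train, test = device != held_out, device == held_out
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train], y[train])
    acc = accuracy_score(y[test], clf.predict(X[test]))
    print(f"held-out device {held_out}: accuracy {acc:.2f}")
```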

6 pages, 230 KiB  
Viewpoint
Cities of the Future? The Potential Impact of Artificial Intelligence
by Eva Kassens-Noor and Arend Hintze
AI 2020, 1(2), 192-197; https://doi.org/10.3390/ai1020012 - 13 May 2020
Cited by 15 | Viewed by 9999
Abstract
Artificial intelligence (AI), like many revolutionary technologies in human history, will have a profound impact on societies. In this viewpoint, we analyze the combined effects of AI to raise important questions about the future form and function of cities. Combining knowledge from computer science, urban planning, and economics, while reflecting on academic and business perspectives, we propose that the future of cities is far from determined, and that cities may evolve into ghost towns if the deployment of AI is not carefully controlled. This viewpoint presents a fundamentally different argument, because it expresses a real concern over the future of cities, in contrast to the many publications that exclusively assume city populations will increase, predicated on the neoliberal urban growth paradigm that has for centuries attracted humans to cities in search of work.
(This article belongs to the Special Issue Artificial Intelligence in the Smart Everything and Everywhere Era)
12 pages, 746 KiB  
Review
Implementation of Artificial Intelligence (AI): A Roadmap for Business Model Innovation
by Wiebke Reim, Josef Åström and Oliver Eriksson
AI 2020, 1(2), 180-191; https://doi.org/10.3390/ai1020011 - 3 May 2020
Cited by 62 | Viewed by 33882
Abstract
Technical advancements within the field of artificial intelligence (AI) lead towards the development of human-like machines, able to operate autonomously and mimic our cognitive behavior. The progress and interest among managers, academics and the public have created hype across many industries, and many firms are investing heavily to capitalize on the technology through business model innovation. However, managers are left with little support from academia when aiming to implement AI in their firms’ operations, which leads to an increased risk of project failure and unwanted results. This paper aims to provide a deeper understanding of AI and how it can be used as a catalyst for business model innovation. Due to the increasing range and variety of the available published material, a literature review has been performed to gather current knowledge on AI business model innovation. The results are presented in a roadmap to guide the implementation of AI into firms’ operations. Our findings suggest four steps when implementing AI: (1) understand AI and the organizational capabilities needed for digital transformation; (2) understand the current business model (BM), the potential for business model innovation (BMI), and the firm’s role in the business ecosystem; (3) develop and refine the capabilities needed to implement AI; and (4) reach organizational acceptance and develop internal competencies.
(This article belongs to the Special Issue Artificial Intelligence in Customer-Facing Industries)

14 pages, 2148 KiB  
Article
Deep Learning Based Wildfire Event Object Detection from 4K Aerial Images Acquired by UAS
by Ziyang Tang, Xiang Liu, Hanlin Chen, Joseph Hupy and Baijian Yang
AI 2020, 1(2), 166-179; https://doi.org/10.3390/ai1020010 - 27 Apr 2020
Cited by 25 | Viewed by 5721
Abstract
Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfires due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows for the identification of ground-based hazards such as spot fires and fire lines, and for communicating this information to fire fighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of big labeled data and the significant increase of computing power, deep learning has seen great successes on object detection with fixed patterns, such as people and vehicles. However, little has been done for objects, such as spot fires, with amorphous and irregular shapes. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; a viable solution must provide reasonable accuracy with low delay. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled videos to be shared for public use. We introduce a coarse-to-fine framework to auto-detect wildfires that are sparse, small, and irregularly shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector scrutinizes only those sub-regions rather than the entire 4K frame. The proposed two-phase learning therefore greatly reduces the time overhead and is capable of maintaining high accuracy. Compared against the real-time one-stage object detection backbone YoloV3, the proposed methods improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experimental results.
(This article belongs to the Section AI in Autonomous Systems)
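
A minimal sketch of the coarse-to-fine idea on a 4K frame: tile the image, score each tile with a cheap coarse pass, and run the expensive detector only on the top-scoring tiles. The scoring heuristic and the fine detector below are placeholders, not the paper's trained networks.

```python
import numpy as np

TILE = 480  # tile size; a 3840x2160 frame gives an 8x5 grid (last row clipped)

def coarse_score(tile):
    """Cheap placeholder score: mean red-channel intensity as a 'fire-likeness'
    proxy. The paper uses a learned coarse detector instead."""
    return float(tile[..., 0].mean())

def fine_detect(tile):
    """Placeholder for the expensive fine detector (e.g., a YoloV3-style model)."""
    return []  # would return bounding boxes for spot fires

frame = np.random.randint(0, 256, size=(2160, 3840, 3), dtype=np.uint8)

# Coarse pass: score every tile, keep only the most promising ones.
tiles = [(r, c) for r in range(0, 2160, TILE) for c in range(0, 3840, TILE)]
scores = {rc: coarse_score(frame[rc[0]:rc[0]+TILE, rc[1]:rc[1]+TILE]) for rc in tiles}
top = sorted(scores, key=scores.get, reverse=True)[:6]

# Fine pass: run the detector only on the selected sub-regions, not the full frame.
detections = {rc: fine_detect(frame[rc[0]:rc[0]+TILE, rc[1]:rc[1]+TILE]) for rc in top}
print("tiles sent to fine detector:", top)
```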

10 pages, 800 KiB  
Letter
Artificial Intelligence (AI) Provided Early Detection of the Coronavirus (COVID-19) in China and Will Influence Future Urban Health Policy Internationally
by Zaheer Allam, Gourav Dey and David S. Jones
AI 2020, 1(2), 156-165; https://doi.org/10.3390/ai1020009 - 13 Apr 2020
Cited by 118 | Viewed by 20951
Abstract
Predictive computing tools are increasingly being used and have demonstrated success in providing insights that can lead to better health policy and management. However, as these technologies are still in their infancy, slow progress is being made in their adoption for serious consideration at national and international policy levels. A recent case, however, evidences that Artificial Intelligence (AI)-driven algorithms are gaining in precision: AI modelling by companies such as BlueDot and Metabiota anticipated the Coronavirus (COVID-19) outbreak in China, scouting both its impact and its spread, before it caught the world by surprise in late 2019. Drawing on a survey of viral outbreaks over the last 20 years, this paper explores how the time needed for early viral detection will decrease as computing technology is enhanced and as more data communication and shared libraries are established between varying data information systems. For this enhanced data sharing to take place, efficient data protocols have to be enforced to ensure that data are shared across networks and systems while ensuring privacy and preventing oversight, especially in the case of medical data. This will render enhanced AI predictive tools which will influence future urban health policy internationally.
(This article belongs to the Section Medical & Healthcare AI)

13 pages, 246 KiB  
Article
Artificial Intelligence (AI) or Intelligence Augmentation (IA): What Is the Future?
by Hossein Hassani, Emmanuel Sirimal Silva, Stephane Unger, Maedeh TajMazinani and Stephen Mac Feely
AI 2020, 1(2), 143-155; https://doi.org/10.3390/ai1020008 - 12 Apr 2020
Cited by 87 | Viewed by 32462
Abstract
Artificial intelligence (AI) is a rapidly growing technological phenomenon that all industries wish to exploit to benefit from efficiency gains and cost reductions. At the macrolevel, AI appears to be capable of replacing humans by undertaking intelligent tasks that were once limited to the human mind. However, another school of thought suggests that instead of being a replacement for the human mind, AI can be used for intelligence augmentation (IA). Accordingly, our research seeks to address these different views, their implications, and potential risks in an age of increased artificial awareness. We show that the ultimate goal of humankind is to achieve IA through the exploitation of AI. Moreover, we articulate the urgent need for ethical frameworks that define how AI should be used to trigger the next level of IA.
2 pages, 143 KiB  
Editorial
AI: A New Open Access Journal for Artificial Intelligence
by Kenji Suzuki
AI 2020, 1(2), 141-142; https://doi.org/10.3390/ai1020007 - 26 Mar 2020
Cited by 2 | Viewed by 4763
Abstract
As a branch of computer science, artificial intelligence (AI) attempts to understand the essence of intelligence, and produce new kinds of intelligent machines that can respond in a similar way to human intelligence, with broad research areas of machine and deep learning, data science, reinforcement learning, data mining, knowledge discovery, knowledge reasoning, speech recognition, natural language processing, language recognition, image recognition, computer vision, planning, robotics, gaming, and so on [...]