Algorithmic Aspects of Neural Networks

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 11775

Special Issue Editors


Prof. Dr. Simone Faro
Guest Editor
Department of Mathematics and Computer Science, University of Catania, I-95125 Catania, Italy
Interests: algorithms on strings; algorithms on graphs; bioinformatics

Dr. Alessio Plebe
Guest Editor
Department of Cognitive Science, University of Messina, I-98122 Messina, Italy
Interests: cognitive science; philosophy of science; neural computation; moral cognition

Dr. Arianna Pavone
Guest Editor
Department of Cognitive Science, University of Messina, I-98122 Messina, Italy
Interests: cognitive science; philosophy of science; natural language processing; text processing

Special Issue Information

The recent rapid resurgence of Artificial Intelligence, after several decades of unsatisfactory progress, is due to a family of algorithms collected under the term Deep Learning. The remarkable success of deep learning was largely unexpected, because it contains few substantial innovations: it derives from artificial neural networks, a field that was stagnating at the beginning of this century. This surprise has motivated investigation into the algorithmic aspects that can explain why deep learning works so well, and so much better than earlier neural networks. This Special Issue collects early results of this research.

Prof. Dr. Simone Faro
Dr. Alessio Plebe
Dr. Arianna Pavone
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Neural networks
  • Deep learning
  • Learning mechanisms

Published Papers (4 papers)


Research

15 pages, 279 KiB  
Article
How Neurons in Deep Models Relate with Neurons in the Brain
by Arianna Pavone and Alessio Plebe
Algorithms 2021, 14(9), 272; https://doi.org/10.3390/a14090272 - 17 Sep 2021
Cited by 3 | Viewed by 1935
Abstract
In dealing with the algorithmic aspects of intelligent systems, the analogy with the biological brain has always been attractive and has often served a dual function. On the one hand, it has been an effective source of inspiration for their design; on the other, it has been used to justify their success, especially in the case of Deep Learning (DL) models. In recent years, however, inspiration from the brain has lost its grip on the first role, yet it continues to be proposed in the second, although we believe this, too, is becoming less and less defensible. Against this consensus, there are theoretical proposals that instead identify important lines of demarcation between DL and human cognition, to the point of incommensurability. In this article we argue that, paradoxically, the partial indifference of the developers of deep neural models to the functioning of biological neurons is one of the reasons for their success, having promoted a pragmatically opportunistic attitude. We believe it is even possible to glimpse a biological analogy of a different kind: the essentially heuristic way of proceeding in modern DL development bears intriguing similarities to natural evolution.
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
13 pages, 22083 KiB  
Article
SR-Inpaint: A General Deep Learning Framework for High Resolution Image Inpainting
by Haoran Xu, Xinya Li, Kaiyi Zhang, Yanbai He, Haoran Fan, Sijiang Liu, Chuanyan Hao and Bo Jiang
Algorithms 2021, 14(8), 236; https://doi.org/10.3390/a14080236 - 10 Aug 2021
Cited by 3 | Viewed by 4739
Abstract
Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods can handle only low-resolution inputs, typically below 1K. As Internet transmission capacity and mobile device cameras improve, the resolution of image and video sources available to users, via the cloud or locally, keeps increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of the shrunk image, yielding a blurry output. There is thus an urgent need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. Hence, we propose a general deep learning framework for high-resolution image inpainting that first hallucinates a semantically continuous, blurred result using low-resolution inpainting, keeping computational overhead low, and then reconstructs the sharp high-frequency details at the original resolution using super-resolution refinement. Experimentally, our method achieves convincing inpainting quality on 2K and 4K resolution images, ahead of the state-of-the-art high-resolution inpainting techniques. We expect this framework to be adopted for high-resolution image editing tasks on personal computers and mobile devices.
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
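The two-stage pipeline from the abstract can be sketched in plain NumPy. This is not the authors' implementation: a simple mean fill stands in for the learned low-resolution inpainting network, and nearest-neighbour enlargement stands in for the super-resolution refinement; all function names here are illustrative.

```python
import numpy as np

def lowres_inpaint(img, mask):
    # Placeholder for a learned low-resolution inpainting network:
    # fill masked pixels with the mean of the known pixels.
    out = img.copy()
    out[mask] = img[~mask].mean()
    return out

def downsample(img, factor):
    # Average-pool by `factor` (stand-in for bicubic shrinking).
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    # Nearest-neighbour enlargement (stand-in for a learned SR network).
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def sr_inpaint(img, mask, factor=4):
    # Stage 1: inpaint a shrunk copy, giving a semantically plausible
    # but blurry fill at low computational cost.
    lr = downsample(img, factor)
    lr_mask = downsample(mask.astype(float), factor) > 0.5
    lr_filled = lowres_inpaint(lr, lr_mask)
    # Stage 2: bring the fill back to full resolution and keep the
    # known pixels untouched; a real system would sharpen the
    # high-frequency detail here with super-resolution refinement.
    coarse = upsample(lr_filled, factor)
    out = img.copy()
    out[mask] = coarse[mask]
    return out
```

The point of the structure is that the expensive inpainting runs only on the shrunk image, so the cost is governed by the low resolution rather than the 2K/4K original.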

23 pages, 3403 KiB  
Article
Synthetic Experiences for Accelerating DQN Performance in Discrete Non-Deterministic Environments
by Wenzel Pilar von Pilchau, Anthony Stein and Jörg Hähner
Algorithms 2021, 14(8), 226; https://doi.org/10.3390/a14080226 - 27 Jul 2021
Cited by 4 | Viewed by 1993
Abstract
State-of-the-art deep reinforcement learning algorithms such as DQN and DDPG use a replay buffer known as Experience Replay, which by default contains only the experiences gathered at runtime. We propose a method called Interpolated Experience Replay that uses the stored (real) transitions to create synthetic ones that assist the learner. In this first approach, we limit ourselves to discrete, non-deterministic environments and use a simple, equally weighted average of the reward in combination with the observed follow-up states. We demonstrate a significantly improved overall mean reward compared to a DQN with vanilla Experience Replay on the discrete, non-deterministic FrozenLake8x8-v0 environment.
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
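One reading of the synthetic-experience idea can be sketched as a small buffer class. This is our interpretation of the abstract, not the paper's code: real transitions are grouped per (state, action) key, and synthetic transitions pair each observed follow-up state with the equally weighted average reward for that key.

```python
import random
from collections import defaultdict

class InterpolatedReplayBuffer:
    """Sketch of an interpolated experience replay buffer (illustrative)."""

    def __init__(self):
        self.real = []                   # (s, a, r, s_next, done) as observed
        self.by_key = defaultdict(list)  # (s, a) -> list of (r, s_next, done)

    def add(self, s, a, r, s_next, done):
        self.real.append((s, a, r, s_next, done))
        self.by_key[(s, a)].append((r, s_next, done))

    def synthetic(self, s, a):
        # Equally weighted average reward over everything observed for
        # (s, a), paired with every follow-up state seen so far -- this is
        # where non-determinism yields several distinct follow-up states.
        obs = self.by_key[(s, a)]
        mean_r = sum(r for r, _, _ in obs) / len(obs)
        return [(s, a, mean_r, s_next, done) for _, s_next, done in obs]

    def sample(self, batch_size):
        # Hand the learner synthetic experiences derived from a random
        # subset of the real ones.
        batch = []
        picked = random.sample(self.real, min(batch_size, len(self.real)))
        for s, a, _, _, _ in picked:
            batch.extend(self.synthetic(s, a))
        return batch[:batch_size]
```

In a slippery environment like FrozenLake, the same (state, action) pair is observed with several outcomes, so the averaged reward smooths the noisy per-transition signal the learner would otherwise see.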

18 pages, 3209 KiB  
Article
Evaluation of Agricultural Investment Climate in CEE Countries: The Application of Back Propagation Neural Network
by Ru Guo, Xiaodong Qiu and Yiyi He
Algorithms 2020, 13(12), 336; https://doi.org/10.3390/a13120336 - 13 Dec 2020
Cited by 1 | Viewed by 1799
Abstract
Evaluation of the agricultural investment climate has essential reference value for the site selection, operation, and risk management of agricultural outward foreign direct investment projects. This study builds a back propagation neural network-based agricultural investment climate evaluation model with 22 indicators across four subsystems, political, economic, social, and technological climate, as the input vector and the agricultural investment climate rating as the output vector, and uses it to evaluate the agricultural investment climate in 16 Central and Eastern European (CEE) countries. The overall spatial distribution shows that the best agricultural investment climate is in the three Baltic countries, followed by the Visegrad Group and Slovenia, and then the Balkan littoral countries. The findings may provide insights for entrepreneurs who aim to invest in agriculture abroad and contribute to improving these countries' investment climates.
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
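The core of such a model, a back propagation network mapping an indicator vector to a rating, can be sketched in plain NumPy. The 22-input dimension follows the abstract, while the hidden size, toy data, and rating rule below are invented for illustration and are not the paper's dataset or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# 22 indicators (spanning the political, economic, social, and
# technological subsystems) in, one climate rating out.
n_in, n_hidden, n_out = 22, 10, 1

W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1)
    return H, sigmoid(H @ W2)

def train_step(X, y, lr=0.5):
    # One full-batch step of plain back propagation on squared error.
    global W1, W2
    H, out = forward(X)
    err = out - y
    d_out = err * out * (1 - out)         # delta at the output layer
    d_hid = (d_out @ W2.T) * H * (1 - H)  # delta propagated back to hidden
    W2 -= lr * H.T @ d_out / len(X)
    W1 -= lr * X.T @ d_hid / len(X)
    return float((err ** 2).mean())

# Toy "countries": the rating is driven by the mean of the indicators.
X = rng.uniform(size=(16, n_in))
y = (X.mean(axis=1, keepdims=True) > 0.5).astype(float)
losses = [train_step(X, y) for _ in range(2000)]
```

After training, `forward` on a new indicator vector yields a rating in (0, 1); in the paper's setting the inputs would be the normalized subsystem indicators for each CEE country.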
