Special Issue "Selected Papers from the 26th International Conference on Artificial Neural Networks - ICANN 2017"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: closed (28 February 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Arkadiusz Orłowski

Faculty of Applied Informatics and Mathematics, Warsaw University of Life Sciences - SGGW, ul. Nowoursynowska 166, Warsaw 02-787, Poland
Website1 | Website2 | E-Mail
Interests: machine learning; image analysis and pattern recognition; artificial neural networks; quantum information and reversible computing; classical and quantum entropies; physics of information; distributed computing; modeling of complex systems
Guest Editor
Prof. Dr. Roseli S. Wedemann

Instituto de Matemática e Estatística, Universidade do Estado do Rio de Janeiro, Rua São Francisco Xavier 524, Rio de Janeiro 20550-900, Brazil
Website | E-Mail
Interests: theoretical and computational models of complex systems; biological and artificial neural networks; parallel and distributed computing models; mathematical and computational models of brain and mental processes; computational science; statistical mechanics and complex systems

Special Issue Information

Dear Colleagues,

Research in artificial neural networks involves the computational modeling of connectionist systems, inspired by the functioning of the mind and the brain. Ideas, concepts, methods, and techniques from diverse areas, such as learning algorithms, graph theory, and information theory, are applied to the study of these complex systems. Models inspired by the dynamics of neuronal substrates, which describe brain and mental processes, together with illustrative simulations of these phenomena, are used both to understand the mind and brain and to guide the development of artificially intelligent devices. Physical quantities, such as entropies, that reflect relevant properties of the topologies and dynamics of these complex networks are proposed, measured, and analyzed using theories and methods from statistical mechanics, the physics of dynamical systems, information theory, and biological models. The connectionist approach, comprising artificial neural network models and models based on brain functioning, forms the area currently called Computational Neuroscience. From this modeling perspective, enduring issues in Artificial Intelligence regarding the computability (the mechanics) of the human mind are investigated, contributing both to the current discussion of the basic mechanisms involved in consciousness and to the development of intelligent machines.

The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). The goal of ICANN is to bring together researchers from two worlds: information sciences and neurosciences. The scope is wide, ranging from machine learning algorithms to models of real nervous systems. The aim is to facilitate discussions and interactions in the effort towards developing more intelligent computational systems and increasing our understanding of neural and cognitive processes in the brain.

We encourage authors who presented an article at the 26th International Conference on Artificial Neural Networks (ICANN 2017), and who feel that their contribution falls within the scope of the journal Entropy, to submit an original and substantially extended version of their ICANN paper for consideration.

Prof. Dr. Arkadiusz Orłowski
Prof. Dr. Roseli S. Wedemann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

Open Access Article: Transfer Information Energy: A Quantitative Indicator of Information Transfer between Time Series
Entropy 2018, 20(5), 323; https://doi.org/10.3390/e20050323
Received: 26 February 2018 / Revised: 19 April 2018 / Accepted: 25 April 2018 / Published: 27 April 2018
Abstract
We introduce an information-theoretic approach for analyzing information transfer between time series. Rather than using the Transfer Entropy (TE), we define and apply the Transfer Information Energy (TIE), which is based on Onicescu's Information Energy. Whereas the TE can be used as a measure of the reduction in uncertainty about one time series given another, the TIE may be viewed as a measure of the increase in certainty about one time series given another. We compare the TIE and the TE in two known time series prediction applications. First, we analyze stock market indexes from the Americas, Asia/Pacific, and Europe, with the goal of inferring the information transfer between them (i.e., how they influence each other). In the second application, we take a bivariate time series of the breath rate and instantaneous heart rate of a sleeping human suffering from sleep apnea, with the goal of determining the information transfer heart → breath vs. breath → heart. In both applications, the computed TE and TIE values are strongly correlated, meaning that the TIE can substitute for the TE in such applications, even if they measure symmetric phenomena. The advantage of using the TIE is computational: we can obtain similar results, but faster.
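As an illustrative aside for readers new to these quantities: Onicescu's informational energy is E(X) = Σ_x p(x)², and the TIE reads the "increase in certainty" as a gain in conditional informational energy. The sketch below is a minimal plug-in estimate assuming equal-width binning; the function names, the binning, and the exact form of the TIE estimator are our assumptions, not necessarily those of the paper.

```python
import numpy as np

def discretize(x, bins=8):
    """Map a real-valued series to integer bin labels (equal-width bins)."""
    edges = np.histogram_bin_edges(x, bins=bins)
    return np.digitize(x, edges[1:-1])  # labels in 0..bins-1

def conditional_energy(target, cond):
    """E(T | C) = sum_c p(c) * sum_t p(t|c)^2, inputs as integer labels."""
    total = 0.0
    for c in np.unique(cond, axis=0):
        mask = (cond == c).all(axis=1)
        p_t = np.bincount(target[mask]) / mask.sum()
        total += mask.mean() * np.sum(p_t ** 2)
    return total

def tie(y, x, bins=8):
    """Naive TIE(Y -> X): extra certainty about x_{t+1} gained by
    conditioning on (x_t, y_t) rather than on x_t alone (an assumed
    certainty-gain analogue of the TE's uncertainty reduction)."""
    xl, yl = discretize(x, bins), discretize(y, bins)
    target, xp, yp = xl[1:], xl[:-1], yl[:-1]
    return (conditional_energy(target, np.stack([xp, yp], axis=1))
            - conditional_energy(target, xp[:, None]))
```

Called as tie(breath, heart) and tie(heart, breath) on a bivariate recording, the two values would give the kind of naive directional comparison described in the abstract.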

Open Access Article: On the Reduction of Computational Complexity of Deep Convolutional Neural Networks
Entropy 2018, 20(4), 305; https://doi.org/10.3390/e20040305
Received: 22 January 2018 / Revised: 5 April 2018 / Accepted: 17 April 2018 / Published: 23 April 2018
Abstract
Deep convolutional neural networks (ConvNets), which are at the heart of many emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over the baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
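The Toom–Cook family of fast convolutions trades multiplications for additions. As a sketch of the underlying idea only (the paper's scheme and transform choices are its own), here is the classic F(2,3) minimal-filtering kernel, which computes two outputs of a 3-tap 1D correlation with four multiplications instead of six:

```python
import numpy as np

def f23(d, g):
    """F(2,3) minimal filtering: two outputs of a 3-tap correlation of
    filter g over the 4-sample tile d, using 4 multiplies instead of 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

# Sanity check against the direct sliding dot product:
d, g = np.array([1.0, 2.0, 3.0, 4.0]), np.array([0.5, -1.0, 2.0])
assert np.allclose(f23(d, g), [d[0:3] @ g, d[1:4] @ g])
```

The filter-dependent factors (g[0]+g[1]+g[2])/2 and (g[0]-g[1]+g[2])/2 can be precomputed once per filter, which is where the inference-time savings come from.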

Open Access Article: Optimization of CNN through Novel Training Strategy for Visual Classification Problems
Entropy 2018, 20(4), 290; https://doi.org/10.3390/e20040290
Received: 31 January 2018 / Revised: 30 March 2018 / Accepted: 14 April 2018 / Published: 17 April 2018
Abstract
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, such as classification, recognition, and detection. However, the global optimization of CNN training is still a problem. Fast classification and training play a key role in the development of CNNs. We hypothesize that the smoother and more optimized the training of a CNN is, the more efficient the end result becomes. Therefore, in this paper, we implement a modified resilient backpropagation (MRPROP) algorithm to improve the convergence and efficiency of CNN training. In particular, a tolerant band is introduced to avoid network overtraining, and it is incorporated with a global-best concept in the weight-updating criteria to allow the training algorithm of the CNN to optimize its weights more swiftly and precisely. For comparison, we present and analyze four different training algorithms for CNNs along with MRPROP, i.e., resilient backpropagation (RPROP), Levenberg–Marquardt (LM), conjugate gradient (CG), and gradient descent with momentum (GDM). Experimental results showcase the merit of the proposed approach on a public face and skin dataset.
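For orientation, the baseline that MRPROP modifies is the sign-based RPROP update, sketched below in an iRPROP⁻-style form. The tolerant band and the global-best weighting are the paper's contributions and are only flagged in a comment; their exact formulation is left to the paper.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One sign-based RPROP update: per-weight step sizes grow while the
    gradient keeps its sign and shrink when it flips; only the sign of
    the gradient is used, never its magnitude."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)  # iRPROP-: skip after a sign flip
    # MRPROP (per the abstract) additionally gates updates with a tolerant
    # band against overtraining and mixes in a global-best term; see the
    # paper for the exact rule.
    w = w - np.sign(grad) * step
    return w, grad, step  # pass the returned grad back as prev_grad next call
```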

Open Access Article: Simulation Study on the Application of the Generalized Entropy Concept in Artificial Neural Networks
Entropy 2018, 20(4), 249; https://doi.org/10.3390/e20040249
Received: 25 January 2018 / Revised: 23 March 2018 / Accepted: 30 March 2018 / Published: 3 April 2018
Abstract
Artificial neural networks are currently one of the most commonly used classifiers, and over recent years they have been successfully applied in many practical areas, including banking and finance, health and medicine, and engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. This paper examines the q-generalized function based on Tsallis statistics as an alternative error measure in neural networks. In order to validate different performance aspects of the proposed function, and to enable identification of its strengths and weaknesses, an extensive simulation was prepared based on an artificial benchmarking dataset. The results indicate that the Tsallis entropy error function can be successfully introduced in neural networks, yielding satisfactory results and handling class imbalance, noise in the data, and non-informative predictors.
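To make the q-generalized error function concrete: one common construction, assumed here purely for illustration, replaces the logarithm in the cross-entropy loss with the Tsallis q-logarithm ln_q(x) = (x^(1-q) - 1)/(1 - q), recovering the ordinary log-loss as q → 1. The paper's exact error function may differ.

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q); ln_1 = ln."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_cross_entropy(y_true, y_pred, q=1.5, eps=1e-12):
    """Binary cross-entropy with log replaced by the q-logarithm.
    q is a free parameter; q = 1 gives the standard log-loss."""
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * q_log(p, q) + (1 - y_true) * q_log(1 - p, q)))
```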

Open Access Article: An Adaptive Learning Based Network Selection Approach for 5G Dynamic Environments
Entropy 2018, 20(4), 236; https://doi.org/10.3390/e20040236
Received: 8 January 2018 / Revised: 7 March 2018 / Accepted: 24 March 2018 / Published: 29 March 2018
Abstract
Networks will continue to become increasingly heterogeneous as we move toward 5G. Meanwhile, the intelligent programming of the core network makes the available radio resources dynamic rather than static. In such a dynamic and heterogeneous network environment, helping terminal users select the optimal network to access is challenging. Prior implementations of network selection are usually applicable to environments with static radio resources, but they cannot handle the unpredictable dynamics of 5G network environments. To this end, this paper considers both the fluctuation of radio resources and the variation of user demand. We model the access network selection scenario as a multiagent coordination problem, in which a set of rational terminal users compete to maximize their benefits with incomplete information about the environment (no prior knowledge of network resources or other users' choices). Then, an adaptive learning-based strategy is proposed, which enables users to adaptively adjust their selections in response to a gradually or abruptly changing environment. The system is experimentally shown to converge to a Nash equilibrium, which also turns out to be both Pareto optimal and socially optimal. Extensive simulation results show that our approach achieves significantly better performance than two learning-based and non-learning-based approaches in terms of load balancing, user payoff, and overall bandwidth utilization efficiency. In addition, the system remains robust in the presence of non-compliant terminal users.
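As a toy stand-in for the adaptive strategy described above (not the authors' algorithm), the sketch below runs rounds of a linear reward-inaction update: each user keeps a mixed strategy over networks and shifts probability toward networks that paid off well under the current load. The payoff model and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_and_update(probs, bandwidth, lr=0.1):
    """One round: every user samples a network from its mixed strategy,
    receives a payoff equal to its share of that network's bandwidth
    (an assumed congestion model), and nudges its strategy toward its
    choice in proportion to the normalized payoff."""
    n_users, n_nets = probs.shape
    choices = np.array([rng.choice(n_nets, p=p) for p in probs])
    load = np.bincount(choices, minlength=n_nets)
    payoff = bandwidth[choices] / np.maximum(load[choices], 1)
    payoff = payoff / payoff.max()  # rewards in [0, 1] keep probs valid
    for i, a in enumerate(choices):
        probs[i] += lr * payoff[i] * (np.eye(n_nets)[a] - probs[i])
    return probs, choices

# Toy run: 20 users, 3 networks with unequal capacity.
probs = np.full((20, 3), 1.0 / 3.0)
for _ in range(200):
    probs, choices = select_and_update(probs, np.array([10.0, 20.0, 5.0]))
```

The update is a convex combination, so each user's strategy stays a valid probability distribution; under stationary loads it concentrates on the better-performing networks.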
