
Table of Contents

Algorithms, Volume 12, Issue 5 (May 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Current medical deformable image registration (DIR) methods optimize the weighted sums of key [...] Read more.
Open AccessArticle
Convolution Accelerator Designs Using Fast Algorithms
Algorithms 2019, 12(5), 112; https://doi.org/10.3390/a12050112
Received: 11 March 2019 / Revised: 18 May 2019 / Accepted: 21 May 2019 / Published: 27 May 2019
Viewed by 370 | PDF Full-text (3940 KB) | HTML Full-text | XML Full-text
Abstract
Convolutional neural networks (CNNs) have achieved great success in image processing. However, the heavy computational burden they impose makes them difficult to use in embedded applications with limited power and performance budgets. Although many fast convolution algorithms can reduce the computational complexity, they increase the difficulty of practical implementation. To overcome these difficulties, this paper proposes several convolution accelerator designs using fast algorithms. The designs are based on the field programmable gate array (FPGA) and achieve a better balance between digital signal processor (DSP) and logic resources, while also requiring lower power consumption. The implementation results show that the power consumption of the accelerator design based on the Strassen–Winograd algorithm is 21.3% less than that of conventional accelerators. Full article
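The fast-algorithm idea can be illustrated with the 1-D Winograd minimal filtering case F(2,3), which computes two outputs of a 3-tap filter with 4 multiplications instead of 6. This is only a minimal numerical sketch of the principle; it does not reproduce the paper's 2-D Strassen–Winograd FPGA designs:

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter g over a
    4-sample input tile d, using 4 multiplications instead of 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference: direct sliding-window computation (6 multiplications)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0]
assert winograd_f23(d, g) == direct(d, g)
```

Saving multiplications at the cost of extra additions is exactly the trade that lets an FPGA design rebalance scarce DSP blocks against cheaper logic resources, as the abstract describes.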

Open AccessArticle
A Heuristic Algorithm for the Routing and Scheduling Problem with Time Windows: A Case Study of the Automotive Industry in Mexico
Algorithms 2019, 12(5), 111; https://doi.org/10.3390/a12050111
Received: 3 May 2019 / Revised: 22 May 2019 / Accepted: 22 May 2019 / Published: 25 May 2019
Viewed by 468 | PDF Full-text (1537 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates a real-world distribution problem arising in the vehicle production industry, particularly in a logistics company, in which cars and vans must be loaded on auto-carriers and then delivered to dealerships. A solution to the problem involves optimal loading and routing, without violating the capacity and time window constraints of each auto-carrier. A two-phase heuristic algorithm was implemented to solve the problem. In the first phase, the heuristic builds a route with an optimal insertion procedure; in the second phase, it determines a feasible loading. The experimental results show that the proposed algorithm can be used to tackle the transportation problem in terms of minimizing total traveling distance, loading/unloading operations and transportation costs, facilitating the decision-making process for the logistics company. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)

Open AccessFeature PaperArticle
Counter-Terrorism Video Analysis Using Hash-Based Algorithms
Algorithms 2019, 12(5), 110; https://doi.org/10.3390/a12050110
Received: 15 April 2019 / Revised: 14 May 2019 / Accepted: 20 May 2019 / Published: 24 May 2019
Viewed by 474 | PDF Full-text (2888 KB) | HTML Full-text | XML Full-text
Abstract
The Internet is becoming a major source of radicalization. The propaganda efforts of new extremist groups include creating new propaganda videos from fragments of old terrorist attack videos. This article presents a web-scraping method for retrieving relevant videos and a pHash-based algorithm which identifies the original content of a video. Automatic novelty verification is now possible, which can potentially reduce and improve journalists' research work, as well as reduce the spread of fake news. The obtained results have been satisfactory, as all original sources of new videos have been identified correctly. Full article
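The matching step can be sketched with a simplified perceptual hash and a Hamming-distance comparison. Note this is an average-hash stand-in, not the DCT-based pHash the article uses, and the frame size is an illustrative assumption:

```python
def frame_hash(frame, size=8):
    """Simplified perceptual hash (average hash): downscale a grayscale
    frame to size x size by block averaging, then set one bit per cell
    depending on whether it exceeds the global mean."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // size, w // size
    cells = [
        sum(frame[y][x] for y in range(r * bh, (r + 1) * bh)
                        for x in range(c * bw, (c + 1) * bw)) / (bh * bw)
        for r in range(size) for c in range(size)
    ]
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(h1, h2):
    """Number of differing bits: small distance = likely the same content."""
    return sum(a != b for a, b in zip(h1, h2))

frame = [[(x * y) % 251 for x in range(32)] for y in range(32)]
brighter = [[p + 10 for p in row] for row in frame]      # re-encoded/brightened copy
assert hamming(frame_hash(frame), frame_hash(brighter)) == 0
```

Because the bits are thresholded against the frame's own mean, a uniform brightness change leaves the hash unchanged; this robustness to re-encoding is what makes perceptual hashes suitable for tracing reused video fragments.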

Open AccessArticle
An Adaptive Procedure for the Global Minimization of a Class of Polynomial Functions
Algorithms 2019, 12(5), 109; https://doi.org/10.3390/a12050109
Received: 15 April 2019 / Revised: 16 May 2019 / Accepted: 20 May 2019 / Published: 23 May 2019
Viewed by 363 | PDF Full-text (438 KB) | HTML Full-text | XML Full-text
Abstract
The paper deals with the problem of global minimization of a polynomial function expressed through the Frobenius norm of two-dimensional or three-dimensional matrices. An adaptive procedure is proposed which applies a Multistart algorithm according to a heuristic approach. The basic step of the procedure consists of splitting the runs from different initial points into segments of fixed length and interlacing the processing order of the various segments, discarding those which appear less promising. A priority queue is suggested to implement this strategy. Various parameters contribute to the handling of the queue, whose length shrinks during the computation, allowing a considerable saving of computational time with respect to classical procedures. To verify the validity of the approach, extensive experiments have been performed on both nonnegatively constrained and unconstrained problems. Full article
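A minimal sketch of the interlaced Multistart strategy, with plain gradient descent on a toy polynomial standing in for the paper's local minimization; the segment length, starting points, and queue-shrink rule are illustrative assumptions:

```python
import heapq

def f(x):  return (x * x - 1.0) ** 2        # toy objective, global minima at x = +/-1
def df(x): return 4.0 * x * (x * x - 1.0)   # its gradient

def run_segment(x, steps=50, lr=0.01):
    """Advance one local-descent run by a fixed-length segment."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Runs from different initial points live in a priority queue keyed by
# their current objective value; each round every surviving run advances
# by one segment, then the queue shrinks by discarding the worst run.
queue = [(f(x), x) for x in (-2.0, -0.5, 0.7, 1.8)]
heapq.heapify(queue)
while len(queue) > 1:
    advanced = []
    while queue:
        _, x = heapq.heappop(queue)         # most promising run first
        x = run_segment(x)
        advanced.append((f(x), x))
    advanced.sort()
    queue = advanced[:-1]                   # shrink: drop the least promising run
    heapq.heapify(queue)

best_val, best_x = queue[0]
assert best_val < 1e-4 and abs(abs(best_x) - 1.0) < 0.01
```

The point of the interlacing is that computational effort concentrates on the runs that currently look best, instead of carrying every start to convergence.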

Open AccessArticle
Real-Time Arm Gesture Recognition Using 3D Skeleton Joint Data
Algorithms 2019, 12(5), 108; https://doi.org/10.3390/a12050108
Received: 29 March 2019 / Revised: 9 May 2019 / Accepted: 15 May 2019 / Published: 20 May 2019
Viewed by 621 | PDF Full-text (1210 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we present an approach towards real-time hand gesture recognition using the Kinect sensor, investigating several machine learning techniques. We propose a novel approach for feature extraction, using measurements on joints of the extracted skeletons. The proposed features extract angles and displacements of skeleton joints as the latter move in 3D space. We define a set of gestures and construct a real-life data set. We train gesture classifiers under the assumption that they shall be applied to, and evaluated on, both known and unknown users. Experimental results with 11 classification approaches prove the effectiveness and potential of our approach, both on the proposed dataset and in comparison with state-of-the-art research works. Full article
(This article belongs to the Special Issue Mining Humanistic Data 2019)
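Features of the kind described (joint angles and inter-frame displacements in 3D) can be computed directly from skeleton joint coordinates; the joint names and coordinates below are illustrative, not the paper's exact feature set:

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by the segments b->a and b->c in 3D."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

def displacement(p_prev, p_curr):
    """Euclidean displacement of one joint between two consecutive frames."""
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(p_prev, p_curr)))

# Hypothetical joints: a right angle at the elbow.
shoulder, elbow, wrist = (0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
assert abs(joint_angle(shoulder, elbow, wrist) - 90.0) < 1e-9
assert abs(displacement((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)) - 5.0) < 1e-9
```

A per-frame feature vector for a classifier would then concatenate such angles and displacements over the arm joints of interest.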

Open AccessArticle
Pruning Optimization over Threshold-Based Historical Continuous Query
Algorithms 2019, 12(5), 107; https://doi.org/10.3390/a12050107
Received: 4 March 2019 / Revised: 5 May 2019 / Accepted: 18 May 2019 / Published: 19 May 2019
Viewed by 440 | PDF Full-text (3541 KB) | HTML Full-text | XML Full-text
Abstract
With the increase in mobile location service applications, spatiotemporal queries over the trajectory data of moving objects have become a research hotspot, and continuous query is one of the key types of spatiotemporal query. In this paper, we study a sub-domain of the continuous query of moving objects, namely pruning optimization over threshold-based historical continuous queries. Firstly, to address the excessive processing cost of the Mindist-based pruning strategy, a pruning strategy based on extended Minimum Bounding Rectangle overlap is proposed to optimize the processing overhead. Secondly, a best-first traversal algorithm based on the E3DR-tree is proposed to ensure that an accurate pruning candidate set can be obtained while accessing as few index nodes as possible. Finally, experiments on real data sets prove that our method significantly outperforms other similar methods. Full article
(This article belongs to the Special Issue Algorithms for Large Scale Data Analysis)
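The extended-MBR pruning test reduces to enlarging a bounding rectangle by the query threshold and checking overlap (a 2-D sketch under assumed coordinates; the E3DR-tree traversal itself is not reproduced here):

```python
def extend(mbr, eps):
    """Enlarge an MBR (xmin, ymin, xmax, ymax) by threshold eps on every side."""
    xmin, ymin, xmax, ymax = mbr
    return (xmin - eps, ymin - eps, xmax + eps, ymax + eps)

def overlaps(a, b):
    """True if two MBRs intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

query = (0.0, 0.0, 1.0, 1.0)
candidate = (1.5, 0.0, 2.5, 1.0)                 # trajectory MBR 0.5 away from the query
assert not overlaps(query, candidate)            # pruned without extension...
assert overlaps(extend(query, 0.6), candidate)   # ...retained once extended past the gap
```

The appeal of such a test over a Mindist computation is that it is a handful of comparisons per node, which matters when the index is traversed many times per continuous query.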

Open AccessArticle
An Introduction of NoSQL Databases Based on Their Categories and Application Industries
Algorithms 2019, 12(5), 106; https://doi.org/10.3390/a12050106
Received: 31 January 2019 / Revised: 26 April 2019 / Accepted: 9 May 2019 / Published: 16 May 2019
Viewed by 490 | PDF Full-text (5087 KB) | HTML Full-text | XML Full-text
Abstract
The popularization of big data means that enterprises need to store more and more data. The data in an enterprise's database must be accessed as fast as possible, but the Relational Database (RDB) has a speed limitation due to the join operation. Many enterprises have changed to using a NoSQL database, which can meet the requirement of fast data access. However, there are hundreds of NoSQL databases. It is important to select a suitable NoSQL database for a given enterprise, because this decision will affect the performance of the enterprise's operations. In this paper, fifteen categories of NoSQL databases are introduced to identify the characteristics of every category. Some principles and examples are proposed for choosing an appropriate NoSQL database for different industries. Full article

Open AccessArticle
Achievement of Automatic Copper Wire Elongation System
Algorithms 2019, 12(5), 105; https://doi.org/10.3390/a12050105
Received: 17 April 2019 / Revised: 11 May 2019 / Accepted: 13 May 2019 / Published: 15 May 2019
Viewed by 458 | PDF Full-text (7529 KB) | HTML Full-text | XML Full-text
Abstract
Copper wire is a major conduction material that carries a variety of signals in industry. Presently, automatic wire elongating machines that produce very thin wires are available for manufacturing. However, the original wires for the elongating process need heating, drawing and then threading through the die molds manually before the machine starts to work. This procedure repeats until the wire has been threaded through all the various die molds. To replace this manual labor, this paper aims to develop an automatic die mold threading system for the wire elongation process. Three pneumatic grippers are designed in the proposed system. The first gripper is used to clamp the wire. The second gripper, fixed in the rotating mechanism, draws the heated wire. The third gripper is used to move the wire for threading through the die molds. The force designed for drawing the wire can be adjusted via the gear ratio. The experimental results confirm that the proposed system can accomplish the die mold threading process in terms of robustness, rapidity and accuracy. Full article
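The force-via-gear-ratio adjustment reduces to simple torque arithmetic; the motor torque, ratio, and drum radius below are illustrative assumptions, not values from the paper, and gear losses are ignored:

```python
def drawing_force(motor_torque_nm, gear_ratio, drum_radius_m):
    """Force available at the wire: output torque = motor torque x gear
    ratio (ideal, lossless), converted to force at the drum surface."""
    return motor_torque_nm * gear_ratio / drum_radius_m

# Example: a 2 N*m motor through a 5:1 reduction acting on a 5 cm drum.
assert abs(drawing_force(2.0, 5.0, 0.05) - 200.0) < 1e-9
```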

Open AccessArticle
Balanced Parallel Exploration of Orthogonal Regions
Algorithms 2019, 12(5), 104; https://doi.org/10.3390/a12050104
Received: 28 January 2019 / Revised: 26 April 2019 / Accepted: 9 May 2019 / Published: 15 May 2019
Viewed by 461 | PDF Full-text (451 KB) | HTML Full-text | XML Full-text
Abstract
We consider the use of multiple mobile agents to explore an unknown area. The area is orthogonal, such that all perimeter lines run either vertically or horizontally. The area may contain unknown rectangular holes which are non-traversable internally. For the sake of analysis, we assume that the area is discretized into N points, allowing the agents to move from one point to an adjacent one. Mobile agents communicate face-to-face when in adjacent points. The objective of exploration is to develop an online algorithm that will explore the entire area while reducing the total work of all k agents, where work is measured as the number of points traversed. We propose splitting the exploration into two alternating tasks, perimeter and room exploration. The agents all begin with the perimeter scan; when a room is found they transition to room scan, after which they continue with the perimeter scan until the next room is found, and so on. Given the total traversable points N, our algorithm completes in total O(N) work with each agent performing O(N/k) work, i.e., the work is balanced. If the rooms are hole-free, the exploration time is also asymptotically optimal, O(N/k). To our knowledge, this is the first agent coordination algorithm that simultaneously considers work balancing and small exploration time. Full article

Open AccessArticle
Improved Neural Networks Based on Mutual Information via Information Geometry
Algorithms 2019, 12(5), 103; https://doi.org/10.3390/a12050103
Received: 18 February 2019 / Revised: 1 May 2019 / Accepted: 2 May 2019 / Published: 13 May 2019
Viewed by 528 | PDF Full-text (1252 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a new algorithm based on the theory of mutual information and information geometry. This algorithm places emphasis on adaptive mutual information estimation and maximum likelihood estimation. With the theory of information geometry, we adjust the mutual information along the geodesic line. Finally, we evaluate our proposal using empirical datasets that are dedicated for classification and regression. The results show that our algorithm contributes to a significant improvement over existing methods. Full article

Open AccessCommunication
A Theoretical Framework to Determine RHP Zero Dynamics in Sequential Interacting Sub-Systems
Algorithms 2019, 12(5), 102; https://doi.org/10.3390/a12050102
Received: 8 April 2019 / Revised: 4 May 2019 / Accepted: 6 May 2019 / Published: 10 May 2019
Viewed by 510 | PDF Full-text (765 KB) | HTML Full-text | XML Full-text
Abstract
A theoretical framework for determining the dynamics of interacting sub-systems is proposed in this paper. Specifically, a systematic analysis is performed that indicates whether minimum-phase (MP) or non-minimum-phase (NMP) dynamics occur in the analyzed process during operation. The analysis stems from the physical process description and the degree of coupling between sub-systems. The presented methodology is generalized for n sub-systems with sequential interaction (i.e., in which the coupling is unidirectional and occurs between consecutive sub-systems), and the outcome is a useful investigation tool prior to the controller design phase. Given the generality of the approach, the theoretical framework is valid for any dynamic process with interacting sub-systems in the context of LTI systems. Full article
Open AccessArticle
An Adaptive Derivative Estimator for Fault-Detection Using a Dynamic System with a Suboptimal Parameter
Algorithms 2019, 12(5), 101; https://doi.org/10.3390/a12050101
Received: 20 December 2018 / Revised: 11 April 2019 / Accepted: 22 April 2019 / Published: 10 May 2019
Viewed by 511 | PDF Full-text (807 KB) | HTML Full-text | XML Full-text
Abstract
This paper deals with an approximation of a first derivative of a signal using a dynamic system of the first order. After formulating the problem, a proposition and a theorem are proven for a possible approximation structure, which consists of a dynamic system. In particular, a proposition based on a Lyapunov approach is proven to show the convergence of the approximation. The proven theorem is a constructive one and shows directly the suboptimality condition in the presence of noise. Based on these two results, an adaptive algorithm is conceived to calculate the derivative of a signal with convergence in infinite time. Results are compared with an approximation of the derivative using an adaptive Kalman filter (KF). Full article
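The core idea, approximating a derivative with a first-order dynamic system, can be sketched in discrete time as a forward-Euler discretization of the filter tau*x' = u - x, whose internal signal (u - x)/tau estimates du/dt. The time constant and test signal are illustrative, and the paper's adaptive suboptimal tuning is not reproduced:

```python
def derivative_estimates(u_samples, dt, tau):
    """First-order system x' = (u - x)/tau; (u - x)/tau estimates du/dt."""
    x, estimates = u_samples[0], []
    for u in u_samples:
        est = (u - x) / tau
        estimates.append(est)
        x += dt * est                        # forward-Euler state update
    return estimates

dt, tau = 0.001, 0.01
ramp = [2.0 * k * dt for k in range(1000)]   # u(t) = 2t, true derivative = 2
est = derivative_estimates(ramp, dt, tau)
assert abs(est[-1] - 2.0) < 1e-3             # converges to the true slope
```

The choice of tau is the noise trade-off the paper analyzes: a small tau tracks the derivative quickly but amplifies measurement noise by 1/tau, which motivates choosing a suboptimal parameter adaptively.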

Open AccessArticle
Evolutionary Machine Learning for Multi-Objective Class Solutions in Medical Deformable Image Registration
Algorithms 2019, 12(5), 99; https://doi.org/10.3390/a12050099
Received: 30 March 2019 / Revised: 1 May 2019 / Accepted: 7 May 2019 / Published: 9 May 2019
Viewed by 613 | PDF Full-text (1409 KB) | HTML Full-text | XML Full-text
Abstract
Current state-of-the-art medical deformable image registration (DIR) methods optimize a weighted sum of key objectives of interest. Having a pre-determined weight combination that leads to high-quality results for any instance of a specific DIR problem (i.e., a class solution) would facilitate clinical application of DIR. However, such a combination can vary widely for each instance and is currently often determined manually. A multi-objective optimization approach for DIR removes the need for manual tuning, providing a set of high-quality trade-off solutions. Here, we investigate machine learning of a multi-objective class solution, i.e., not a single weight combination but a set thereof, that, when used on any instance of a specific DIR problem, approximates such a set of trade-off solutions. To this end, we employed a multi-objective evolutionary algorithm to learn sets of weight combinations for three breast DIR problems of increasing difficulty: 10 prone-prone cases, 4 prone-supine cases with limited deformations and 6 prone-supine cases with larger deformations and image artefacts. Clinically-acceptable results were obtained for the first two problems. Therefore, for DIR problems with limited deformations, a multi-objective class solution can be machine-learned and used to straightforwardly compute multiple high-quality DIR outcomes, potentially leading to more efficient use of DIR in clinical practice. Full article
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)

Open AccessArticle
Free Surface Flow Simulation by a Viscous Numerical Cylindrical Tank
Algorithms 2019, 12(5), 98; https://doi.org/10.3390/a12050098
Received: 24 March 2019 / Revised: 22 April 2019 / Accepted: 6 May 2019 / Published: 9 May 2019
Viewed by 492 | PDF Full-text (1921 KB) | HTML Full-text | XML Full-text
Abstract
In order to numerically investigate free surface flow evolution in a cylindrical tank, a regular structured grid system in cylindrical coordinates is usually applied to solve the governing equations based on the incompressible two-phase flow model. Since the grid spacing in the azimuthal direction is proportional to the radial distance in a regular structured grid system, very small grid spacing is obtained in the azimuthal direction near the axis, requiring a very small computational time step to satisfy the stability restriction. Moreover, serious mass disequilibrium problems may occur through the convection of the free surface with the Volume of Fluid (VOF) method. Therefore, in the present paper, the zonal embedded grid technique was implemented to overcome those problems by gradually adjusting the mesh resolution in different grid blocks. Over the embedded grid system, a finite volume algorithm was developed to solve the Navier–Stokes equations in three-dimensional cylindrical coordinates. A high-resolution scheme was applied to resolve the free surface between the air and water phases based on the VOF method. Computation of liquid convection under a given velocity field shows that the VOF method implemented with a zonal embedded grid is better at maintaining mass continuity than that with a regular structured grid system. Furthermore, the proposed model was also applied to simulate the sharp transient evolution of circular dam-breaking flow. The simulation results were validated against the commercial software Fluent, showing good agreement, and the proposed model does not yield any free surface oscillation. Full article
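The stability restriction described above follows from the azimuthal cell width r*dtheta shrinking toward the axis; a back-of-the-envelope CFL sketch (the cell counts, radii, velocity, and CFL number are illustrative assumptions):

```python
import math

def cfl_dt(r, n_theta, u, cfl=0.5):
    """Max stable time step set by the azimuthal spacing ds = r * dtheta."""
    ds = r * (2.0 * math.pi / n_theta)
    return cfl * ds / u

# Regular grid: the same azimuthal division count at every radius.
dt_outer = cfl_dt(r=1.0, n_theta=128, u=1.0)
dt_inner = cfl_dt(r=0.01, n_theta=128, u=1.0)
assert dt_inner < dt_outer / 99           # inner cells force a ~100x smaller step

# Zonal embedded grid: coarser azimuthal resolution in the block near the axis.
dt_inner_zonal = cfl_dt(r=0.01, n_theta=16, u=1.0)
assert abs(dt_inner_zonal - 8 * dt_inner) < 1e-15   # 128/16 = 8x larger admissible step
```

Coarsening the azimuthal division count block by block toward the axis is exactly what keeps the smallest cell, and hence the global time step, from collapsing.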

Open AccessArticle
A New Method of Applying Data Engine Technology to Realize Neural Network Control
Algorithms 2019, 12(5), 97; https://doi.org/10.3390/a12050097
Received: 25 March 2019 / Revised: 26 April 2019 / Accepted: 5 May 2019 / Published: 9 May 2019
Viewed by 483 | PDF Full-text (7922 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel diagonal recurrent neural network hybrid controller based on the shared memory of a real-time database structure. The controller uses Data Engine (DE) technology: through the establishment of a unified, standardized software architecture and real-time database across different control stations, it effectively solves many problems in actual industrial applications caused by differing technical standards, communication protocols, and programming languages, namely difficulties in co-debugging advanced control algorithms with the control system, inefficient algorithm implementation and updating, and high development, operation, and maintenance costs, thereby filling the current technical gap. More importantly, control algorithm development uses a unified visual graphical configuration programming environment, effectively solving the problem of integrated control of heterogeneous devices; it also has the advantages of intuitive configuration and a transparent data-processing flow, reducing the difficulty of debugging advanced control algorithms in engineering applications. In this paper, the application of a neural network hybrid controller based on DE to a motor speed measurement and control system shows that the system has excellent control characteristics and anti-disturbance ability, and provides an integration method for neural network control algorithms in practical industrial control systems, which is the major contribution of this article. Full article
(This article belongs to the Special Issue High Performance Reconfigurable Computing)

Open AccessArticle
A Variable Block Insertion Heuristic for Solving Permutation Flow Shop Scheduling Problem with Makespan Criterion
Algorithms 2019, 12(5), 100; https://doi.org/10.3390/a12050100
Received: 8 April 2019 / Revised: 3 May 2019 / Accepted: 6 May 2019 / Published: 9 May 2019
Viewed by 589 | PDF Full-text (2798 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In this paper, we propose a variable block insertion heuristic (VBIH) algorithm to solve the permutation flow shop scheduling problem (PFSP). The VBIH algorithm removes a block of jobs from the current solution and applies an insertion local search to the partial solution. Then, it inserts the block into all possible positions in the partial solution sequentially and chooses the best of the resulting solutions. Finally, an insertion local search is again applied to the complete solution. If the new solution obtained is better than the current solution, it replaces the current solution. As long as it improves, the same block size is retained; otherwise, the block size is incremented by one and a simulated annealing-based acceptance criterion is employed to accept the new solution in order to escape from local minima. This process is repeated until the block size reaches its maximum. To verify the computational results, mixed integer programming (MIP) and constraint programming (CP) models are developed and solved on the very recent small VRF benchmark suite. Optimal solutions are found for 108 out of 240 instances. Extensive computational results on the large VRF benchmark suite show that the proposed algorithm outperforms two variants of the iterated greedy algorithm. Best-known solutions for 236 out of 240 instances of the large VRF benchmark suite are improved for the first time in this paper. Ultimately, we run Taillard's benchmark suite and compare the algorithms; three instances of Taillard's benchmark suite are also improved for the first time since 1993. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications (volume 2))
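The block insertion move at the heart of VBIH can be sketched on a small PFSP instance, using the standard completion-time recursion for the makespan. The instance, block position and size are illustrative, and the local-search and simulated-annealing components are omitted:

```python
def makespan(seq, p):
    """PFSP makespan: p[job][machine] processing times, jobs processed in order seq."""
    m = len(p[0])
    c = [0.0] * m                       # running completion time on each machine
    for job in seq:
        c[0] += p[job][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[job][k]
    return c[-1]

def best_block_insertion(seq, p, start, size):
    """Remove a block of `size` jobs and reinsert it at the best position."""
    block = seq[start:start + size]
    partial = seq[:start] + seq[start + size:]
    candidates = (partial[:i] + block + partial[i:] for i in range(len(partial) + 1))
    return min(candidates, key=lambda s: makespan(s, p))

p = [[3, 2, 4], [1, 4, 2], [5, 1, 3], [2, 3, 1]]    # 4 jobs x 3 machines
seq = [0, 1, 2, 3]
improved = best_block_insertion(seq, p, start=1, size=2)
assert makespan(improved, p) <= makespan(seq, p)
```

Since the candidate set includes the original position of the block, the move can never worsen the makespan; the variable block size then controls how far each move can perturb the sequence.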

Open AccessFeature PaperArticle
Triplet Loss Network for Unsupervised Domain Adaptation
Algorithms 2019, 12(5), 96; https://doi.org/10.3390/a12050096
Received: 25 March 2019 / Revised: 30 April 2019 / Accepted: 2 May 2019 / Published: 8 May 2019
Viewed by 704 | PDF Full-text (3842 KB) | HTML Full-text | XML Full-text
Abstract
Domain adaptation is a sub-field of transfer learning that aims at bridging the dissimilarity gap between different domains by transferring and re-using the knowledge obtained in the source domain in the target domain. Many methods have been proposed to resolve this problem, using techniques such as generative adversarial networks (GANs), but the complexity of such methods makes it hard to apply them to different problems, as fine-tuning such networks is usually a time-consuming task. In this paper, we propose a method for unsupervised domain adaptation that is both simple and effective. Our model (referred to as TripNet) harnesses the idea of a discriminator and Linear Discriminant Analysis (LDA) to push the encoder to generate domain-invariant features that are category-informative. At the same time, pseudo-labelling of the target data is used to train the classifier and to bring the same classes from both domains together. We evaluate TripNet against several existing state-of-the-art methods on three image classification tasks: digit classification (MNIST, SVHN, and USPC datasets), object recognition (Office31 dataset), and traffic sign recognition (GTSRB and Synthetic Signs datasets). Our experimental results demonstrate that (i) TripNet beats almost all existing methods of similar simplicity on all of these tasks; and (ii) in some cases it even beats the performance of models that are significantly more complex (or harder to train). Hence, the results confirm the effectiveness of TripNet for unsupervised domain adaptation in image classification. Full article
(This article belongs to the Special Issue Deep Learning for Image and Video Understanding)
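The triplet loss that gives TripNet its name is standard and easy to state: it pushes an anchor closer to a same-class (positive) sample than to a different-class (negative) sample by at least a margin. This is a generic sketch, not the paper's full training objective, which also involves the discriminator, LDA, and pseudo-labelling:

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances."""
    d = lambda x, y: math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

a, p = [0.0, 0.0], [1.0, 0.0]
assert triplet_loss(a, p, [3.0, 0.0]) == 0.0    # negative already beyond the margin
assert triplet_loss(a, p, [1.5, 0.0]) == 0.5    # violates the margin by 0.5
```

Minimizing this over embedded features is what pulls same-class samples from both domains together while keeping classes separated.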

Open AccessArticle
A Source Domain Extension Method for Inductive Transfer Learning Based on Flipping Output
Algorithms 2019, 12(5), 95; https://doi.org/10.3390/a12050095
Received: 9 April 2019 / Revised: 24 April 2019 / Accepted: 3 May 2019 / Published: 7 May 2019
Viewed by 493 | PDF Full-text (632 KB) | HTML Full-text | XML Full-text
Abstract
Transfer learning aims for high accuracy by applying knowledge of source domains, for which data collection is easy, to target domains, where data collection is difficult. It has attracted attention in recent years because of its significant potential to enable the application of machine learning to a wide range of real-world problems. However, since the technique is user-dependent, with data prepared as a source domain which in turn becomes a knowledge source for transfer learning, it often involves the adoption of inappropriate data. In such cases, the accuracy may be reduced due to "negative transfer." Thus, in this paper, we propose a novel transfer learning method that utilizes the flipping output technique to provide multiple labels in the source domain. The accuracy of the proposed method is statistically demonstrated to be significantly better than that of the conventional transfer learning method, and its effect size is as high as 0.9, indicating high performance. Full article
Open AccessArticle
A Cyclical Non-Linear Inertia-Weighted Teaching–Learning-Based Optimization Algorithm
Algorithms 2019, 12(5), 94; https://doi.org/10.3390/a12050094
Received: 6 March 2019 / Revised: 17 April 2019 / Accepted: 23 April 2019 / Published: 3 May 2019
Viewed by 602 | PDF Full-text (427 KB) | HTML Full-text | XML Full-text
Abstract
Since the teaching–learning-based optimization (TLBO) algorithm was proposed, many improved variants, which simulate the teaching–learning process of a classroom to effectively solve global optimization problems, have been presented. In this paper, a cyclical non-linear inertia-weighted teaching–learning-based optimization (CNIWTLBO) algorithm is presented. This algorithm introduces a cyclical non-linear inertia-weighted factor into the basic TLBO to control the memory rate of learners, and uses a non-linear mutation factor to randomly control the learner's mutation during the learning process. To demonstrate the performance of the proposed algorithm, it is tested on classical benchmark functions and compared against the basic TLBO, several variants of TLBO, and other well-known optimization algorithms. The experimental results show that the proposed algorithm has better global search ability and higher search accuracy than the compared algorithms, and can escape from local minima easily while maintaining a fast convergence rate. Full article
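The role of an inertia weight in the TLBO teacher phase can be illustrated as follows (a minimal Python/NumPy sketch; the cosine-based weight schedule, the sphere objective, and all parameter values are assumptions for illustration, not the paper's exact CNIWTLBO design):

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))  # classical benchmark: minimum 0 at the origin

def tlbo_inertia(obj, dim=5, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([obj(x) for x in X])
    for t in range(iters):
        # Assumed cyclical non-linear inertia weight, oscillating in [0.4, 0.9]
        w = 0.4 + 0.5 * (np.cos(2 * np.pi * t / iters) + 1) / 2
        teacher = X[np.argmin(fit)]
        TF = rng.integers(1, 3)  # teaching factor, 1 or 2
        mean = X.mean(axis=0)
        # Teacher phase: inertia-weighted old position plus a step toward the teacher
        Xn = w * X + rng.random((pop, dim)) * (teacher - TF * mean)
        Xn = np.clip(Xn, lo, hi)
        fn = np.array([obj(x) for x in Xn])
        better = fn < fit  # greedy selection keeps only improvements
        X[better], fit[better] = Xn[better], fn[better]
    return fit.min()

best = tlbo_inertia(sphere)
```

The weight `w` scales how much of a learner's previous position is retained (its "memory rate"); the greedy selection guarantees the best fitness never worsens between iterations.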
Open AccessArticle
Power Control and Channel Allocation Algorithm for Energy Harvesting D2D Communications
Algorithms 2019, 12(5), 93; https://doi.org/10.3390/a12050093
Received: 16 February 2019 / Revised: 24 March 2019 / Accepted: 25 April 2019 / Published: 3 May 2019
Viewed by 542 | PDF Full-text (2146 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers a scenario in which multiple device-to-device (D2D) users can reuse the same uplink channel and the base station (BS) supplies power to D2D transmitters by means of wireless energy transmission. Aiming to maximize the total capacity of the D2D users, it proposes a power control and channel allocation algorithm for energy-harvesting D2D communications underlaying a cellular network. The algorithm first uses a heuristic dynamic clustering method to cluster D2D users, so that users in the same cluster can share the same channel. Then, the D2D users in each cluster are modeled as a non-cooperative game; expressions for the D2D users' transmission power and energy harvesting time are derived using the Karush–Kuhn–Tucker (KKT) conditions, and the optimal transmission power and energy harvesting time are allocated to D2D users by a joint iterative optimization method. Finally, the Kuhn–Munkres (KM) algorithm is used to find the matching between D2D clusters and cellular channels that maximizes the total capacity of D2D users. Simulation results show that the proposed algorithm effectively improves system performance. Full article
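The final matching step can be illustrated with an assignment solver: the Kuhn–Munkres (Hungarian) method finds the cluster-to-channel pairing that maximizes total capacity. A minimal sketch (the capacity matrix is made-up illustrative data; SciPy's `linear_sum_assignment` implements the same optimal assignment):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical capacities: capacity[i, j] = total D2D capacity if cluster i
# is matched to cellular channel j (e.g., in bit/s/Hz).
capacity = np.array([
    [3.0, 1.0, 2.0],
    [2.0, 4.0, 1.0],
    [1.0, 2.0, 5.0],
])

# maximize=True solves the maximum-weight matching (the KM problem)
rows, cols = linear_sum_assignment(capacity, maximize=True)
total = capacity[rows, cols].sum()
```

Here the diagonal pairing (cluster 0 to channel 0, and so on) is optimal with a total capacity of 12. The KM algorithm runs in polynomial time, so the matching scales to realistic numbers of clusters and channels.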
Open AccessArticle
Optical Flow Estimation with Occlusion Detection
Algorithms 2019, 12(5), 92; https://doi.org/10.3390/a12050092
Received: 19 March 2019 / Revised: 23 April 2019 / Accepted: 23 April 2019 / Published: 1 May 2019
Viewed by 589 | PDF Full-text (3853 KB) | HTML Full-text | XML Full-text
Abstract
Dense optical flow estimation under occlusion is a challenging task: occlusion introduces ambiguity into optical flow estimation, while accurate occlusion detection can reduce the error. In this paper, we propose a robust optical flow estimation algorithm with reliable occlusion detection. First, the occlusion areas in successive video frames are detected by integrating information from multiple sources, including feature matching, motion edges, warped images, and occlusion consistency. Then, an optimization function with an occlusion coefficient and selective region smoothing are used to estimate the optical flow of the non-occluded and occluded areas, respectively. Experimental results show that the proposed algorithm is effective for dense optical flow estimation. Full article
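One widely used occlusion cue of the kind the abstract groups under "occlusion consistency" is the forward–backward check: a pixel is flagged as occluded when its forward flow is not undone by the backward flow at its landing point. A minimal NumPy sketch (nearest-neighbour sampling and the tolerance value are simplifying assumptions, not the paper's method):

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, tol=0.5):
    """Flag pixels whose forward flow is not cancelled by the backward flow.

    flow_fwd, flow_bwd: (h, w, 2) arrays of (dx, dy) displacements.
    """
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel lands under the forward flow (nearest neighbour, clipped)
    xt = np.clip(np.rint(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.rint(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # Forward flow plus the backward flow sampled at the landing point;
    # for consistently tracked pixels this sum is near zero.
    diff = flow_fwd + flow_bwd[yt, xt]
    return np.linalg.norm(diff, axis=-1) > tol

# Toy example: a uniform 1-pixel rightward shift, exactly undone backward
h, w = 4, 6
fwd = np.zeros((h, w, 2)); fwd[..., 0] = 1.0
bwd = np.zeros((h, w, 2)); bwd[..., 0] = -1.0
mask = occlusion_mask(fwd, bwd)
```

In the toy example no pixel is flagged; replacing `bwd` with zeros (no backward motion) flags every pixel, which is how disoccluded regions betray themselves.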
Open AccessConcept Paper
FASTSET: A Fast Data Structure for the Representation of Sets of Integers
Algorithms 2019, 12(5), 91; https://doi.org/10.3390/a12050091
Received: 9 April 2019 / Revised: 23 April 2019 / Accepted: 24 April 2019 / Published: 1 May 2019
Viewed by 601 | PDF Full-text (1007 KB) | HTML Full-text | XML Full-text
Abstract
We describe a simple data structure for storing subsets of {0, …, N−1}, with N a given integer, which has optimal time performance for all the main set operations, whereas previous data structures are non-optimal for at least one such operation. We report on the comparison of a Java implementation of our structure with other structures of the standard Java Collections. Full article
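The paper's exact design is not reproduced in this listing, but the classic "sparse set" of Briggs and Torczon gives a flavor of how add, remove, membership, and clear can all run in constant time over a universe {0, …, N−1} (a minimal Python sketch; the actual FASTSET structure may differ):

```python
class SparseSet:
    """Subset of {0, ..., N-1} with O(1) add/discard/contains/clear."""

    def __init__(self, capacity):
        self.dense = [0] * capacity   # the members, packed at the front
        self.sparse = [0] * capacity  # sparse[v] = index of v in dense
        self.n = 0                    # current number of members

    def __contains__(self, v):
        i = self.sparse[v]
        return i < self.n and self.dense[i] == v

    def add(self, v):
        if v not in self:
            self.dense[self.n] = v
            self.sparse[v] = self.n
            self.n += 1

    def discard(self, v):
        if v in self:
            i = self.sparse[v]
            last = self.dense[self.n - 1]
            self.dense[i] = last   # move the last member into the hole
            self.sparse[last] = i
            self.n -= 1

    def clear(self):
        self.n = 0                 # O(1): no need to touch the arrays

    def __iter__(self):
        return iter(self.dense[:self.n])
```

The trick is that neither array is ever initialized or scanned: membership is validated by the mutual pointers between `dense` and `sparse`, so even `clear` is a single assignment.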
Open AccessArticle
Multi-Metaheuristic Competitive Model for Optimization of Fuzzy Controllers
Algorithms 2019, 12(5), 90; https://doi.org/10.3390/a12050090
Received: 26 March 2019 / Revised: 17 April 2019 / Accepted: 23 April 2019 / Published: 28 April 2019
Viewed by 637 | PDF Full-text (3929 KB) | HTML Full-text | XML Full-text
Abstract
This article describes an optimization methodology based on a model of competitiveness between different metaheuristic methods. The main contribution is a strategy to dynamically find the algorithm that obtains the best result, based on the competitiveness of the methods in solving a specific problem under performance metrics that depend on that problem. The algorithms used in the preliminary tests are: the firefly algorithm (FA), which is inspired by blinking fireflies; wind-driven optimization (WDO), which is inspired by the movement of the wind in the atmosphere and updates the positions and velocities of wind packages; and finally, drone squadron optimization (DSO), a method with a new and interesting inspiration based on artifacts, in which drones have a command center that sends information to individual drones and updates their software to optimize the objective function. The proposed model helps discover the best method for a specific problem, and also reduces the time spent searching for a method before finding the one that obtains the most satisfactory results. The main idea is that, with this competitiveness approach, methods are tested at the same time until the best one for the problem at hand is found. As preliminary tests of the model, we optimized benchmark mathematical functions and the membership functions of a fuzzy controller for an autonomous mobile robot. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications (volume 2))
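The competitive scheme itself can be sketched generically: run several candidate optimizers on the same problem under the same budget and keep the winner. A minimal Python sketch with two stand-in methods, random search and a simple hill climber (the paper's actual competitors are FA, WDO, and DSO, and its metrics are problem-dependent):

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def random_search(obj, dim, budget, rng):
    pts = rng.uniform(-5, 5, (budget, dim))
    vals = [obj(p) for p in pts]
    i = int(np.argmin(vals))
    return pts[i], vals[i]

def hill_climb(obj, dim, budget, rng):
    x = rng.uniform(-5, 5, dim)
    fx = obj(x)
    for _ in range(budget - 1):
        cand = x + rng.normal(0, 0.5, dim)  # local Gaussian move
        fc = obj(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def compete(obj, dim=3, budget=500, seed=0):
    """Run every method on the same problem and return the winner."""
    rng = np.random.default_rng(seed)
    results = {m.__name__: m(obj, dim, budget, rng)
               for m in (random_search, hill_climb)}
    winner = min(results, key=lambda name: results[name][1])
    return winner, results[winner][1]

name, value = compete(sphere)
```

Running the candidates side by side under one budget is what removes the manual trial-and-error of picking a metaheuristic up front.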
Open AccessArticle
An Algorithm for Producing Fuzzy Negations via Conical Sections
Algorithms 2019, 12(5), 89; https://doi.org/10.3390/a12050089
Received: 31 March 2019 / Revised: 18 April 2019 / Accepted: 25 April 2019 / Published: 27 April 2019
Viewed by 736 | PDF Full-text (1089 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we introduce a new class of strong negations generated via conical sections. The paper focuses on the fact that simple mathematical and computational processes generate new strong fuzzy negations through purely geometrical concepts such as the ellipse and the hyperbola. Well-known negations such as the classical negation and the Sugeno negation are produced via the suggested conical sections. Strong negations are a structural element in the production of fuzzy implications; thus, we obtain a machine for producing fuzzy implications, which can be useful in many areas, such as artificial intelligence and neural networks. The appeal of this construction lies in the contrast between the small effort it requires and the significance of its results: innovative material for the literature in this specific field of mathematics, generated in an effortless, concise, and self-evident manner. Full article
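A strong negation N is a decreasing bijection of [0, 1] with N(0) = 1 and N(1) = 0 that is involutive, i.e., N(N(x)) = x. The Sugeno negation mentioned above is a standard example; the short check below verifies these properties numerically (an illustration of the definition, not the paper's conic-section construction):

```python
def sugeno_negation(x, lam=2.0):
    """Sugeno negation N(x) = (1 - x) / (1 + lam * x), strong for lam > -1."""
    return (1.0 - x) / (1.0 + lam * x)

def is_strong(neg, samples=101, tol=1e-9):
    """Numerically check boundary values, involution, and monotonicity."""
    xs = [i / (samples - 1) for i in range(samples)]
    boundary = abs(neg(0.0) - 1.0) < tol and abs(neg(1.0)) < tol
    involutive = all(abs(neg(neg(x)) - x) < tol for x in xs)
    decreasing = all(neg(a) > neg(b) for a, b in zip(xs, xs[1:]))
    return boundary and involutive and decreasing
```

Any generator passing this check, including the classical negation 1 − x, can then serve as the negation component when assembling fuzzy implications.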
Open AccessFeature PaperReview
Review on Electrical Impedance Tomography: Artificial Intelligence Methods and its Applications
Algorithms 2019, 12(5), 88; https://doi.org/10.3390/a12050088
Received: 13 February 2019 / Revised: 22 April 2019 / Accepted: 23 April 2019 / Published: 26 April 2019
Viewed by 645 | PDF Full-text (261 KB) | HTML Full-text | XML Full-text
Abstract
Electrical impedance tomography (EIT) has been a hot topic among researchers for the last 30 years. It is a comparatively new imaging method that has evolved over the last few decades: by injecting a small amount of current and measuring the resulting voltages, the electrical properties of tissues are determined, and a reconstruction algorithm then transforms these voltages into a tomographic image. EIT poses no identified threats and, compared to imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) scans, it is also cheaper. In this paper, a comprehensive review is presented of the efforts and advancements undertaken in recent work to improve this technology, and of the role of artificial intelligence in solving this non-linear, ill-posed problem. In addition, a review of clinical EIT applications is also presented. Full article
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)
Algorithms EISSN 1999-4893 Published by MDPI AG, Basel, Switzerland