Article

An Efficient Framework for Remote Sensing Parallel Processing: Integrating the Artificial Bee Colony Algorithm and Multiagent Technology

1 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China
2 Department of Geography, University of South Carolina, Columbia, SC 29208, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(2), 152; https://doi.org/10.3390/rs11020152
Submission received: 5 December 2018 / Revised: 9 January 2019 / Accepted: 11 January 2019 / Published: 15 January 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract:
Remote sensing (RS) image processing can be converted to an optimization problem, which can then be solved by swarm intelligence algorithms, such as the artificial bee colony (ABC) algorithm, to improve the accuracy of the results. However, such optimization algorithms often result in a heavy computational burden. To realize the intrinsic parallel computing ability of ABC to address the computational challenges of RS optimization, an improved multiagent (MA)-based ABC framework with a reduced communication cost among agents is proposed by utilizing MA technology. Two types of agents, massive bee agents and one administration agent, located in multiple computing nodes are designed. Based on the communication and cooperation among agents, RS optimization computing is realized in a distributed and concurrent manner. Using hyperspectral RS clustering and endmember extraction as case studies, experimental results indicate that the proposed MA-based ABC approach can effectively improve the computing efficiency while maintaining optimization accuracy.

1. Introduction

Image processing is of great importance for remote sensing (RS) applications [1], such as classification [2], clustering [3,4,5], and endmember extraction [6,7]. Recently, many RS image processing problems have been converted to optimization problems to improve the results’ accuracy [8,9]. For example, an RS clustering problem can be converted to an optimization problem that minimizes the distance between the pixel and the cluster center [10] and an RS endmember extraction problem to a problem that minimizes the remixed error [11,12]. Because these RS optimization problems are nonlinear and are difficult to solve using traditional linear approaches, the artificial bee colony (ABC) algorithm, an outstanding swarm intelligence (SI) algorithm, has been widely used for its ability to address nonlinear problems [5,13,14,15,16]. Experiments have demonstrated the improved results achieved by utilizing this intelligent algorithm.
However, using ABC to solve RS optimization problems is a computationally expensive task [17,18] because ABC is an iterative stochastic search algorithm that is usually executed sequentially in a central processing unit (CPU). In each iteration, each bee in the population must execute time-consuming operations, such as the fitness evaluation of the RS optimization, to obtain new solutions [18]. Therefore, as the complexity of these operations and the RS image volume increase, the computational burden increases substantially, resulting in poor performance.
To contend with the aforementioned computational challenges, efforts have been made to establish parallel computing approaches. The technique of employing graphics processing units (GPUs) is a widely used approach [6,19,20,21]. A GPU has a massively parallel architecture consisting of thousands of small arithmetic logic units (ALUs), which are efficient for handling computing-intensive tasks simultaneously [22]. With GPU-based RS optimization, the data processing behaviors of each individual in SI algorithms that contain large volumes of calculations, especially the most computationally intensive fitness evaluation, are offloaded onto the GPUs’ threads for parallel computation [6,18]. During such parallel computation, an RS image is usually divided into many subimages, and multiple threads execute the same computation on different RS subimages in parallel. However, because each individual in a bee swarm operates on the entire RS image, the product of the number of subimages and the number of individuals often exceeds the number of threads that the GPU hardware can provide; as a result, it is difficult to execute multiple individuals’ calculations in parallel.
To efficiently achieve parallel execution of individuals’ behavior, a multiagent (MA)-based ABC approach for RS optimization was proposed by utilizing distributed parallel computing based on the CPU [17]. This approach treats the food sources and bees in ABC as different agents, which are distributed across multiple processor units (computers or hosts) and behave concurrently. By communicating through the network, different agents interact with each other to obtain an optimal solution, thus significantly increasing the computational efficiency. However, the agents’ behaviors designed in [17] are dispersed redundantly across agent types, which increases the communication cost among agents; this issue is analyzed further in Section 3.
To further increase the computational efficiency of RS optimization while using the ABC algorithm, this paper proposes an improved MA-based ABC approach by appropriately integrating agents’ behaviors to reduce communication among agents. The effectiveness and efficiency of the new method are demonstrated based on RS image clustering [5] and endmember extraction [16]. The remainder of this paper is organized as follows. Section 2 presents relevant theory pertaining to remote sensing optimization, the ABC algorithm, and multiagent system technology. Then, the basic concept of the improved MA-based ABC approach and the framework design are described in detail in Section 3. Section 4 introduces two RS optimization tasks as case studies, clustering and endmember extraction. The corresponding experiments and results are presented in Section 5 and Section 6. Finally, a discussion and a conclusion are provided in Section 7 and Section 8.

2. Theory

2.1. Remote Sensing Optimization

Many RS-related problems, such as clustering, endmember extraction, and target detection, essentially involve maximizing or minimizing certain indexes by computation. For example, for clustering, researchers usually try to minimize the distance among points within a cluster or maximize the distance among multiple classes. In addition, endmember extraction requires maximizing the spectral angle, maximizing the volume of the simplex inscribed within the data points, or minimizing the volume of the simplex enclosing them in spectral space. By treating these indexes as objective functions, these problems can be abstracted as optimization problems expressed as follows:
min/max f(x)   s.t.  x ∈ Ω    (1)
where f(x) is the objective function of an optimization problem, x is a solution, and Ω represents the constraint set that the solution must satisfy.

2.2. The ABC Algorithm

The ABC algorithm [23] is a method for finding an optimal solution to an optimization problem by simulating the foraging behavior of a bee colony in nature. In the ABC algorithm, each scout bee randomly generates a feasible solution (food source) initially. Then, employed bees search around their corresponding food sources (feasible solutions) to generate new solutions with the participation of randomly selected neighborhood solutions. Once all of the food sources are updated with those with better fitness values, each onlooker bee pseudorandomly selects a food source (a feasible solution), searches around it to generate a new solution, and updates the food source with the better solution. If a food source is not updated for a long time, it will be abandoned, and a new food source will be obtained by a scout bee’s random selection. The bees’ behaviors will be iterated until an optimal solution is found. The entire procedure is depicted in Figure 1.
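The procedure above can be sketched as a minimal, sequential implementation. This is an illustrative Python sketch of the generic ABC loop on a stand-in objective, not the paper's distributed Java/JADE framework; all names and parameter values are our own.

```python
import random

def abc_minimize(f, dim, n_bees=10, limit=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal sketch of ABC for min f(x): scout initialization, employed bees,
    onlooker bees with roulette wheel selection, and scout replacement."""
    rng = random.Random(seed)
    lo, hi = bounds

    def rand_sol():
        return [rng.uniform(lo, hi) for _ in range(dim)]

    def fit(u):                                  # fitness for a minimization problem
        return 1.0 / (1.0 + u)

    X = [rand_sol() for _ in range(n_bees)]      # food sources (one per bee)
    F = [f(x) for x in X]                        # objective values
    trials = [0] * n_bees                        # stagnation counters (N_limit)
    best_x, best_f = min(zip(X, F), key=lambda p: p[1])
    best_x = best_x[:]

    def search(k):
        """Neighborhood search around X[k] using a random neighbor, then greedy selection."""
        nonlocal best_x, best_f
        s = rng.choice([i for i in range(n_bees) if i != k])
        r = rng.randrange(dim)
        cand = X[k][:]
        cand[r] += rng.uniform(-1.0, 1.0) * (X[k][r] - X[s][r])
        fc = f(cand)
        if fc < F[k]:                            # keep the better solution
            X[k], F[k], trials[k] = cand, fc, 0
        else:
            trials[k] += 1
        if F[k] < best_f:
            best_x, best_f = X[k][:], F[k]

    for _ in range(iters):
        for k in range(n_bees):                  # employed bee phase
            search(k)
        total = sum(fit(u) for u in F)
        for _ in range(n_bees):                  # onlooker bee phase (roulette wheel)
            pick, acc = rng.random() * total, 0.0
            for k in range(n_bees):
                acc += fit(F[k])
                if acc >= pick:
                    break
            search(k)
        for k in range(n_bees):                  # scout bee phase
            if trials[k] > limit:
                X[k] = rand_sol()
                F[k] = f(X[k])
                trials[k] = 0
    return best_x, best_f
```

With a simple sphere objective, the loop converges toward the origin; a real RS application would substitute the clustering or endmember extraction objective.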

2.3. Multiagent System

An agent is a software component that has autonomy in providing an interoperable interface for a system [24]. The use of a multiagent system (MAS) is a technique for modeling complex problems. An MAS is constructed by multiple autonomous agents that interact with each other directly by communication and negotiation or indirectly by influencing the environment to fulfill local and global tasks [24,25,26]. Combining an MAS with swarm intelligence algorithms, such as ABC, in a distributed and parallel manner can be effective to shorten the computational time of a complex optimization problem [27]. Usually, the individuals in swarm intelligence algorithms can be treated as a series of heterogeneous agents in an MAS involving different computing processors with diverse goals, constraints, and behaviors. By collaborating among these agents, the optimal solution can be achieved in a distributed manner. For example, in [17], each artificial bee and food source are implemented as independent software agents who run separately and simultaneously in an MAS, with an administration agent controlling the workflow of the RS clustering algorithm. One major advantage of such an MAS is a reduction in computational time because the computational burdens are offloaded onto different processors. Furthermore, the failure of one agent will not disturb the entire algorithm’s calculation, which is helpful for ensuring the robustness of the optimization framework.

3. Framework Design

The design of the improved MA-based ABC framework mainly consists of three parts: agents’ role design, communication design, and behavior design. This section first elaborates the design of each part and then compares this improved framework with the former framework proposed in [17].

3.1. Agents’ Role Design

Two types of agent roles are designed in this framework, massive bee agents and one administration agent. These agents are located in different computing nodes within the same network through which they can communicate with each other via messages. The agents’ role design is depicted in Figure 2.
In [17], bee agents are responsible only for the neighborhood search that generates new solutions in the employed and onlooker bee phases, which introduces an extra communication cost, as analyzed in Section 3.5. In this paper, we redesign the bee agent to decrease the frequency of communication. In addition to the neighborhood search, each bee agent executes further tasks to maintain its corresponding solution: (1) generating a random solution in the initial phase and the scout bee phase, (2) evaluating the solution’s fitness, (3) updating the maintained solution, and (4) recording the counter N_limit, which tracks how long a solution has gone without being updated, to control the initiation and termination of the scout bee phase. These four behaviors are assigned to food source agents in [17].
Similarly to [17], the administration agent is responsible for the overall control of the algorithm, which includes the following functions: (1) exerting control over agents’ lifecycle, namely, generating new bee agents in different computing nodes during the initial stage and killing them at the end of the algorithm; (2) executing data initialization; (3) determining the solutions participating in the neighborhood search; (4) performing iteration and convergence control; and (5) recording and outputting the optimal solution.

3.2. Agents’ Communication Design

In this paper, a message-passing mechanism is adopted for the smooth implementation of the algorithm. All agents communicate with each other through the network by messages. According to the standard of agent communication language (ACL), each message contains at least five fields: the sender, the receivers, contents, language, and a communicative act [24].
For example, in ABC’s employed bee phase, the administrator agent will pass a neighborhood solution to each bee agent before executing the neighborhood search. Therefore, the sender of the message is the administrator agent, and the receiver is a bee agent. The message content contains the neighborhood solution, which is coded in the language of Java by serialization in our design. Under these circumstances, the sender (the administrator agent) wants the receiver (a bee agent) to perform an action (begin its neighborhood search); thus, the communicative act should be set as REQUEST. However, in certain other situations, the sender only wants the receiver to be aware of a fact, such as a bee agent notifying the administrator agent while completing a scout bee behavior; thus, the communicative act should be set as INFORM.
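The five mandatory fields of such a message can be mirrored in a small data structure. This is only an illustrative Python stand-in: the paper's framework uses JADE's ACLMessage class in Java with Java serialization, and the agent names below are hypothetical.

```python
import pickle
from dataclasses import dataclass
from typing import List

@dataclass
class AclMessage:
    """Hypothetical stand-in for an ACL message with its five mandatory fields."""
    sender: str
    receivers: List[str]
    content: bytes        # e.g. a serialized neighborhood solution
    language: str         # how the content is encoded (Java serialization in the paper)
    performative: str     # the communicative act: REQUEST, INFORM, ...

# The administrator asks a bee agent to start a neighborhood search
# around the attached neighborhood solution.
neighborhood_solution = [0.12, 0.57, 0.33]
msg = AclMessage(
    sender="administrator",
    receivers=["bee-7"],
    content=pickle.dumps(neighborhood_solution),  # pickle stands in for Java serialization
    language="pickle",
    performative="REQUEST",
)
```

An INFORM message, such as a bee agent reporting a completed scout behavior, would differ only in the performative and content fields.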

3.3. Agents’ Behavior Design

The agents’ behavior in an MA framework is tightly coupled with the procedure of the ABC algorithm. There are five phases in ABC: the initial phase, the employed bee phase, the onlooker bee phase, the scout bee phase, and the convergence judgment phase (Figure 3).
(1) Initialization phase
First, we launch an administration agent and set initial parameters, including MA-related data, such as the number of bee agents, the network address list of computing nodes that can participate in the parallel computation, and RS-optimization-related initial data, such as the number of clustering centers in the problem of hyperspectral image clustering.
Then, the administration agent will generate multiple bee agents in different computing nodes according to the parameters of the network address list and pass the RS-optimization-related initial parameters to each bee agent.
After receiving the initial parameters, each bee agent will generate a random solution.
(2) Employed bee phase
First, the administration agent will pass a random neighborhood solution to each bee agent through the network. Then, the k-th bee agent maintaining solution X_k = {x_{k,1}, x_{k,2}, …, x_{k,m×L}} with fitness fit_k will receive the solution X_s as a neighborhood solution, where m×L is the solution dimension and k ≠ s. The k-th bee agent then executes a neighborhood search to generate a new candidate solution X′_k according to Equation (2):
X′_{k,r} = X_{k,r} + Φ_{k,r} × (X_{k,r} − X_{s,r})    (2)
where r is a random dimension index selected from the set {1, 2, …, m×L} and Φ_{k,r} is a random number within [−1, 1]. For a minimal optimization problem, when a new solution X′_k is generated, its fitness fit′_k will be calculated via Equation (3) after its objective function value U(k) is obtained. Then, a greedy selection is used to improve the fitness of the k-th bee agent’s solution: if fit′_k is better than the original solution’s fitness fit_k, the solution is replaced by the new one; otherwise, the counter is incremented, N_limit = N_limit + 1. Later, the updated solution is passed to the administrator agent through the network.
fit_k = 1 / (1 + U(k))    (3)
It should be noted that the objective function calculation U(k) is a problem-specific process. How the solution’s objective function value is calculated is irrelevant to the MA framework, since only the function value is needed to evaluate the fitness. The objective function calculation can therefore be loosely coupled with the MA-based approach by providing each agent with a calculation interface.
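This loose coupling can be sketched as follows: the bee agent scores any solution through a single callable, so the fitness logic of Equation (3) never needs to know how U(k) is computed. The sphere objective below is a stand-in of our own, not one of the paper's RS objectives.

```python
def fitness(objective_value):
    # Eq. (3): fitness of a solution for a minimal optimization problem
    return 1.0 / (1.0 + objective_value)

def evaluate(solution, objective_fn):
    """A bee agent evaluates a solution through a problem-agnostic interface:
    any callable mapping a solution to its objective value U(k) can be plugged in."""
    return fitness(objective_fn(solution))

# Stand-in objective; a real agent would plug in the MRF clustering
# or endmember extraction objective here.
sphere = lambda x: sum(v * v for v in x)
```

Swapping the objective changes nothing in the agent code, which is the point of providing the calculation interface.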
(3) Onlooker bee phase
Once the administrator agent receives all bee agents’ fitness, a random selection probability for each bee agent will be calculated according to Equation (4).
p_k = fit_k / Σ_{k=1}^{BN} fit_k    (4)
where p_k is the selection probability of the k-th bee agent, fit_k is the fitness value, and BN is the number of bee agents. This probability gives a solution with better fitness a greater chance of being selected by an onlooker bee than solutions with worse fitness.
Then, the administrator agent will pass two solutions, X_k and X_r, to each bee agent, where X_k is obtained by roulette wheel selection according to the selection probabilities and X_r is selected randomly. Later, each bee agent will execute a neighborhood search according to Equation (2) and calculate the newly generated solution’s fitness via Equation (3). If the new solution’s fitness is worse than that of solution X_k, then N_limit = N_limit + 1; otherwise, the newly generated solution will be transferred to the k-th bee agent to replace the original solution. Finally, all bee agents’ solutions will be transferred to the administrator agent to help it update each bee’s best-so-far solution.
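The roulette wheel selection driven by Equation (4) can be sketched as below (illustrative Python, not the framework's Java code):

```python
import random

def selection_probabilities(fits):
    # Eq. (4): p_k = fit_k divided by the sum of all fitness values
    total = sum(fits)
    return [f / total for f in fits]

def roulette_select(fits, rng):
    """Return an index with probability proportional to its fitness."""
    pick = rng.random() * sum(fits)
    acc = 0.0
    for k, f in enumerate(fits):
        acc += f
        if acc >= pick:
            return k
    return len(fits) - 1   # guard against floating-point round-off
```

Over many draws, a solution holding 80% of the total fitness is selected roughly 80% of the time, so better solutions attract more onlooker bees.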
To further improve the parallel computation of the entire framework, the employed and onlooker bee phases could be carried out simultaneously.
(4) Scout bee phase
After the onlooker bee phase, each bee agent will judge whether its counter N_limit exceeds a predefined number, limit. If it does, the original solution is abandoned, and a new solution will be generated randomly.
(5) Convergence judgment phase
If the iteration meets the convergence condition, the administrator agent will kill all bee agents and export the best-so-far solution in its memory as the optimal solution. Otherwise, the employed bee, onlooker bee, and scout bee operations will be executed repeatedly.

3.4. Computational Complexity

If the numbers of employed and onlooker bees are both BN, the maximum iteration number is T, the parameter related to a scout bee’s behavior of abandoning a solution is N_limit, a solution’s dimension (for example, the number of endmembers or clustering centers in the problems of endmember extraction and clustering) is M, and the number of parallel computing nodes is C (C ≤ BN), then the time complexity of the framework can be represented as in Table 1, where g(*) is the complexity of the RS optimization objective function value calculation.

3.5. Comparison

In the MA-based ABC proposed in [17], a food source agent is responsible only for a solution’s maintenance, and a bee agent is responsible only for the neighborhood search (shown in Figure 4a). Because a bee agent does not store a solution, whenever it executes a neighborhood search in the employed and onlooker bee phases, it has to solicit two solutions from two different food source agents through the network (shown as step 1 in Figure 4a). Subsequently, the newly generated solution must also be passed back to its corresponding food source agent to update the solution (shown as step 3 in Figure 4a). These frequent communications in the MA-based ABC reduce the computational performance.
In the improved MA-based ABC framework proposed in this paper, each agent exhibits both behaviors (solution maintenance and neighborhood search), and the neighborhood search can be executed directly on the solution the agent maintains. As a result, only one neighbor solution has to be passed to an agent throughout the employed bee phase (shown in Figure 4b) and in part of the onlooker bee phase (whenever one of the two selected solutions for a bee happens to be the one it maintains). Thus, the frequency of transferring solutions among agents is effectively reduced, which helps improve the efficiency of the parallel computation.
To quantitatively analyze the improvement, the number of transferred solutions among agents in one iteration can be listed as shown in Table 2, which indicates that the improved framework proposed in this paper will spend less time on communication than the former framework [17] does, thus achieving higher efficiency.

4. Case Studies

To validate the effectiveness and efficiency of the proposed MA-based ABC approach while solving the computational challenges of RS optimization, an image clustering problem considering Markov random fields (MRFs) [5] and endmember extraction [16] are taken as case studies.

4.1. RS Optimization for Clustering

The model aims to minimize the total MRF classification discriminant function value, which can be summarized as the following optimization problem:
min U = Σ_{i=1}^{n} U_{i,ω_i}    (5)
s.t.  ω_i = arg min_j {U_{ij}}    (6)
U_{ij} = d_{ij} + β b_{ij} = ||r_i − c_j||² + β Σ_{i′} (1 − δ(j, ω_{i′}))    (7)
d_{ij} = d(r_i, c_j) = ||r_i − c_j||²    (8)
b_{ij} = Σ_{i′} (1 − δ(j, ω_{i′}))    (9)
1 ≤ j ≤ m    (10)
where U_{ij} is the MRF classification discriminant function, a combination of spectral and spatial similarity (shown in (7)). The Euclidean distance d_{ij} between the pixel r_i and the cluster center c_j, shown in (8), reflects the degree of spectral similarity between r_i and class X_j. b_{ij} represents the spatial similarity between pixel r_i and class X_j, which can be obtained by Function (9). i′ represents a pixel in the neighborhood of r_i, ω_{i′} represents the class of that neighboring pixel, and δ(·,·) represents the Kronecker function. The parameter β is used to control the influence of the spatial information during classification, n is the number of pixels in the RS image, and m is the number of clusters. The objective function calculation of this model is detailed in [5].
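For concreteness, the discriminant can be evaluated for a single pixel as below. The spectra, neighborhood labels, and β value are made up purely for illustration and are not from the paper's datasets.

```python
# Toy evaluation of the MRF discriminant U_ij = d_ij + beta * b_ij for one pixel.
pixel = [0.30, 0.52, 0.41]           # spectrum r_i (hypothetical 3-band pixel)
center = [0.28, 0.50, 0.45]          # cluster center c_j
neighbor_labels = [1, 1, 2, 1]       # classes of the 4 neighbors of r_i
j, beta = 1, 0.5                     # candidate class and spatial weight

# Eq. (8): squared Euclidean distance (spectral similarity)
d_ij = sum((r - c) ** 2 for r, c in zip(pixel, center))
# Eq. (9): count of neighbors NOT in class j, i.e. 1 - Kronecker delta per neighbor
b_ij = sum(0 if lab == j else 1 for lab in neighbor_labels)
# Eq. (7): combined discriminant
U_ij = d_ij + beta * b_ij
```

Larger β makes the label of the neighborhood dominate over the spectral distance, which is exactly the trade-off the parameter controls.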

4.2. RS Optimization for Endmember Extraction

One RS optimization for endmember extraction can be modeled to minimize the volume of endmembers and the root-mean-square error (RMSE) value of the extracted results. The model can be expressed as follows.
min f(E) = V({ẽ_j}_{j=1}^{M}) + μ · RMSE({r̃_i}_{i=1}^{N}, {ẽ_j}_{j=1}^{M})    (11)
r̃_i = Σ_{j=1}^{M} α_{ij} ẽ_j + ε_i    (12)
Σ_{j=1}^{M} α_{ij} = 1, ∀i    (13)
α_{ij} ≥ 0, ∀i, j    (14)
V({ẽ_j}_{j=1}^{M}) = (1 / (M − 1)!) · |det([1 1 … 1; ẽ_1 ẽ_2 … ẽ_M])|    (15)
RMSE({r̃_i}_{i=1}^{N}, {ẽ_j}_{j=1}^{M}) = sqrt( (1/N) Σ_{i=1}^{N} ||r̃_i − Σ_{j=1}^{M} α̂_{ij} ẽ_j||² )    (16)
{ẽ_j}_{j=1}^{M} represents the M endmembers in the dimension-reduced hyperspectral image {r̃_i}_{i=1}^{N} with N pixels; α_{ij} is the abundance, which represents the proportion of the j-th endmember in the i-th pixel; and ε_i is the random error. V({ẽ_j}_{j=1}^{M}) is the volume of the simplex whose vertices are {ẽ_j}_{j=1}^{M}.
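The two terms of the objective can be sketched numerically as below; this is an illustrative NumPy sketch with toy data, assuming the endmembers have already been reduced to M−1 dimensions so that the matrix in Equation (15) is square.

```python
from math import factorial
import numpy as np

def simplex_volume(E):
    """Eq. (15): volume of the simplex spanned by M endmembers, where E is an
    (M-1) x M array holding one dimension-reduced endmember per column."""
    M = E.shape[1]
    A = np.vstack([np.ones((1, M)), E])   # prepend the row of ones
    return abs(np.linalg.det(A)) / factorial(M - 1)

def remix_rmse(R, E, alphas):
    """Eq. (16): RMSE between the pixels R (dims x N) and their remix E @ alphas,
    where alphas is the M x N abundance matrix."""
    residual = R - E @ alphas
    return float(np.sqrt(np.mean(np.sum(residual ** 2, axis=0))))
```

For M = 3 endmembers at (0,0), (1,0), and (0,1), the simplex is the unit right triangle, and pixels remixed exactly from the endmembers give an RMSE of zero.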

5. Experiments

5.1. Experimental Data

5.1.1. Dataset 1

The Pavia dataset with a spatial resolution of 1.3 m was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) over the University of Pavia, Italy, in 2001. The dataset contains 103 bands (after the removal of the water vapor absorption bands and bands with a low signal-to-noise ratio (SNR)) with a wavelength range of 430–860 nm and covers an area of 610 × 340 pixels. A false color composite image (bands 80, 45, and 10) is shown in Figure 5a. The ground truth dataset, which contains nine classes, is shown in Figure 5b.

5.1.2. Dataset 2

The Indian Pine dataset with a spatial resolution of 20 m was obtained by the airborne visible/infrared imaging spectrometer (AVIRIS) in 1992. The dataset contains 169 bands (after the removal of the water vapor absorption bands and low-SNR bands) with a wavelength range of 400–2500 nm and covers an area of 145 × 145 pixels. A false color composite image (bands 54, 33, and 19) is shown in Figure 6a. The ground truth dataset, which contains 16 classes, is shown in Figure 6b.

5.2. Experimental Design

5.2.1. Comparison Experiments

All experiments were carried out on multiple computing nodes with the same hardware configuration, shown in Table 3. The MA framework was developed in Java using the Java Agent Development Framework (JADE), and all objective function calculations were coded in MATLAB and imported into the MA framework as a JAR package.
For each dataset, two comparison experiments were designed to validate the MA-based ABC approach in two respects: optimal solution accuracy and calculation efficiency. The accuracy of the optimal solutions is evaluated by comparing with the original algorithms implemented on a single computer. The calculation efficiency is evaluated by comparing with the MA framework proposed in [17].
Notably, weight values are involved in both case studies, for example, β in formulation (7) and μ in formulation (11), and they affect the solution quality and the convergence speed. However, the goal of the experiments designed here is to prove that, with the same parameter settings, the solution accuracy does not decrease under the different frameworks while the calculation efficiency improves. Thus, both weight values in the two case studies are set to 1000 according to [5]. It should be noted that this weight value setting may not be the best choice for all RS datasets and case studies.

5.2.2. Evaluation Criteria

Two types of criteria are used to evaluate the accuracy and efficiency.
(1) Accuracy criteria
• RS optimization for clustering
For the RS optimization problem of image clustering considering MRF, four criteria are chosen to evaluate the results’ accuracy: (1) purity, (2) normalized mutual information (NMI), (3) adjusted random index (ARI), and (4) segmentation accuracy (SA).
Purity is a simple and transparent measure for evaluating how well the clustering matches the ground truth data, which can be calculated as follows:
purity(C, G) = (1/N) Σ_j max_k |c_j ∩ g_k|
where C = {c_1, c_2, …, c_J} is the set of clusters and G = {g_1, g_2, …, g_K} is the set of classes of the ground truth. c_j is the set of image pixels in cluster j, and g_k is the set of pixels in class k. The closer the value of purity is to 1, the better the clustering result is. High purity is easy to achieve when the number of clusters is large, but purity cannot be used to trade off the quality of the clustering against the number of clusters.
To make this tradeoff, NMI can be introduced.
NMI(C, G) = [ Σ_j Σ_k P(c_j ∩ g_k) log( P(c_j ∩ g_k) / (P(c_j) P(g_k)) ) ] / [ −Σ_j P(c_j) log P(c_j) ]
where P(c_j), P(g_k), and P(c_j ∩ g_k) are the probabilities of a pixel being in cluster c_j, in class g_k, and in the intersection of c_j and g_k, respectively. The value of NMI is normalized within the range [0, 1]. A large value of NMI corresponds to a high-quality result.
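Both criteria can be computed directly from the two label sequences. This is a small illustrative sketch with toy labels; the NMI normalizer follows the clustering-entropy form of the formula above.

```python
from math import log
from collections import Counter

def purity(clusters, classes):
    """Purity: each cluster contributes the size of its majority ground-truth class."""
    hit = 0
    for c in set(clusters):
        members = [g for cl, g in zip(clusters, classes) if cl == c]
        hit += Counter(members).most_common(1)[0][1]
    return hit / len(clusters)

def nmi(clusters, classes):
    """Mutual information between the two partitions, normalized by the
    clustering entropy H(C), matching the formula above."""
    n = len(clusters)
    pc = {c: v / n for c, v in Counter(clusters).items()}
    pg = {g: v / n for g, v in Counter(classes).items()}
    pj = {cg: v / n for cg, v in Counter(zip(clusters, classes)).items()}
    mi = sum(p * log(p / (pc[c] * pg[g])) for (c, g), p in pj.items())
    h_c = -sum(p * log(p) for p in pc.values())
    return mi / h_c
```

A clustering that matches the ground truth up to relabeling scores 1.0 on both criteria, while a clustering independent of the ground truth scores an NMI of 0.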
ARI is used to evaluate the degree of consistency between the classification results and the test samples. The clustering result and the ground truth data are two different class partitions of pixels. For an image with n pixels, a contingency table, such as that shown in Table 4, can be obtained by calculating the parameters a, b, c and d according to [28].
The ARI is calculated as follows:
ARI = [ C(n,2)(a + d) − ((a + b)(a + c) + (c + d)(b + d)) ] / [ C(n,2)² − ((a + b)(a + c) + (c + d)(b + d)) ]
The value of the ARI lies between 0 and 1. The higher the ARI value is, the better the classification result is.
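Given the four pair counts of the contingency table, the ARI reduces to a one-line computation, using the identity a + b + c + d = C(n,2). The counts below are made up for illustration.

```python
def adjusted_rand_index(a, b, c, d):
    """ARI from pair counts: a = pixel pairs grouped together in both partitions,
    d = pairs separated in both, b and c = the disagreeing pairs.
    Since a + b + c + d = C(n,2), the formula above simplifies as follows."""
    n_pairs = a + b + c + d
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n_pairs
    return (a + d - expected) / (n_pairs - expected)
```

Perfect agreement (b = c = 0) yields an ARI of 1, and increasing the disagreeing counts b and c pulls the index down.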
SA, an evaluation index for the classification accuracy, can be obtained from the power of spectral discrimination (PWSD), which measures the degree of difference between two different cluster centers for the same pixel. For pixel x_i and cluster centers c_{j1} and c_{j2}, the PWSD Ω(c_{j1}, c_{j2}; x_i) is as follows:
Ω(c_{j1}, c_{j2}; x_i) = max{ SAM(c_{j1}, x_i) / SAM(c_{j2}, x_i), SAM(c_{j2}, x_i) / SAM(c_{j1}, x_i) }
where SAM(c_j, x_i) = cos⁻¹( c_jᵀ x_i / (||c_j||₂ ||x_i||₂) ) is the spectral angle distance between x_i and c_j. For an RS image, SA can be formulated as follows [29]:
SA = (1/n) Σ_{i=1}^{n} [ Σ_{j=1, j≠ω_i}^{m} Ω(c_{ω_i}, c_j; x_i) ] / (m − 1)
As the distinction between cluster centers and pixels increases, the corresponding values of PWSD and SA also increase. Therefore, large SA and PWSD values correspond to a high-quality clustering result.
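The three formulas above can be sketched together as follows; the toy centers and pixels are invented for illustration only.

```python
from math import acos, sqrt

def sam(c, x):
    """Spectral angle between a cluster center c and a pixel x."""
    dot = sum(ci * xi for ci, xi in zip(c, x))
    norm = sqrt(sum(v * v for v in c)) * sqrt(sum(v * v for v in x))
    return acos(max(-1.0, min(1.0, dot / norm)))   # clamp for round-off

def pwsd(c1, c2, x):
    """PWSD: the larger of the two ratios of spectral angles for the same pixel."""
    a, b = sam(c1, x), sam(c2, x)
    return max(a / b, b / a)

def segmentation_accuracy(pixels, labels, centers):
    """SA: mean over pixels of the average PWSD between a pixel's own center
    (index labels[i]) and each of the other m-1 centers."""
    m = len(centers)
    total = 0.0
    for x, w in zip(pixels, labels):
        total += sum(pwsd(centers[w], centers[j], x)
                     for j in range(m) if j != w) / (m - 1)
    return total / len(pixels)
```

Because PWSD takes the larger of the two angle ratios, it is always at least 1, so SA grows as the centers become more distinguishable for each pixel.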
• RS optimization for endmember extraction
For the RS optimization problem of endmember extraction, the RMSE is a commonly used accuracy criterion. The RMSE quantifies the error between the original hyperspectral image and the remixed image, which represents the generalized degree of image information provided by the extracted endmembers [30]. The RMSE is calculated by Equation (16).
(2) Efficiency criteria
We use the classical notions of speedup and computational efficiency.
The speedup S_C = T_1 / T_C of a distributed application measures how much faster the algorithm runs when implemented on multiple computing nodes than on a single computing node. The computational efficiency e_C = S_C / C measures the average speedup per node in a cluster computing environment. Here, C is the number of computing nodes, T_1 is the execution time on a single computing node, and T_C is the algorithm’s execution time on C computing nodes.
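As a worked example with hypothetical timings (not taken from the paper's tables):

```python
def speedup(t1, tc):
    """S_C = T_1 / T_C."""
    return t1 / tc

def efficiency(t1, tc, c):
    """e_C = S_C / C, the average per-node speedup."""
    return speedup(t1, tc) / c

# Hypothetical run: 100 s on one node, 12.5 s on 10 nodes.
# Communication overhead keeps e_C below the ideal value of 1.
s10 = speedup(100.0, 12.5)        # 8.0
e10 = efficiency(100.0, 12.5, 10) # 0.8
```

This mirrors the pattern reported later: speedup rises with the node count while per-node efficiency falls as communication costs grow.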

6. Results

6.1. Accuracy

According to the logic of the ABC algorithm, there is no difference in accuracy between the improved framework reported in this paper and the algorithm without MA technology. In other words, the different parallel design will not affect the algorithm’s accuracy, which is validated in this section.

6.1.1. Accuracy of Clustering

The MA-based ABC framework coupled with the ABC-MRF-cluster algorithm in [5] and the ABC-MRF-cluster without using MA technology were run 10 times for comparison.
The median, mean, and standard deviation of the objective function and the t-test value are listed in Table 5 and Table 6. The values of the ARI and PWSD are similar and remain stable, which means that the accuracy of ABC-MRF-cluster classification accelerated by MA technology is nearly the same as that of ABC-MRF-cluster classification executed on a single computer. Additionally, both standard deviations of the objective function, which are much smaller than the mean and median values, prove the algorithm’s stability. Furthermore, t-test values, which can be used to evaluate whether two sets of data are significantly different from each other, are calculated. The p-values of all criteria between the ABC-MRF-cluster and the MA-based ABC are much greater than the threshold value of 0.05, which proves that there is no notable difference between the optimization results of the MA-based ABC framework and the ABC algorithm without MA technology. Therefore, we can conclude that the MA-based approach will not affect the optimization accuracy.

6.1.2. Accuracy of Endmember Extraction (EE)

The MA-based ABC framework improved upon in this paper coupled with the ABC-EE algorithm in [16] and the ABC-EE without using MA technology were run 10 times for comparison. The accuracy statistics are shown in Table 7. The p-value of the RMSE between ABC-EE and the MA-based ABC is greater than the threshold value of 0.05, which also proves that there is no notable difference between the optimization results of the MA-based ABC framework and the ABC algorithm without MA technology.

6.2. Efficiency

6.2.1. Efficiency of Clustering

(1) Comparison with the framework proposed in [17]
To validate the improvement in the enhanced MA-based ABC framework, a comparison with the former framework proposed in [17] was made to solve the same RS optimization in the same computing environment and under the same parameter settings. When performing all the experiments, all of the bee agents were uniformly distributed on each computation node and the administrator agent was randomly distributed on one node.
The comparison results are shown in Table 8. In the one-node computation environment, the computation time consumed per iteration by the improved framework is shorter than that of the former framework, with an average efficiency improvement of 5.83% for dataset 1 and 6.57% for dataset 2. As the number of computation nodes increases, the average efficiency gap between the improved and the former framework grows, exceeding 50% at 20 nodes for both dataset 1 and dataset 2. The results indicate that the improved MA-based ABC framework is more efficient than the framework proposed in [17].
(2) Influence of the quantity of computation nodes
To better analyze the influence of the quantity of computation nodes participating in the MA-based ABC framework for RS optimization, a series of comparison experiments were performed by setting the population of ABC to 10, 20, and 40 in different computation environments with 1, 2, 5, 10, and 20 nodes. Each experiment was performed 10 times. The statistical results are shown in Figure 7 and Table 8.
As shown in Figure 7, regardless of how many bees are involved in the computation, the average computation time per iteration decreases dramatically as the number of computation nodes increases. In addition, each bee's average computation time per iteration was calculated (Table 9). These results indicate that the speedup of the improved MA-based ABC framework increases significantly with the number of nodes in the parallel computation environment. However, the computational efficiency of each node decreases nonlinearly, because the communication cost among nodes within the network grows as more nodes are added.
The statistical values of the improved MA-based ABC framework's efficiency criteria are presented in Table 9. As the number of computing nodes increases, the speedup rises significantly, since all calculations are carried out concurrently at multiple nodes. The results also show that, with more nodes, the computational efficiency of each node decreases, as a higher network communication cost to the other nodes is incurred.
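The two efficiency criteria can be computed directly from the per-bee iteration times; a minimal sketch using the dataset 1 values reported in Table 9:

```python
# Per-bee computation time per iteration (seconds) for dataset 1, keyed by
# the number of computing nodes (values from Table 9).
times = {1: 2.3277588, 2: 1.2306427, 5: 0.6394509, 10: 0.3329333, 20: 0.2026791}

def speedup(times, n):
    """S(n) = T(1) / T(n): how much faster n nodes are than one node."""
    return times[1] / times[n]

def efficiency(times, n):
    """E(n) = S(n) / n: the fraction of each node's capacity utilized."""
    return speedup(times, n) / n

criteria = {n: (speedup(times, n), efficiency(times, n)) for n in times}
```

Running this reproduces the pattern in Table 9: the speedup grows monotonically with the node count while the per-node efficiency falls.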

6.2.2. Efficiency of Endmember Extraction

With 60 bees used in the experiments for both datasets, the improved framework was compared with the former framework in [17] in parallel environments with 1, 2, 5, 10, and 20 computing nodes.
The efficiency statistics of these two frameworks for endmember extraction are recorded in Table 10. Clearly, the computation time per iteration of the improved framework is shorter than that of the former framework for both datasets. In particular, as more nodes are added in the computing environment, the efficiency advantage of the improved framework becomes more prominent.
The speedup and computational efficiency are depicted in Figure 8. Comparing lines of the same color shows that, as the number of computing nodes increases, the speedup increases dramatically while the computational efficiency decreases because of the rising communication cost among nodes. Comparing lines with the same marker shape (circles or rectangles) shows that the improved framework outperforms the former framework, with a higher speedup and a slower decline in efficiency. Comparing lines of the same type shows that both the speedup and the computational efficiency of dataset 1 are much better than those of dataset 2. The reason is that, after sampling, calculating the objective function value takes less time for dataset 1 than for dataset 2, so the communication cost accounts for a higher proportion of dataset 1's total computation cost; saving the same communication cost therefore yields a greater improvement.
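This last effect can be illustrated with a toy cost model (an assumed functional form, not fitted to the paper's measurements): per-iteration time splits into a computation term that divides across nodes and a communication term that grows with the node count, and the workload whose computation term is smaller gains more from saving the same communication cost.

```python
def time_per_iter(compute, comm_per_node, nodes):
    """Toy model: computation divides across nodes, communication grows
    with the node count (illustrative assumption only)."""
    return compute / nodes + comm_per_node * nodes

def relative_gain(compute, comm, nodes, comm_saved):
    """Relative reduction in iteration time when a fraction `comm_saved`
    of the communication cost is eliminated."""
    before = time_per_iter(compute, comm, nodes)
    after = compute / nodes + comm * (1 - comm_saved) * nodes
    return (before - after) / before

# Two hypothetical workloads with the same communication cost but different
# objective-function costs (dataset 1 being the cheaper one after sampling).
gain_cheap  = relative_gain(compute=10.0, comm=0.05, nodes=20, comm_saved=0.5)
gain_costly = relative_gain(compute=40.0, comm=0.05, nodes=20, comm_saved=0.5)
```

Under these assumed numbers, the cheaper workload's relative gain is twice that of the costlier one, mirroring the dataset 1 versus dataset 2 behavior.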

7. Discussion

7.1. Stability

In the proposed framework, the failure of a single computing node does not cause the overall computation to fail, which gives the framework good computational stability. This stability is achieved by predefining a time limit t for each node's calculation. If the administrator agent does not receive the results returned by a bee agent within time t after a calculation instruction is sent, it concludes that the node's calculation or the network communication failed. In this case, the administrator agent resends the calculation instruction to obtain the correct calculation results from the bee agents.
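The timeout-and-resend logic can be sketched as follows; the function and the simulated failing node are hypothetical illustrations of the mechanism, not the paper's implementation:

```python
import time

def request_with_resend(send, timeout, max_resends=3):
    """Sketch of the administrator agent's fault handling: resend a
    calculation instruction until a result arrives within the time limit.
    `send` stands in for dispatching an instruction to a bee agent and
    waiting for its reply (returns None on failure)."""
    for _attempt in range(1 + max_resends):
        start = time.monotonic()
        result = send()
        if result is not None and time.monotonic() - start <= timeout:
            return result
    raise RuntimeError("bee agent unreachable after resends")

# Simulate a node whose first two replies are lost before one succeeds.
replies = iter([None, None, 42.0])
value = request_with_resend(lambda: next(replies), timeout=1.0)
```

Because the administrator only needs objective-function values back, re-issuing the same instruction is idempotent, which is what makes this simple retry scheme safe.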

7.2. Scalability

The proposed computational framework maintains good scalability. Any newly added bee agent can perform optimization together with previously deployed bee agents, as long as it is deployed in the same communication network and registered with the administrator agent. Therefore, the number of computation nodes in the parallel computation is easily increased, and the computation scale is easily expanded. Similarly, if certain computing nodes are no longer needed for parallel computing, their network IP addresses can simply be deleted from the administrator agent.
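The registration bookkeeping described above amounts to a membership set maintained by the administrator agent; a minimal sketch (class and method names are hypothetical):

```python
class AdministratorRegistry:
    """Sketch of node membership at the administrator agent: bee agents
    join or leave the computation by adding or removing their node's IP."""

    def __init__(self):
        self.nodes = set()

    def register(self, ip):
        # A newly deployed bee agent announces itself before computing.
        self.nodes.add(ip)

    def deregister(self, ip):
        # A node no longer needed is simply dropped from the set.
        self.nodes.discard(ip)

registry = AdministratorRegistry()
registry.register("192.168.0.11")
registry.register("192.168.0.12")
registry.deregister("192.168.0.11")
```

Because membership is the only shared state, scaling up or down requires no change to the bee agents themselves.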

7.3. Flexibility

Hyperspectral RS image clustering and endmember extraction were applied to validate the performance of the MA-based ABC approach; however, the parallel computing framework proposed in this paper can be easily applied to many other RS optimization problems. In this framework, all the behaviors of the administrator agent, as well as the communication and neighborhood search behaviors of the bee agents, are universal and can be reused for most RS optimization problems. Different RS optimization problems can be solved simply by modifying each bee agent's objective function calculation method and its required parameters. Therefore, the parallel computing framework proposed in this paper has good flexibility.
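The reuse point can be sketched as a bee agent parameterized by its objective function; the class and the two stand-in objectives below are illustrative, not the paper's actual fitness functions:

```python
class BeeAgent:
    """Sketch of the flexibility argument: the generic agent behaviors stay
    fixed, and only the problem-specific objective is swapped."""

    def __init__(self, objective):
        self.objective = objective  # the only problem-specific part

    def evaluate(self, solution):
        # Communication and neighborhood search behaviors would live here
        # unchanged; only this call differs between RS optimization problems.
        return self.objective(solution)

# Two stand-in objectives: a clustering-style compactness score and an
# endmember-style spread measure (both purely illustrative).
clustering_bee = BeeAgent(lambda s: sum((x - 0.5) ** 2 for x in s))
endmember_bee = BeeAgent(lambda s: max(s) - min(s))

e1 = clustering_bee.evaluate([0.5, 0.5])
e2 = endmember_bee.evaluate([0.2, 0.9])
```

Swapping in a new problem therefore touches one function and its parameters, leaving the distributed machinery untouched.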

8. Conclusions

In this paper, an improved parallel processing approach integrating the ABC optimization algorithm and multiagent technology is proposed. Taking hyperspectral RS image clustering and endmember extraction as examples, two types of agents are designed: an administrator agent and multiple bee agents. By executing each agent's behaviors and the communication among agents, an optimal result is obtained through parallel computation with dramatically increased efficiency and without sacrificing accuracy. A series of experiments proves that the improved MA-based ABC framework achieves a greater enhancement in parallel computational efficiency than the framework proposed in [17]. Finally, integrating MA and GPU technology by offloading each individual's behaviors to the GPU's arithmetic logic units within this MA-based ABC framework could be an even more efficient approach, which should be studied further.

Author Contributions

Methodology, L.Y.; Resources, X.S.; Software, L.Y.; Validation, X.S.; Writing–original draft, L.Y.; Writing–review & editing, Z.L.

Funding

This research was funded by the Director Foundation of the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences under Grant Y6SJ2300CX and by the National Natural Science Foundation of China under Grant 41571349.

Acknowledgments

We thank Axing Zhu and Qunying Huang for their valuable advice.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Richards, J.A. Remote Sensing Digital Image Analysis—An Introduction; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  2. Wang, Q.; Meng, Z.; Li, X. Locality adaptive discriminant analysis for spectral–spatial classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2077–2081. [Google Scholar] [CrossRef]
  3. Peng, X.; Feng, J.; Xiao, S.; Yau, W.; Zhou, J.T.; Yang, S. Structured autoencoders for subspace clustering. IEEE Trans. Image Process. 2018, 27, 5076–5086. [Google Scholar] [CrossRef] [PubMed]
  4. Peng, X.; Yu, Z.; Yi, Z.; Tang, H. Constructing the l2-graph for robust subspace learning and subspace clustering. IEEE Trans. Cybern. 2017, 47, 1053–1066. [Google Scholar] [CrossRef] [PubMed]
  5. Sun, X.; Yang, L.; Gao, L.; Zhang, B.; Li, S.; Li, J. Hyperspectral image clustering method based on artificial bee colony algorithm and Markov random fields. J. Appl. Remote Sens. 2015, 9. [Google Scholar] [CrossRef]
  6. Zhang, B.; Gao, J.; Gao, L.; Sun, X. Improvements in the ant colony optimization algorithm for endmember extraction from hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 522–530. [Google Scholar] [CrossRef]
  7. Zou, J.; Lan, J.; Shao, Y. A hierarchical sparsity unmixing method to address endmember variability in hyperspectral image. Remote Sens. Basel 2018, 10, 738. [Google Scholar] [CrossRef]
  8. Zhang, B.; Sun, X.; Gao, L.; Yang, L. Endmember extraction of hyperspectral remote sensing images based on the discrete particle swarm optimization algorithm. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4173–4176. [Google Scholar] [CrossRef]
  9. Ghamisi, P.; Couceiro, M.S.; Ferreira, N.M.; Kumar, L. Use of darwinian particle swarm optimization technique for the segmentation of remote sensing images. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 4295–4298. [Google Scholar]
  10. Jain, A.K. Data clustering: 50 years beyond k-means. Pattern Recogn. Lett. 2010, 31, 651–666. [Google Scholar] [CrossRef]
  11. Zhang, B.; Sun, X.; Gao, L.; Yang, L. An innovative method of endmember extraction of hyperspectral remote sensing image using elitist ant system (EAS). In Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, Xi’an, China, 23–25 August 2012; pp. 1625–1627. [Google Scholar]
  12. Zhang, B.; Sun, X.; Gao, L.; Yang, L. Endmember extraction of hyperspectral remote sensing images based on the ant colony optimization (ACO) algorithm. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2635–2646. [Google Scholar] [CrossRef]
  13. Karaboga, D.; Ozturk, C. A novel clustering approach: Artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2011, 11, 652–657. [Google Scholar] [CrossRef]
  14. Hancer, E.; Ozturk, C.; Karaboga, D. Artificial bee colony based image clustering method. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012. [Google Scholar]
  15. Bhandari, A.K.; Soni, V.; Kumar, A.; Singh, G.K. Artificial bee colony-based satellite image contrast and brightness enhancement technique using DWT-SVD. Int. J. Remote Sens. 2014, 35, 1601–1624. [Google Scholar] [CrossRef]
  16. Sun, X.; Yang, L.; Zhang, B.; Gao, L.; Gao, J. An endmember extraction method based on artificial bee colony algorithms for hyperspectral remote sensing images. Remote Sens. Basel 2015, 7, 16363–16383. [Google Scholar] [CrossRef]
  17. Yang, L.; Sun, X.; Peng, L.; Yao, X.; Chi, T. An agent-based artificial bee colony (ABC) algorithm for hyperspectral image endmember extraction in parallel. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4657–4664. [Google Scholar] [CrossRef]
  18. Tan, Y.; Ding, K. A survey on GPU-based implementation of swarm intelligence algorithms. IEEE Trans. Cybern. 2016, 46, 2028–2041. [Google Scholar] [CrossRef] [PubMed]
  19. Dawson, L.; Stewart, I.A. Accelerating ant colony optimization-based edge detection on the GPU using CUDA. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1736–1743. [Google Scholar]
  20. Kristiadi, A.; Pranowo, P.; Mudjihartono, P. Parallel particle swarm optimization for image segmentation. In Proceedings of the Second International Conference on Digital Enterprise and Information Systems (DEIS2013), Kuala Lumpur, Malaysia, 4–6 March 2013; pp. 129–135. [Google Scholar]
  21. Nascimento, J.M.P.; Bioucas-Dias, J.M.; Alves, J.M.R.; Silva, V.; Plaza, A. Parallel hyperspectral unmixing on GPUs. IEEE Geosci. Remote Sens. Lett. 2013, 11, 666–670. [Google Scholar] [CrossRef]
  22. Owens, J.D.; Houston, M.; Luebke, D.; Green, S.; Stone, J.E.; Phillips, J.C. GPU computing. Proc. IEEE 2008, 96, 879–899. [Google Scholar] [CrossRef]
  23. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57. [Google Scholar] [CrossRef]
  24. Bellifemine, F.; Caire, G.; Greenwood, D. Developing Multi-Agent System with JADE; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 2007. [Google Scholar]
  25. Xiang, W.; Lee, H.P. Ant colony intelligence in multi-agent dynamic manufacturing scheduling. Eng. Appl. Artif. Intell. 2008, 21, 73–85. [Google Scholar] [CrossRef]
  26. Jennings, N.R. On agent-based software engineering. Artif. Intell. 2000, 2, 277–296. [Google Scholar] [CrossRef]
  27. Leung, C.W.; Wong, T.N.; Mak, K.L.; Fung, R.Y.K. Integrated process planning and scheduling by an agent-based ant colony optimization. Comput. Ind. Eng. 2010, 59, 166–180. [Google Scholar] [CrossRef]
  28. Santos, J.; Embrechts, M. On the use of the adjusted rand index as a metric for evaluating supervised classification. In Lecture Notes in Computer Science; Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5769, pp. 175–184. [Google Scholar]
  29. Bilgin, G.; Erturk, S.; Yildirim, T. Unsupervised classification of hyperspectral-image data using fuzzy approaches that spatially exploit membership relations. IEEE Geosci. Remote Sens. Lett. 2008, 5, 673–677. [Google Scholar] [CrossRef]
  30. Plaza, A.; Martinez, P.; Perez, R.; Plaza, J. A quantitative and comparative analysis of endmember extraction algorithms from hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2004, 42, 650–663. [Google Scholar] [CrossRef]
Figure 1. The procedure of the artificial bee colony (ABC) algorithm.
Figure 2. Agents’ role design.
Figure 3. The overall workflow of agents’ behaviors for remote sensing (RS) clustering.
Figure 4. The major steps in generating a new solution. (a) The approach reported in [17] and (b) an improved version of (a). The steps labeled with * in (a) are redundant with (b).
Figure 5. The Pavia dataset: (a) a false color composite image and (b) the ground truth.
Figure 6. The Indian Pine dataset: (a) a false color composite image and (b) the ground truth.
Figure 7. The average computation time per iteration for different numbers of bees and computing nodes. (a) Dataset 1 and (b) Dataset 2.
Figure 8. The speedup and computational efficiency of the improved MA-based ABC framework.
Table 1. The Time Complexity of the Framework.

| Phase | Complexity | Description |
| --- | --- | --- |
| Initialization phase | (M × BN)/C | Generate BN solutions of M dimensions in parallel. |
| Employed bee phase | (T × BN × g(*))/C | Generate BN new solutions by neighborhood search, and calculate the objective function values in T iterations in parallel. |
| Onlooker bee phase | (T × BN × g(*))/C | Same as the employed bee phase. |
| Scout bee phase | BN | Calculate BN solutions’ fitness. |
| | ([T/K] × M × BN)/C | In the worst case, BN bees abandon original solutions every K iterations and generate new solutions of M dimensions in parallel. |
| Total | BN × (1 + ((1 + [T/K]) × M + 2 × T × g(*))/C) | |
Table 2. The Number of Transferred Solutions among Agents in One Iteration.

| Algorithm Phase | The Behaviors of Agents | The Former Framework in [17] | The Improved Framework in This Paper |
| --- | --- | --- | --- |
| Employed bee | The administrator agent passing solutions to each employed bee agent | 2BN | BN (1) |
| | Each employed bee agent passing a solution to a food source agent | BN | 0 |
| | Passing solutions to the administrator agent | BN | BN |
| Onlooker bee | The administrator agent passing solutions to onlooker bee agents | 2BN | BN + BN × p1 (2) |
| | Each onlooker bee agent passing a solution to a food source agent | BN | 0 |
| | Onlooker bee agents passing solutions to other bee agents | 0 | BN × p2 (3) |
| | Passing solutions to the administrator agent | BN | BN |
| Sum | | 8BN | 4BN + BN × (p1 + p2) |

Note: (1) In the employed bee phase, because each bee agent maintains a solution, only one neighborhood solution has to be transferred from the administrator agent to each bee agent; thus, the number of transferred solutions among agents is BN, where BN is the number of bees. (2) In the onlooker bee phase, two solutions (Xk and Xr) must be transferred to each bee agent. If one of the two solutions happens to be the solution maintained by the bee agent itself, there is no need to transfer it. The fraction p1 (0 ≤ p1 ≤ 1) describes the probability of this case: p1 = N/BN, where N is the number of bees whose maintained solution happens to be one of the two solutions in their onlooker’s neighborhood search. Thus, the total number of transferred solutions is BN + BN × p1. (3) For a bee agent, if its newly generated solution X is better than Xk, and Xk is another bee agent’s maintained solution, X must be transferred. The fraction p2 (0 ≤ p2 ≤ 1) describes the probability of this case; the number of transferred solutions under such circumstances is BN × p2.
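Summing the per-row counts in Table 2 gives the per-iteration totals for each framework; a small sketch, with BN, p1, and p2 as defined in the note above:

```python
def transfers_former(bn):
    # Former framework [17], summing the table rows:
    # 2BN + BN + BN (employed) + 2BN + BN + 0 + BN (onlooker) = 8BN.
    return 8 * bn

def transfers_improved(bn, p1, p2):
    # Improved framework, summing the table rows:
    # BN + 0 + BN (employed) + (BN + BN*p1) + 0 + BN*p2 + BN (onlooker).
    return 4 * bn + bn * (p1 + p2)
```

Even in the worst case (p1 = p2 = 1), the improved framework transfers 6BN solutions per iteration versus 8BN for the former framework, and the gap widens as p1 and p2 shrink.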
Table 3. The Hardware Configuration Used in the Experiments.

| Number of Cores | Number of Threads | CPU (GHz) | Bus Speed (MHz) | Multiplier | RAM (GB) |
| --- | --- | --- | --- | --- | --- |
| 4 | 4 | 3.4 | 100 | 30 | 8 |
Table 4. Contingency Table of the Clustering Result and Ground Truth Data.

| Clustering Result \ Ground Truth Data | Pixel Pair with the Same Class Label | Pixel Pair with Different Class Labels |
| --- | --- | --- |
| Pixel pair with the same class label | a | b |
| Pixel pair with different class labels | c | d |
Table 5. Accuracy Statistics of the Multiagent (MA)-based ABC Framework and the ABC-MRF-cluster in [5] for Dataset 1.

| | ARI | | PWSD | | NMI | | Purity | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MA-Based ABC | ABC-MRF-Cluster | MA-Based ABC | ABC-MRF-Cluster | MA-Based ABC | ABC-MRF-Cluster | MA-Based ABC | ABC-MRF-Cluster |
| Median Value | 0.3088 | 0.3189 | 5.9785 | 5.8356 | 0.5505 | 0.5595 | 0.5796 | 0.5916 |
| Mean Value | 0.3113 | 0.3212 | 5.8737 | 5.8034 | 0.5552 | 0.5590 | 0.5843 | 0.5891 |
| Standard Deviation | 0.0174 | 0.0229 | 0.5475 | 1.3014 | 0.0111 | 0.0112 | 0.0192 | 0.0213 |
| p-value | 0.0930 | | 0.8182 | | 0.2280 | | 0.4029 | |

Note: MRF, Markov random field.
Table 6. Accuracy Statistics of the MA-based ABC Framework and the ABC-MRF-cluster in [5] for Dataset 2.

| | ARI | | PWSD | | NMI | | Purity | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MA-Based ABC | ABC-MRF-Cluster | MA-Based ABC | ABC-MRF-Cluster | MA-Based ABC | ABC-MRF-Cluster | MA-Based ABC | ABC-MRF-Cluster |
| Median Value | 0.2164 | 0.2148 | 6.2587 | 6.0156 | 0.4458 | 0.4460 | 0.4013 | 0.3985 |
| Mean Value | 0.2210 | 0.2206 | 6.5233 | 6.5073 | 0.4510 | 0.4492 | 0.4118 | 0.4031 |
| Standard Deviation | 0.0132 | 0.0145 | 0.8342 | 1.3508 | 0.0119 | 0.0095 | 0.0323 | 0.0238 |
| p-value | 0.9439 | | 0.9750 | | 0.7061 | | 0.4996 | |
Table 7. Accuracy Statistics of the MA-based ABC Framework and ABC-EE in [16].

| | RMSE for Dataset 1 | | RMSE for Dataset 2 | |
| --- | --- | --- | --- | --- |
| | MA-Based ABC | ABC-EE | MA-Based ABC | ABC-EE |
| Median Value | 386.54 | 390.24 | 110.49 | 123.99 |
| Mean Value | 375.13 | 393.83 | 125.30 | 139.85 |
| Standard Deviation | 151.98 | 133.11 | 41.32 | 49.03 |
| p-value | 0.54 | | 0.19 | |

Note: EE, endmember extraction.
Table 8. Efficiency Statistics of the Improved MA-based ABC Framework and the Framework Proposed in [17] for Clustering (average computation time per iteration, in seconds).

| Computing Environment | Population of Bees | Dataset 1: Former Framework | Dataset 1: Improved Framework | Promotion Gap | Average Promotion Gap | Dataset 2: Former Framework | Dataset 2: Improved Framework | Promotion Gap | Average Promotion Gap |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 node | 10 | 24.43 | 23.36 | 4.58% | 5.83% | 5.59 | 5.34 | 4.75% | 6.57% |
| | 20 | 50.91 | 47.66 | 6.82% | | 11.58 | 10.69 | 8.32% | |
| | 40 | 96.07 | 90.56 | 6.08% | | 24.78 | 23.24 | 6.64% | |
| 2 nodes | 10 | 13.72 | 12.20 | 12.49% | 13.71% | 3.30 | 3.09 | 6.88% | 16.13% |
| | 20 | 27.79 | 24.56 | 13.13% | | 7.54 | 5.89 | 27.85% | |
| | 40 | 57.48 | 49.75 | 15.52% | | 13.31 | 11.72 | 13.65% | |
| 5 nodes | 10 | 9.40 | 7.16 | 31.24% | 21.72% | 1.54 | 1.14 | 34.82% | 25.91% |
| | 20 | 14.35 | 12.05 | 19.12% | | 3.16 | 2.46 | 28.71% | |
| | 40 | 27.54 | 23.99 | 14.79% | | 6.73 | 5.89 | 14.21% | |
| 10 nodes | 10 | 4.62 | 3.28 | 40.92% | 40.02% | 1.05 | 0.74 | 42.41% | 48.93% |
| | 20 | 9.98 | 7.24 | 37.77% | | 1.88 | 1.20 | 56.09% | |
| | 40 | 17.47 | 12.35 | 41.38% | | 3.47 | 2.34 | 48.30% | |
| 20 nodes | 10 | - | - | - | 50.29% | - | - | - | 56.91% |
| | 20 | 7.41 | 4.94 | 50.05% | | 1.35 | 0.85 | 59.45% | |
| | 40 | 9.54 | 6.34 | 50.52% | | 2.05 | 1.33 | 54.37% | |

Note: “-” indicates that no experiments were carried out under the corresponding circumstances.
Table 9. Statistical Values of the Improved MA-based ABC Framework’s Efficiency Criteria.

| Computing Environment | Dataset 1: Avg. Computation Time per Iteration per Bee (s) | Speedup | Computational Efficiency | Dataset 2: Avg. Computation Time per Iteration per Bee (s) | Speedup | Computational Efficiency |
| --- | --- | --- | --- | --- | --- | --- |
| 1 node | 2.3277588 | 1.00 | 1.00 | 0.54966283 | 1.00 | 1.00 |
| 2 nodes | 1.2306427 | 1.89 | 0.95 | 0.298756997 | 1.84 | 0.92 |
| 5 nodes | 0.6394509 | 3.64 | 0.73 | 0.128231976 | 4.29 | 0.86 |
| 10 nodes | 0.3329333 | 6.99 | 0.70 | 0.064166337 | 8.57 | 0.86 |
| 20 nodes | 0.2026791 | 11.48 | 0.57 | 0.037837861 | 14.53 | 0.73 |
Table 10. Efficiency Statistics of the Improved MA-based ABC Framework and the Former Framework Proposed in [17] for Endmember Extraction (computation time per iteration, in seconds).

| Computing Environment | Dataset 1: Former Framework | Dataset 1: Improved Framework | Promotion Gap | Dataset 2: Former Framework | Dataset 2: Improved Framework | Promotion Gap |
| --- | --- | --- | --- | --- | --- | --- |
| 1 node | 20.73 | 20.31 | 2.03% | 26.00 | 25.58 | 1.66% |
| 2 nodes | 10.39 | 10.18 | 2.08% | 13.36 | 13.18 | 1.40% |
| 5 nodes | 4.31 | 4.13 | 4.35% | 5.92 | 5.32 | 11.24% |
| 10 nodes | 2.26 | 2.06 | 9.31% | 3.01 | 2.68 | 12.19% |
| 20 nodes | 1.19 | 1.03 | 15.13% | 1.68 | 1.42 | 18.36% |

Yang, L.; Sun, X.; Li, Z. An Efficient Framework for Remote Sensing Parallel Processing: Integrating the Artificial Bee Colony Algorithm and Multiagent Technology. Remote Sens. 2019, 11, 152. https://doi.org/10.3390/rs11020152