Article

An Extended Membrane System with Monodirectional Tissue-like P Systems and Enhanced Particle Swarm Optimization for Data Clustering

1 School of Management Engineering, Shandong Jianzhu University, Jinan 250101, China
2 School of Business, Shandong Normal University, Jinan 250399, China
3 School of Computer Science, Qufu Normal University, Rizhao 276826, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(13), 7755; https://doi.org/10.3390/app13137755
Submission received: 5 June 2023 / Revised: 26 June 2023 / Accepted: 29 June 2023 / Published: 30 June 2023
(This article belongs to the Special Issue Membrane Computing and Its Applications)

Abstract: To establish a highly efficient P system for solving clustering problems and to overcome the computational incompleteness and implementation difficulty of P systems, this paper constructs a clustering membrane system, named ECPSO-MTP, that integrates an enhanced particle swarm optimization (PSO) based on environmental factors and crossover operators with the distributed, parallel computing model of monodirectional tissue-like P systems (MTP). In the proposed ECPSO-MTP, two kinds of evolution rules for objects are defined and introduced to rewrite and modify the velocity of objects in different elementary membranes. The velocity updating model uses environmental factors based on partitioning information and randomly replaces the global best to improve the clustering performance of ECPSO-MTP. The crossover operator for the position of objects acts on a given object and other objects with a crossover probability and is accomplished through the hybridization of the global best of elementary membranes to reject randomness. The membrane structure of ECPSO-MTP is abstracted as a network structure, and the information exchange and resource sharing between different elementary membranes are accomplished by evolutional symport rules with promoters for objects of MTP, including forward and backward communication rules. The evolution and communication mechanisms in ECPSO-MTP are executed repeatedly through iteration. Finally, comparison experiments conducted on eight benchmark clustering datasets from artificial datasets and the UCI Machine Learning Repository and eight image segmentation datasets from BSDS500 demonstrate the effectiveness of the proposed ECPSO-MTP.

1. Introduction

Membrane computing (MC) is a crucial area of nature-inspired computation that draws inspiration from the biological mechanisms and phenomena of living organisms and was designed and developed by Păun [1]. MC models abstract the construction, activation, cooperation, and function of living cells in tissues or organs and are also known as P systems or membrane systems. In general, a computational model in MC consists of three essential elements: the membrane structure, objects, and rules [2]. There are two main types of MC models: cell-like P systems (CPs) with a hierarchical structure, and tissue-like P systems (TPs) or neural-like P systems (NPs) with a net structure [3]. It has been proven that these classic systems and their corresponding variants are Turing complete [4].
TPs are inspired by the construction and communication methods of living cells in tissues or organs. Information interchange between cells is accomplished through channel states [5,6] or symport/antiport rules [7]. Therefore, the structure of TPs is naturally depicted by an undirected graph. The study of TPs is broadly divided into theoretical and application studies [8]. In theoretical studies, many kinds of TPs based on various biological motivations have been constructed and developed to expand the types of MC [9,10]. The analysis of the computing power and computational efficiency of these extended systems is an essential point in theoretical works [11,12]. TPs with evolutional symport/antiport rules have been presented to rewrite and modify objects in the communication process [13,14]. Inspired by the presence of biocatalysts in biological reactions, a variation of TPs with promoters has been proposed that recruits promoters for solving image processing problems [15]. A novel kind of TP, monodirectional tissue-like P systems (MTPs) [16], based on the biological fact that objects move in only one direction, has been constructed to enhance computing power in solving NP-complete problems [17,18,19].
The combination of evolutionary computation (EC) and TPs is a prominent feature of application studies and an important component of evolutionary membrane computing (EMC). Membrane-inspired evolutionary algorithms (MIEAs), also known as membrane algorithms (MAs), have become one of the mainstream approaches in EMC based on TPs [20]. MAs based on TPs have been merged with various heuristic algorithms, such as the genetic algorithm (GA) [21], differential evolution (DE) [22] and its variations [23], particle swarm optimization (PSO) [24] and its variations [25], ant colony optimization (ACO) [26], and the artificial bee colony algorithm (ABC) [27], to comprehensively utilize the strengths of heuristic algorithms, namely strong practicality and high robustness, together with the low complexity and effectiveness of TPs. MAs based on TPs have been successfully applied to solve various real-world problems [28,29,30,31].
PSO is a type of stochastic optimization approach that was originally designed by Kennedy and Eberhart [32]. The computing model is based on the collective behavior of flocking birds, and the trajectory of each particle is adaptively adjusted according to individual and social experiences to balance the exploration and exploitation of the algorithm [33]. Compared to other swarm intelligence (SI) approaches, PSO has fewer adjustable parameters and a simple implementation, which shows great potential for solving practical problems [34,35,36]. However, under the basic framework of SI, PSO can also easily fall into local optima and exhibit premature convergence [37]. Therefore, several variants of PSO have been designed and developed to improve its search performance [38,39].
Considerable research has been conducted on adjusting the parameters of PSO through various optimization strategies, mainly the inertia weight [40,41] and the acceleration coefficients governing self-cognition and social cognition [42]. Several modified position and velocity updating models have been constructed to enhance the local and global optimization capacities of PSO, such as the crossover operator [43,44], which includes the multi-crossover operator [45] and the vertical crossover operator [46], and the mutation operator [47,48]. Furthermore, improved PSO variants integrate various learning strategies designed to alter the exemplar particle in the population, including adaptive comprehensive learning strategies [49,50], adaptive learning strategies [51,52], and reinforcement learning strategies [53,54]. Some studies on PSO have utilized the multi-population strategy to split the whole population of particles into multiple smaller subpopulations to avoid premature convergence [55,56]; communication between different subpopulations is allowed to realize information exchange and sharing [57,58]. To utilize the advantages of both approaches, various variants of PSO that combine PSO with other heuristic algorithms or optimization strategies have been proposed to enhance performance in solving engineering application problems [59,60,61].
Based on the above studies, MAs based on TPs provide a new way of enhancing the performance of PSO, which distinguishes them from previous improvements to PSO. Furthermore, the combination of PSO and TPs is also a new attempt to overcome limitations such as incomplete basic operations and implementation complexities in the original model. The distributed and parallel processing properties of P systems are introduced to address the exponentially increasing complexity of PSO as the problem scale grows. Under the computing framework of TPs, the particle population is divided into multiple subpopulations with a network structure, and communication between different subpopulations is achieved through the symport/antiport rules of TPs. Unlike previous related work on MAs, this paper modifies the evolutionary mechanism of PSO to better suit the computing framework of MTP. Each object in an elementary membrane has two evolution rules for updating velocity, which are selected randomly to avoid premature convergence. Additionally, two evolution rules based on classic PSO and the crossover operator are defined to update the position of objects in different elementary membranes in order to balance the exploration and exploitation of the system. The exchange of information based on the global best in a single direction is achieved through the specific communication mechanism of MTP, which distinguishes this work from related works on MAs based on TPs.
This work focuses on the combination of extended TPs with PSO for solving clustering problems. A novel variant of MIEAs or MAs, called ECPSO-MTP, is proposed and designed by integrating MTP and an improved PSO based on environmental factors and crossover operators. The proposed ECPSO-MTP combines the evolutionary mechanism of the modified PSO and the computation framework of MTP to establish a highly efficient P system for solving clustering problems. In ECPSO-MTP, two types of evolution rules are defined and introduced to rewrite and modify objects in different elementary membranes based on environmental factors and crossover operators. The environmental factors utilize partitioning information to randomly replace the global best of objects and participate in the evolution process to improve the clustering performance of ECPSO-MTP. The crossover operator is achieved by hybridizing the global best of elementary membranes to reject randomness. The information exchange and resource sharing between different elementary membranes in ECPSO-MTP are accomplished through the evolutional symport rules with promoters of MTP, including forward and backward communication rules. Furthermore, eight benchmark clustering datasets and eight image datasets are employed in comparison experiments to verify the effectiveness of ECPSO-MTP against other existing approaches. The results of the comparison experiments demonstrate the efficiency of the proposed ECPSO-MTP.
The remainder of this paper is organized as follows: Section 2 provides a brief introduction to the basic framework of TPs with evolutional symport/antiport and promoters and of MTPs with evolutional symport and promoters. Section 3 details the evolutionary mechanism of PSO based on environmental factors and crossover operators. Section 4 presents a detailed description of the proposed ECPSO-MTP, including its general framework, evolution rules, communication rules, and complexity analysis. Section 5 discusses and analyzes the experimental results of the proposed ECPSO-MTP on eight clustering datasets in comparison with five existing methods. Section 6 illustrates the performance of the proposed ECPSO-MTP based on comparison results of three clustering approaches on eight segmentation images. Finally, Section 7 outlines the conclusions of this paper and recommendations for future work.

2. Tissue-like P Systems

2.1. TPs with Evolutional Symport/Antiport and Promoters

As a variant of TPs, TPs with evolutional symport/antiport and promoters are inspired by facts of biological reactions [62]. In classic TPs, substances or objects are transmitted through the execution of communication rules in different cells or regions, and communication between two given membranes is realized with the help of symport/antiport rules. In the presence of certain chemical substances, rules with promoters are executed to dynamically change the working manner of the computing model. The purpose of this extended P system is to modify and rewrite objects during the communication process within the framework of evolutionary mechanisms. Therefore, a recognizer TP with evolutional symport/antiport and promoters is defined as the following tuple [9]:
Π = (Γ, ε, μ, ω_1, …, ω_m, R, σ_in, σ_out)
where
(1) Γ is a nonempty finite alphabet whose elements are called objects;
(2) ε ⊆ Γ is a finite set of objects that are initially placed in the environment;
(3) μ is the membrane structure of the P system, which consists of m membranes;
(4) ω_1, …, ω_m are finite multisets of objects over Γ, initially placed in the m membranes, where ω_i ⊆ Γ, for 1 ≤ i ≤ m;
(5) R is a finite set of communication rules of the system, which contains the following two kinds of rules with promoters.
Evolutional symport rules with promoters: [u|p]_i [ ]_j → [ ]_i [u′]_j, where 0 ≤ i ≠ j ≤ m, u ∈ Γ+, u′ ∈ Γ*, p ∈ Γ, |u| > 0. Such a rule can only be applied to a configuration in which both the multiset of objects u and the promoter object p appear in membrane i. When an evolutional symport rule with promoters associated with membranes i and j is applied, the multiset u, in the presence of the promoter p in membrane i, is sent to membrane j and simultaneously evolved into the new objects u′;
Evolutional antiport rules with promoters: [u|p]_i [v]_j → [v′]_i [u′]_j, where 0 ≤ i ≠ j ≤ m, u, v ∈ Γ+, u′, v′ ∈ Γ*, p ∈ Γ, |u|, |v| > 0. Such a rule can only be applied to a configuration in which both u and p appear in membrane i and membrane j contains v. When an evolutional antiport rule with promoters associated with i and j is applied, u, in the presence of p in i, is sent to j and evolved into u′, while at the same time v in j is sent to i and evolved into the new objects v′;
(6) σ_in represents the input membrane or region, where σ_in ∈ {σ_0, σ_1, …, σ_m};
(7) σ_out represents the output membrane or region, where σ_out ∈ {σ_0, σ_1, …, σ_m}.
In TPs with evolutional symport/antiport and promoters, unlike the other objects involved in the computation, the promoter objects do not directly take part in the execution of the evolutional symport/antiport rules and are not modified or changed during the computation. They only alter the manner in which the evolutional symport/antiport rules are executed: in the presence of promoter objects, a greater number of the corresponding rules can be applied within a fixed time, and once the promoters are absent, the corresponding rules cease to be applicable. Overall, TPs with evolutional symport/antiport and promoters work in a maximally parallel manner, and each membrane of the system also works in a maximally parallel way.
Specifically, evolution rules for objects are introduced into the computing model of TPs, based on the biological fact that molecules or substances can be modified and changed inside membranes [63]. The evolution of objects is achieved through the execution of rewriting rules of the form [u → v]_i, where u, v ∈ Γ+. Such a rule can only be applied to a configuration in which the multiset of objects u appears in membrane i. When it is applied, the objects u in i evolve into the objects v; note that the position of u remains unchanged during this evolution.
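For illustration only, the following Python sketch (not part of the original system) shows how a single evolutional symport rule with a promoter can be simulated on multisets represented as counters; the membrane labels and object names are hypothetical.

```python
from collections import Counter

def apply_evolutional_symport(membranes, i, j, u, u_new, p):
    """Minimal sketch (not the authors' implementation): apply one
    evolutional symport rule with promoter  [u|p]_i [ ]_j -> [ ]_i [u']_j.
    `membranes` maps a membrane label to a Counter (multiset) of objects."""
    src, dst = membranes[i], membranes[j]
    # The rule is enabled only if the promoter p and the whole multiset u
    # are present in membrane i; the promoter itself is never consumed.
    if src[p] == 0 or any(src[obj] < n for obj, n in Counter(u).items()):
        return False
    # Remove u from membrane i and add the evolved multiset u' to membrane j.
    for obj, n in Counter(u).items():
        src[obj] -= n
    dst.update(Counter(u_new))
    return True

# Usage: move 'a' from membrane 1 to membrane 2, rewriting it to 'b',
# enabled by promoter 'p' in membrane 1.
membranes = {1: Counter("aap"), 2: Counter()}
apply_evolutional_symport(membranes, 1, 2, "a", "b", "p")
print(membranes)  # {1: Counter({'a': 1, 'p': 1}), 2: Counter({'b': 1})}
```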

2.2. MTPs with Evolutional Symport and Promoters

MTPs are an attractive variant of TPs based on the biological phenomenon that molecules or substances move from regions of high concentration to regions of low concentration in membranes [16]. The computational power of MTPs has been proven under a flat maximally parallel mode [18]. Furthermore, restrictive conditions are imposed on the evolutional symport rules with promoters in MTPs: for any two given membranes or regions, only evolutional symport rules with promoters are permitted, so the movement and modification of objects is realized in one direction only.
The evolutional symport rules with promoters in MTPs are of the form [u|p]_i [λ]_j → [λ]_i [v]_j or [λ|p]_i [u]_j → [v]_i [λ]_j, where 0 ≤ i ≠ j ≤ m, u, v ∈ Γ+, p ∈ Γ, |u| > 0. The first kind of rule is applicable to a configuration in which both u and p appear in membrane i; when it is applied, in the presence of p, the objects u in i are sent to j and rewritten as v. The second kind of rule is applicable to a configuration in which p appears in i and u appears in j; when it is applied, in the presence of p in i, the objects u in j are sent to i and rewritten as v.

3. Improved Particle Swarm Optimization

3.1. Environmental Factors

From a biological perspective, the social behavior of birds is affected by their living environment. Specifically, the flying space of birds is constrained by environmental factors within a certain period. Additionally, the characteristics of birds can also affect their living environment, resulting in differences between individuals. To enhance the diversity of species, environmental factors are introduced to the classic model of PSO [59]. At each iteration t + 1 , the new velocity V i ( t + 1 ) of particle i is defined by (1) in the following:
V_i(t+1) = ω V_i(t) + c_1 r_1 (X_i^lbest(t) − X_i(t)) + c_2 r_2 (X^gbest(t) − X_i(t)) + c_3 r_3 (X_i^ebest(t) − X_i(t)),   (1)
where t is the iteration counter, ω represents the inertia weight, c_1 and c_2 are the acceleration coefficients based on self-cognition and social cognition, and r_1 and r_2 are two uniform random numbers. X_i^lbest(t) is the local best of particle i at iteration t, and X^gbest(t) is the global best at iteration t. Specifically, c_3 is a positive regulation constant with a uniform random number r_3, and X_i^ebest(t) is the environmental factor of particle i at t, which is defined by (2) in the following:
X_ik^ebest(t) = (Σ_{j=1}^{n_k} y_{k,j}) / n_k,   (2)
where X_i^ebest(t) consists of a finite set of K cluster centers for particle i at t, i.e., X_i^ebest(t) = {X_i1^ebest(t), X_i2^ebest(t), …, X_iK^ebest(t)}, and K is the number of clusters. y_{k,j} is the feature vector of the j-th data point belonging to cluster k, for 1 ≤ k ≤ K, and n_k is the total number of data points assigned to cluster k.
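For concreteness, a minimal NumPy sketch of Equations (1) and (2) is given below; it is an assumed illustration rather than the authors' code. The particle position is taken to be a (K, D) array of cluster centers, the partition `labels`, the coefficient values in `c`, and the non-emptiness of every cluster are assumptions.

```python
import numpy as np

def environmental_factor(data, labels, K):
    """Eq. (2): the k-th component of X_i^ebest is the mean of the data
    points currently assigned to cluster k (assumes every cluster is non-empty)."""
    return np.stack([data[labels == k].mean(axis=0) for k in range(K)])

def velocity_update(v, x, lbest, gbest, ebest, w, c=(2.0, 2.0, 1.0), rng=np.random):
    """Eq. (1): classic PSO terms plus an environmental-factor term."""
    c1, c2, c3 = c
    r1, r2, r3 = rng.random(3)
    return (w * v
            + c1 * r1 * (lbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (ebest - x))
```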

3.2. Crossover Operator

In the standard model of PSO, the trajectory adjustment of particles is based on their local and global experiences. The local and global best are introduced to regulate motion, resulting in rapid information propagation throughout the particle population. However, this rapid propagation causes the standard PSO to become trapped in local optima. Therefore, a novel variant of the updating method that integrates PSO and GA has been proposed to improve performance. An arithmetic crossover operator is introduced into the generation process of the particle population to inject randomness by adding a weighted position between a random particle and a given particle [43]. Specifically, the crossover operator only takes place between a given particle i and a random particle r with a crossover probability φ; at t + 1, the new position X_i(t+1) of i is defined by (3) in the following:
X_i(t+1) = X_i(t) + γ(t) X_r(t+1),   (3)
where γ(t) is the weight parameter γ at t, and X_r(t) is the position of a randomly selected particle r from the particle population at t, with r ≠ i. Additionally, at t + 1, the crossover probability φ is determined by (4) in the following:
φ(t+1) = θ_φ φ(t),   (4)
where θ_φ is the damping parameter. At t + 1, the weight parameter γ is determined by (5) in the following:
γ(t+1) = θ_γ γ(t),   (5)
where θ_γ is the damping factor. In particular, the initial values of the crossover probability φ and the weight parameter γ are typically set to 1.
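A minimal sketch of this crossover step is shown below (an assumed illustration, not the authors' code); `positions` is taken to be a list or array of particle positions, and the damping updates follow Equations (4) and (5).

```python
import numpy as np

def crossover_step(positions, i, phi, gamma, theta_phi, theta_gamma, rng=np.random):
    """Eqs. (3)-(5): with probability phi, blend particle i with a randomly
    chosen partner r != i, then damp the crossover probability and weight."""
    n = len(positions)
    if rng.random() < phi:
        r = rng.choice([k for k in range(n) if k != i])
        positions[i] = positions[i] + gamma * positions[r]   # Eq. (3)
    phi *= theta_phi        # Eq. (4)
    gamma *= theta_gamma    # Eq. (5)
    return positions, phi, gamma
```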

4. The Proposed ECPSO-MTP

In this section, a novel clustering membrane system, simply named ECPSO-MTP, is designed and developed that combines the computation framework of MTP with the evolutionary mechanism of PSO based on environmental factors and a crossover operator. The evolution mechanism of the system evolves objects in different membranes through the velocity updating model based on environmental factors and the position updating model based on the crossover operator. The communication mechanism is accomplished through the evolutional symport rules with promoters of MTP to realize the transport and exchange of objects between membranes. Thus, the computing model of ECPSO-MTP consists of two mechanisms for objects, i.e., the evolution mechanism and the communication mechanism. More details about the proposed ECPSO-MTP are given as follows.

4.1. The Common Framework of ECPSO-MTP

The membrane system of the proposed ECPSO-MTP is defined and described as a tuple, which is completely depicted in the following:
Π = (Γ, ε, μ, ω_1, …, ω_m, R, R′, σ_in, σ_out),
where
(1) Γ is a non-empty, finite alphabet of objects;
(2) ε ⊆ Γ is a finite set of objects that are initially placed in the input membrane;
(3) μ is the membrane structure of ECPSO-MTP, which contains m + 4 membranes;
(4) ω_1, …, ω_m are finite multisets of objects that are initially placed in the m elementary membranes, where ω_i ⊆ Γ, for 1 ≤ i ≤ m;
(5) R is the finite set of evolution rules for objects in the proposed ECPSO-MTP, where R = {R_1, R_2, …, R_m, R_{m+2}}. R_i is a finite subset of R associated with membrane i, for 1 ≤ i ≤ m or i = m + 2, and is of the form R_i: [u → u′]_i, for u, u′ ∈ Γ+. When such an evolution rule R_i in membrane i is applied, the objects u in i evolve into u′;
(6) R′ is the finite set of communication rules for objects in the proposed ECPSO-MTP, where R′ = {R′_1, R′_2, …, R′_{m+2}}. R′_i is a finite subset of R′ associated with membrane i that uses the evolutional symport rules with promoters of MTP. Specifically, R′_{i,j} ∈ R′_i and is of the form R′_{i,j}: [u|p]_i [λ]_j → [λ]_i [u′]_j, or R′_{i,j}: [λ|p]_i [u]_j → [u′]_i [λ]_j, where 1 ≤ i ≠ j ≤ m + 2, u ∈ Γ+, u′ ∈ Γ*, p ∈ Γ, |u| > 0;
(7) σ_in is the input membrane of the proposed ECPSO-MTP;
(8) σ_out is the output membrane of the proposed ECPSO-MTP. When the computation of this extended P system is completed, the objects in the output membrane σ_out are sent to the environment σ_0 and regarded as the final computational result of the system. The membrane structure of the proposed ECPSO-MTP is graphically depicted in Figure 1.
The membrane structure of ECPSO-MTP can be abstracted as a hierarchical structure, which is depicted in Figure 1. This extended P system contains m + 4 membranes, including an input membrane σ_in and an output membrane σ_out, while the others are labeled from 1 to m + 2. Specifically, a membrane that contains no other membranes is called an elementary membrane; in this case, membranes σ_1 to σ_{m+2} are elementary membranes. Additionally, σ_{m+2} is also called the comparison membrane, and the best object selected from σ_1 to σ_{m+1} is stored as the best object of σ_{m+2}.

4.2. Evolution Rules

In ECPSO-MTP, two types of evolution rules are defined and described for objects in different elementary membranes, namely the elementary membranes σ_o (1 ≤ o ≤ m) and the elementary membrane σ_{m+1}, respectively. The evolution rules are used to evolve objects, and the evolutionary mechanism of the modified PSO with environmental factors and crossover operators is adopted to accomplish this evolution in the different membranes. Specifically, an object u_i is composed of two essential components, the velocity V_i and the position X_i, where u_i = {V_i, X_i}.
Two types of evolution rules, based on the velocity updating model of classic PSO and on the modified PSO with environmental factors, are introduced to modify the velocity of objects in the elementary membranes. Thus, the velocity V_i(t+1) of u_i in σ_o at t + 1 is defined by (6) in the following:
V_i(t+1) = ω V_i(t) + c_1 r_1 (X_i^lbest(t) − X_i(t)) + c_2 r_2 (X_o^gbest(t) − X_i(t)),   (6)
Another evolution rule is defined by (7) in the following:
V_i(t+1) = ω V_i(t) + c_1 r_1 (X_i^lbest(t) − X_i(t)) + c_2 r_2 (X_i^ebest(t) − X_i(t)),   (7)
And the inertia weight ω is determined by (8) in the following:
ω(t) = ω_min + (ω_max − ω_min)(t/t_max),   (8)
where ω_min and ω_max represent the minimum and maximum values of ω, and t_max denotes the maximum number of iterations. c_1 and c_2 are typically set to 2. X_i^lbest(t) represents the local best of u_i and is also denoted by u_i^lbest(t). X_o^gbest(t) represents the global best of the objects in σ_o and is also denoted by u_o^gbest(t). X_i^ebest(t) is the environmental factor of u_i and is determined by Equation (2).
The position updating model of the classic PSO is employed to modify the position of objects, and X_i(t+1) of u_i in σ_o is defined by (9) in the following:
X_i(t+1) = X_i(t) + V_i(t+1),   (9)
Then, the local best u_i^lbest(t+1) of u_i at t + 1 is described by (10) in the following:
u_i^lbest(t+1) = X_i(t+1), if f(X_i(t+1)) < f(X_i^lbest(t)); X_i^lbest(t), otherwise,   (10)
where f(·) represents the fitness value given by the fitness function. And the global best u_o^gbest(t+1) of σ_o at t + 1 is described by (11) in the following:
u_o^gbest(t+1) = X_i^lbest(t+1), if f(X_i^lbest(t+1)) < f(X_o^gbest(t)); X_o^gbest(t), otherwise,   (11)
In addition, the position updating model based on the crossover operator is adopted to modify the position of objects in σ_{m+1} using Equations (3)–(5), and the global best u_{m+1}^gbest(t+1) of σ_{m+1} at t + 1 is the best position in σ_{m+1}.
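As an illustration of how these rules might be realized, the following Python sketch applies a randomly selected velocity rule (Equation (6) or (7)) to each object of an elementary membrane and then updates positions and bests via Equations (8)–(11). The data structures, the 50/50 rule-selection probability, and the empty-cluster fallback are assumptions, not the published implementation.

```python
import numpy as np

def assign_labels(data, centers):
    """Assign each data point to its nearest candidate cluster center."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def evolve_membrane(objs, gbest, data, fitness, t, t_max,
                    w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, rng=np.random):
    """Evolution rules of an elementary membrane sigma_o: each object holds
    velocity 'v', position 'x' (a set of K centers) and local best 'lbest';
    one of the two velocity rules (Eq. (6) or Eq. (7)) is chosen at random."""
    w = w_min + (w_max - w_min) * (t / t_max)              # Eq. (8)
    for obj in objs:
        r1, r2 = rng.random(2)
        if rng.random() < 0.5:                             # rule based on Eq. (6)
            social = gbest
        else:                                              # rule based on Eq. (7): environmental factor
            labels = assign_labels(data, obj["x"])
            social = np.stack([data[labels == k].mean(axis=0) if np.any(labels == k)
                               else obj["x"][k] for k in range(len(obj["x"]))])
        obj["v"] = (w * obj["v"]
                    + c1 * r1 * (obj["lbest"] - obj["x"])
                    + c2 * r2 * (social - obj["x"]))
        obj["x"] = obj["x"] + obj["v"]                     # Eq. (9)
        if fitness(obj["x"]) < fitness(obj["lbest"]):      # Eq. (10)
            obj["lbest"] = obj["x"].copy()
        if fitness(obj["lbest"]) < fitness(gbest):         # Eq. (11)
            gbest = obj["lbest"].copy()
    return gbest
```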

4.3. Communication Rules

In ECPSO-MTP, the evolutional symport rules with promoters of MTP are introduced to facilitate communication between elementary membranes. Two kinds of communication rules for objects, distinguished by the direction in which information is conveyed, namely forward and backward communication rules, are defined and adopted to enable communication between different elementary membranes. More details about these communication rules are given as follows.

4.3.1. Forward Communication Rules

Under the hierarchical structure of the proposed ECPSO-MTP, forward communication rules are applied to establish transitive relationships between elementary membranes. There are two types of communication rules in the forward direction: from σ_o (1 ≤ o ≤ m) to σ_{m+1} and from σ_{m+1} to σ_{m+2}. Note that there is no interaction between elementary membranes at the same level of the membrane structure.
The forward communication rules from σ_o to σ_{m+1} are of the form R′_{o,m+1}: [u_o^gbest(t)|p]_o [λ]_{m+1} → [λ]_o [u_o(t)]_{m+1}, for 1 ≤ o ≤ m. Such a rule can only be applied to a configuration in which both u_o^gbest(t) and p appear in σ_o at t. When this communication rule associated with σ_o and σ_{m+1} is applied, in the presence of p, u_o^gbest(t) in σ_o is sent to σ_{m+1} and evolves into u_o(t). Therefore, the total number of objects transferred from σ_1, …, σ_m into σ_{m+1} is m.
The forward communication rules from σ_{m+1} to σ_{m+2} are of the form R′_{m+1,m+2}: [u_o^lbest(t)|p]_{m+1} [λ]_{m+2} → [λ]_{m+1} [u_o(t)]_{m+2}, for 1 ≤ o ≤ m. Such a rule can only be applied to a configuration in which both u_o^lbest(t) and p appear in σ_{m+1} at t. When this communication rule associated with σ_{m+1} and σ_{m+2} is applied, in the presence of p, u_o^lbest(t) in σ_{m+1} is sent to σ_{m+2} and rewritten as u_o(t). Thus, the best object from σ_{m+1} is selected and stored as the global best u_{m+2}^gbest(t) of σ_{m+2} at t.

4.3.2. Backward Communication Rules

Under the hierarchical structure of the proposed ECPSO-MTP, backward communication rules based on the backward direction of information transfer are applied between σ_o (1 ≤ o ≤ m) and σ_{m+2}. The backward communication rules from σ_o to σ_{m+2} are of the form R′_{o,m+2}: [λ|q]_o [u_{m+2}^gbest(t)]_{m+2} → [u_o^gbest(t)]_o [λ]_{m+2}, for 1 ≤ o ≤ m. Such a rule can only be applied to a configuration in which q and u_{m+2}^gbest(t) are present in σ_o and σ_{m+2}, respectively, at t; note that q ≠ p. When this communication rule associated with σ_o and σ_{m+2} is applied, in the presence of q in σ_o, u_{m+2}^gbest(t) in σ_{m+2} is sent to σ_o and evolves into u_o^gbest(t).
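The communication mechanism can be mimicked in plain Python as below (an illustrative sketch under assumed data structures, not the authors' implementation): each elementary membrane is a dictionary holding its objects and its global best, and the forward/backward rules move or overwrite the global bests when the promoter conditions p and q hold.

```python
def forward_to_m1(membranes, m, t, t_max):
    """Forward rules R'_{o,m+1}: if promoter p = (t < t_max) holds, send each
    membrane's global best u_o^gbest into sigma_{m+1} as a plain object u_o."""
    if t < t_max:                                   # promoter p
        membranes["m+1"]["objects"] = [membranes[o]["gbest"] for o in range(1, m + 1)]

def forward_to_m2(membranes, fitness):
    """Forward rule R'_{m+1,m+2}: the best object of sigma_{m+1} becomes the
    global best u_{m+2}^gbest stored in the comparison membrane sigma_{m+2}."""
    membranes["m+2"]["gbest"] = min(membranes["m+1"]["objects"], key=fitness)

def backward_from_m2(membranes, m, fitness):
    """Backward rules R'_{o,m+2}: if promoter q holds (the comparison membrane's
    best is better), overwrite each membrane's global best with it."""
    best = membranes["m+2"]["gbest"]
    for o in range(1, m + 1):
        if fitness(best) < fitness(membranes[o]["gbest"]):   # promoter q
            membranes[o]["gbest"] = best
```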

4.4. Computation of Proposed ECPSO-MTP

(1) Initialization
<1> Parameters initialized
In the proposed ECPSO-MTP, the values of the adjusting parameters are preset, including N and n_o, where N is the total number of objects in the system and n_o is the number of objects in σ_o (1 ≤ o ≤ m). The static membrane structure of the proposed ECPSO-MTP is shown in more detail in Figure 1. All objects involved in the computational procedure of the system are located in σ_in during initialization;
<2> Velocity and position initialized
To initialize the velocity and position of all objects in σ_in, a random generation strategy over the search space is employed. Then these objects are sent to the elementary membranes σ_1 to σ_m, respectively, where n_o = N/m;
<3> Local and global best updated
The local best u_i^lbest (1 ≤ i ≤ n_o) and the global best u_o^gbest in σ_o are generated by Equations (10) and (11);
(2) Evolution mechanism for σ_1 to σ_m
<1> Velocity and position updated
The evolution rules for updating the velocity of all objects in σ_o, based on classic PSO and the modified PSO with environmental factors, are randomly selected and applied according to Equations (6)–(8), and the position update is determined by Equation (9);
<2> Local and global best updated
The local best u_i^lbest (1 ≤ i ≤ n_o) and the global best u_o^gbest in σ_o are updated by Equations (10) and (11);
(3) Forward communication mechanism from σ_o to σ_{m+1}
In the presence of both p and u_o^gbest in σ_o, the first type of forward communication rule is employed to transmit u_o^gbest from σ_o to σ_{m+1} and evolve it into u_o in σ_{m+1}. Specifically, the promoter object p can be described as the logical judgement p = {t < t_max};
(4) Evolution mechanism for σ_{m+1}
<1> Position updated
The evolution rules in σ_{m+1} based on the crossover operator are employed to update the objects in σ_{m+1} according to Equations (3)–(5);
<2> Local best updated
Update u_o^lbest for all objects in σ_{m+1} according to Equation (10);
(5) Forward communication mechanism from σ_{m+1} to σ_{m+2}
In the presence of both p and u_o^lbest in σ_{m+1}, the second type of forward communication rule is employed to transmit u_o^lbest from σ_{m+1} to σ_{m+2} and evolve it into u_o in σ_{m+2}. The best object from σ_{m+1} is regarded as the global best u_{m+2}^gbest of σ_{m+2}. Then u_{m+2}^gbest is sent to σ_out as the current computed result of the system;
(6) Backward communication mechanism from σ_o to σ_{m+2}
In the presence of q in σ_o and u_{m+2}^gbest in σ_{m+2}, the backward communication rules from σ_o to σ_{m+2} are utilized to transmit u_{m+2}^gbest from σ_{m+2} to σ_o and evolve it into u_o^gbest in σ_o. Specifically, the promoter object q can be described as the logical judgement q = {f(u_{m+2}^gbest) < f(u_o^gbest)}. The communication relationships within the system are graphically depicted in Figure 2, with black arrows indicating the direction of communication;
(7) Termination and output
The evolution-communication mechanism in ECPSO-MTP is executed repeatedly through iteration until the maximum number of iterations is reached. When the system stops, the last object u_{m+2}^gbest in σ_out is transmitted to the environment σ_0 and represents the final computed result of the proposed ECPSO-MTP. The pseudocode for the computation of the proposed ECPSO-MTP is depicted in Algorithm 1 as follows.
Algorithm 1 ECPSO-MTP
Input: N, t_max, c_1, c_2, r_1, r_2, ω_min, ω_max, θ_φ, θ_γ, m;
(1) Initialization
 <1> Velocity and position initialized
  for i = 1 to N
   Velocity of object u_i: V_i = rand(s_l, s_u);
   Position of object u_i: X_i = rand(s_l, s_u);
  end
 <2> Local and global best updated
  for o = 1 to m
   for i = 1 to n_o
    Update local best u_i^lbest of object u_i according to Equation (10);
    Update global best u_o^gbest of σ_o according to Equation (11);
   end
  end
(2) Evolution mechanism for σ_1 to σ_m
 <1> Velocity and position updated
  for o = 1 to m
   for i = 1 to n_o
    Update velocity V_i of object u_i based on a random selection strategy according to Equations (6)–(8);
    Update position X_i of object u_i according to Equation (9);
   end
  end
 <2> Local and global best updated
(3) Forward communication mechanism from σ_o to σ_{m+1}
 if p = {t < t_max}
  for o = 1 to m
   R′_{o,m+1}: [u_o^gbest(t)|p]_o [λ]_{m+1} → [λ]_o [u_o(t)]_{m+1};
  end
 end
(4) Evolution mechanism for σ_{m+1}
 <1> Position updated
  Update position X_i of object u_i in σ_{m+1} according to Equations (3)–(5);
 <2> Local best updated
(5) Forward communication mechanism from σ_{m+1} to σ_{m+2}
 if p = {t < t_max}
  R′_{m+1,m+2}: [u_o^lbest(t)|p]_{m+1} [λ]_{m+2} → [λ]_{m+1} [u_o(t)]_{m+2};
 end
(6) Backward communication mechanism from σ_o to σ_{m+2}
 if q = {f(u_{m+2}^gbest) < f(u_o^gbest)}
  for o = 1 to m
   R′_{o,m+2}: [λ|q]_o [u_{m+2}^gbest(t)]_{m+2} → [u_o^gbest(t)]_o [λ]_{m+2};
  end
 end
(7) Termination and output
 if t > t_max
  Best position of P system: u_{m+2}^gbest;
  Best fitness of P system: f(u_{m+2}^gbest);
 end
Output: best position of P system; best fitness of P system;
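Putting the pieces together, a compact driver loop for Algorithm 1 might look as follows. This is a hypothetical sketch that reuses the `evolve_membrane` helper sketched in Section 4.2, with assumed data structures and default parameter values; the forward communication and the crossover evolution in σ_{m+1} are condensed into a best-object selection step, so it is not the published implementation.

```python
import numpy as np

def ecpso_mtp(data, K, fitness, N=40, m=4, t_max=100, rng=np.random):
    """Hypothetical end-to-end loop of Algorithm 1 on an (M, D) data array;
    `fitness` is e.g. the MSE of Equation (12)."""
    D = data.shape[1]
    lo, hi = data.min(axis=0), data.max(axis=0)
    # (1) Initialization: N objects (each a set of K candidate centers),
    # split evenly over the m elementary membranes sigma_1..sigma_m.
    objs = [{"x": rng.uniform(lo, hi, (K, D)), "v": np.zeros((K, D))} for _ in range(N)]
    for obj in objs:
        obj["lbest"] = obj["x"].copy()
    membranes = {o: {"objects": objs[o - 1::m]} for o in range(1, m + 1)}
    for o in membranes:
        membranes[o]["gbest"] = min((ob["lbest"] for ob in membranes[o]["objects"]),
                                    key=fitness).copy()
    best = min((membranes[o]["gbest"] for o in membranes), key=fitness).copy()
    for t in range(1, t_max + 1):
        # (2) Evolution in each sigma_o (velocity/position rules, Eqs. (6)-(11)).
        for o in membranes:
            membranes[o]["gbest"] = evolve_membrane(
                membranes[o]["objects"], membranes[o]["gbest"], data, fitness, t, t_max)
        # (3)-(5) Forward communication and sigma_{m+1}/sigma_{m+2} processing,
        # condensed here into selecting the comparison membrane's best object.
        candidates = [membranes[o]["gbest"] for o in membranes] + [best]
        best = min(candidates, key=fitness).copy()
        # (6) Backward communication: promoter q shares the best back to sigma_o.
        for o in membranes:
            if fitness(best) < fitness(membranes[o]["gbest"]):
                membranes[o]["gbest"] = best.copy()
    # (7) Termination and output.
    return best, fitness(best)
```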

4.5. Complexity Analysis

The complexity of ECPSO-MTP for solving clustering problems is discussed and analyzed in this subsection. First, some notation is clarified: M represents the total number of data points in the dataset, D represents the dimension of the data points, and K represents the number of clusters in the dataset; usually D ≪ M and K ≪ M.
The computation of the proposed ECPSO-MTP consists of three main phases: initialization, evolution, and communication. In the initialization phase, the computation time is mostly determined by the cost of evaluating the fitness function; the time needed for the distance computation of one object is KMD, which can be simplified to M since K and D are small relative to M. The computation time for all objects in the system is nM, so the complexity of the initialization phase of ECPSO-MTP is O(nM). In the evolution phase, the time needed to execute the evolution rules once in σ_o is nM. Under the distributed and parallel processing of P systems, the computation time of the evolution phase for σ_1 to σ_m is nM, and the computation time of the evolution phase for σ_{m+1} is mM. Thus, the complexity of the evolution phase of ECPSO-MTP is O((n + m)M). In the communication phase, each communication rule in ECPSO-MTP needs to be executed once; the communication time from σ_o to σ_{m+1}, from σ_{m+1} to σ_{m+2}, and from σ_o to σ_{m+2} together equals 3, so the complexity of the communication phase is O(3). Therefore, the time required by the whole system for one iteration is (n + m)M + 3, the cumulative time of the system is nM + ((n + m)M + 3) t_max, and the complexity of the proposed ECPSO-MTP is O(nM t_max).

5. The Proposed ECPSO-MTP for Data Clustering

In this section, computational experiments on accuracy and convergence speed are conducted to verify the effectiveness of the proposed ECPSO-MTP, using a suite of commonly used datasets, including artificial and UCI datasets. To further demonstrate the validity of the proposed ECPSO-MTP, five previous clustering approaches are employed for comparison. All clustering approaches are executed on a Dell desktop computer equipped with an Intel Core i7-8550U processor (1.80 GHz) and 16 GB of RAM, running under Windows 11.

5.1. Test Datasets

Test datasets, including artificial and UCI datasets, are commonly used to verify the effectiveness of clustering approaches. In this comparison experiment, eight datasets, consisting of two artificial and six UCI datasets that have previously been reported and adopted as benchmarks, are used. The two artificial datasets, Data_9_2 and Square4, were generated manually in the existing literature [64]. The six UCI datasets, Iris, Newthyroid, Seeds, Yeast, Glass, and Wine, are from the UCI Machine Learning Repository [65]. Additional details of these test datasets are briefly described in Table 1.

5.2. Comparison with Other Existing Approaches

Five existing clustering approaches, including classic PSO, the genetic algorithm (GA), differential evolution (DE), PSO with environmental factors (EPSO) [59], and PSO with an enhanced learning strategy and crossover operator (PSO-LC) [43], are compared in this computational experiment. EPSO and PSO-LC employ different strategies to improve performance: EPSO integrates environmental factors based on a partitioning of the dataset into the generation process of particles, while PSO-LC applies three strategies, namely altering the exemplar particles, modifying the adjusting parameters (including the inertia weight and learning factors), and integrating PSO with GA. The values of the parameters employed in these comparative approaches are presented in Table 2.
Specifically, the mean squared error (MSE) [67] is used as the fitness function of the comparative clustering approaches in the comparison experiments, and it is determined by (12) in the following:
min_{c_1, c_2, …, c_K} f = min f(c_1, c_2, …, c_K) = min (1/M) Σ_{i=1}^{K} Σ_{j=1}^{M} ρ_{ij} ‖x_j − c_i‖²,   (12)
where ρ is the partition matrix of the dataset: ρ_{ij} = 1 indicates that data point x_j belongs to cluster i, and otherwise ρ_{ij} = 0. x_j represents the j-th data point in the dataset, for 1 ≤ j ≤ M, and c_i is the clustering center of cluster i, for 1 ≤ i ≤ K. The convergence results of the six tested comparative approaches on the eight datasets for typical runs are depicted in Figure 3.
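For reference, the MSE fitness of Equation (12) can be computed as in the short sketch below (an assumed NumPy formulation in which the partition matrix ρ corresponds to a hard nearest-center assignment).

```python
import numpy as np

def mse_fitness(centers, data):
    """Eq. (12): mean squared distance of each point to its nearest center,
    i.e. rho assigns every data point to the closest cluster center."""
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (M, K)
    return d2.min(axis=1).mean()
```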
It can be clearly observed in Figure 3 that the proposed ECPSO-MTP achieves mostly the best MSE values on the eight test datasets. To eliminate randomness, each tested clustering method was independently run 50 times. Simple statistical results, including the worst (Worst), best (Best), and mean (Mean) values and the standard deviations (SD) of MSE, for these optimization approaches on the test datasets are presented in Table 3.
Compared to the other clustering approaches, the proposed ECPSO-MTP mostly achieves the best performance, as shown in Table 3. The external index Purity [68] is introduced as an evaluation criterion for the clustering results obtained by these approaches, and it is defined by (13) in the following [69]:
Purity = (1/M) Σ_{i=1}^{K} max_j |D_i ∩ C_j|,   (13)
where D_i is the i-th cluster obtained by a clustering approach, C_j is the j-th real cluster label, and |D_i ∩ C_j| is the number of data points that belong to both D_i and C_j. The clustering results in terms of Purity obtained by these clustering approaches on the eight datasets are given in Table 4; overall, ECPSO-MTP has the best performance among the six clustering approaches. Therefore, all these comparison results validate the clustering efficiency of the proposed ECPSO-MTP.
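The Purity of Equation (13) can be computed as follows (a small illustrative sketch; `pred` and `truth` are assumed to be integer label arrays with labels starting at 0).

```python
import numpy as np

def purity(pred, truth):
    """Eq. (13): for each predicted cluster, count its overlap with the best
    matching true class, sum the maxima, and normalize by the dataset size M."""
    total = 0
    for c in np.unique(pred):
        members = truth[pred == c]      # true labels of points in cluster c
        total += np.bincount(members).max()
    return total / len(truth)
```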

5.3. Friedman Test Statistics

To investigate the statistical significance of the proposed ECPSO-MTP, the Friedman test is employed, with the average MSE obtained by the tested clustering approaches in the comparison experiments as the reference measure [70]. In this case, the null hypothesis assumes that all tested comparative approaches achieve equal MSE values in the experiment. More details about the Friedman test are given as follows [71]:
The average MSE obtained by the comparative approaches on the eight datasets is ranked from smallest to largest. Let r_{ij} be the rank associated with approach j on dataset i, for 1 ≤ i ≤ 8 and 1 ≤ j ≤ 6; r_{ij} = 1 corresponds to the lowest MSE value among these approaches. The average of these ranks is (p + 1)/2, which equals 3.5 in this case, where p is the number of tested clustering approaches, p = 6. The Friedman test statistic is defined by (14) in the following:
χ_r² = (12 / (n p (p + 1))) Σ_{j=1}^{p} (Σ_{i=1}^{n} r_{ij})² − 3n(p + 1),   (14)
where n is the number of datasets, n = 8. The ranks of the averages obtained by the tested clustering approaches are presented in Table 5. The value of the Friedman test statistic χ_r² is 38.393, as shown in Table 5. At the significance level α = 5% with p − 1 degrees of freedom, the critical value of χ² is 11.070, so the null hypothesis is rejected. Therefore, the proposed ECPSO-MTP is statistically superior to the other clustering approaches in terms of the MSE measure.
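The statistic of Equation (14) can be reproduced with a few lines of NumPy, as sketched below on a hypothetical rank matrix `ranks` of shape (n datasets, p approaches); the result is then compared with the chi-square critical value for p − 1 degrees of freedom.

```python
import numpy as np

def friedman_statistic(ranks):
    """Eq. (14): `ranks` has shape (n, p), one row of ranks 1..p per dataset."""
    n, p = ranks.shape
    col_sums = ranks.sum(axis=0)        # rank sum of each approach
    return 12.0 / (n * p * (p + 1)) * (col_sums ** 2).sum() - 3.0 * n * (p + 1)
```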

6. The Proposed ECPSO-MTP for Image Segmentation

In this section, comparison experiments on accuracy, conducted on several test images, are introduced to further investigate the clustering performance of the proposed ECPSO-MTP. Eight images from the public BSDS500 image segmentation dataset are employed as test images, and three compared approaches, classic PSO, K-means, and spectral clustering (SC), are adopted to demonstrate the effectiveness of the proposed ECPSO-MTP.

6.1. Test Images

In the clustering experiment, the eight test images mentioned above, Lawn, Agaric, Church, Castle, Elephants, Lane, Starfish, and Pyramid, which are provided by the Berkeley Segmentation Dataset and Benchmark, are utilized [72]. The size of all test images is 481 × 321, and the test images are depicted in Figure 4. The number of clusters is set to K = 2 for the Lawn and Agaric images, K = 3 for the Church, Castle, Elephants, and Lane images, and K = 4 for the Starfish and Pyramid images [25]. More details about the label information of these images are graphically depicted in Figure 5.

6.2. Comparison with Other Clustering Approaches

In particular, before the clustering experiment, the simple linear iterative clustering (SLIC) technique is first applied to segment the test images into superpixels [73]. The total number of superpixels is set to 200 for each test image. The segmented images obtained by applying SLIC to the eight test images are given in Figure 6. Then the clustering approaches, including the proposed ECPSO-MTP and the compared approaches K-means, SC, and classic PSO, are employed to cluster the superpixels. The segmentation results of these clustering approaches on the test images are shown in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14.
The proposed ECPSO-MTP exhibits the best segmentation quality on most test images, as shown in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14. Furthermore, each tested clustering approach was independently run 50 times to reduce the influence of random factors. Simple statistical results of Purity for these clustering approaches are presented in Table 6, which validate the effectiveness of the proposed ECPSO-MTP.
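A possible preprocessing pipeline of this kind, using scikit-image's SLIC implementation, is shown below as an illustrative sketch of the described setup (not the authors' exact code); the mean color of each superpixel then serves as a feature vector to be clustered, e.g., by ECPSO-MTP.

```python
import numpy as np
from skimage import io, segmentation

def superpixel_features(image_path, n_segments=200):
    """Segment an image into ~200 SLIC superpixels and return one mean-color
    feature vector per superpixel, together with the superpixel label map."""
    img = io.imread(image_path)
    labels = segmentation.slic(img, n_segments=n_segments, start_label=0)
    feats = np.stack([img[labels == s].mean(axis=0) for s in np.unique(labels)])
    return feats, labels
```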

6.3. Friedman Test Statistics

In the Friedman statistical test, the average Purity obtained by these comparative approaches in the segmentation experiment is used as the reference measure. The null hypothesis assumes that all clustering approaches achieve equal Purity [71]. The ranks of the averages of the comparative approaches are presented in Table 7. The Friedman test statistic χ_r², computed from Table 7 according to Equation (14), is 24.000. At a significance level of 5% with 3 degrees of freedom, the critical value of χ² is 7.815, so the null hypothesis is rejected. Thus, the proposed ECPSO-MTP is statistically superior to the other clustering approaches in terms of Purity.

7. Conclusions

In this paper, an attractive extended TP, referred to as ECPSO-MTP, is constructed and proposed by combining MTP with an improved PSO based on environmental factors and a crossover operator. The proposed ECPSO-MTP aims to establish a highly efficient P system for solving clustering problems by integrating the computing framework of MTP with the evolutionary mechanism of PSO. Two types of evolution rules are defined and described in ECPSO-MTP and applied in different elementary membranes to evolve objects, including evolution rules with environmental factors and evolution rules with crossover operators. The environmental factor, which uses partitioning information, is randomly allocated to objects to replace the global best and participate in the evolution process. The crossover operator is accomplished through the hybridization of the global best of elementary membranes to reject randomness. Information exchange and resource sharing between elementary membranes are accomplished by the evolutional symport rules with promoters of MTP, including forward and backward communication rules. Finally, comparison experiments conducted on benchmark clustering and image segmentation datasets validate the clustering efficiency of the proposed ECPSO-MTP.
The distributed and parallel processing of P systems provides an efficient way of solving complex problems with polynomial or linear complexity. However, the application of P systems is limited by computational incompleteness and implementation difficulty. To address these limitations, many variations of P systems have been designed and developed, particularly extended P systems integrated with EC, and the proposed ECPSO-MTP is an instance of such an extended P system. The monodirectional connection between elementary membranes in ECPSO-MTP is straightforward and simple to implement. Future studies will explore the combination of this novel variant of P systems with other SI approaches. In addition, reducing the complexity of this extended P system may become an important area of future research. Finally, it should be noted that only low-dimensional or small datasets were used in the clustering experiments, and the clustering performance of ECPSO-MTP may be limited on high-dimensional or large datasets. Much additional research is necessary to effectively utilize this extended P system for solving complex clustering problems.

Author Contributions

Conceptualization, X.L.; methodology and writing—original draft preparation, L.W. and Q.R.; validation, J.Q.; formal analysis, L.G.; data curation, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The two artificial datasets used in the experiments were provided by the Indian Statistical Institute (Artificial datasets), which can be accessed at: https://www.isical.ac.in/content/research-data (accessed on 16 June 2020). The six UCI datasets used in the experiments were provided by the UCI Machine Learning Repository (UCI datasets), which can be accessed at: http://archive.ics.uci.edu/ml/datasets.php (accessed on 20 July 2022). The eight segmentation images used in the experiments were provided by the Berkeley Computer Vision Group, the Berkeley Segmentation Dataset and Benchmark (BSDS500), which can be accessed at: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/ (accessed on 13 October 2021).

Acknowledgments

This research project was supported by the Doctoral Foundation of Shandong Jianzhu University under grant number X21008Z.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Păun, G. Membrane computing: An introduction. Theor. Comput. Sci. 2002, 287, 73–100.
2. Păun, G. Computing with membranes. J. Comput. Syst. Sci. 2000, 61, 108–143.
3. Pan, L.; Zeng, L.; Song, T. Membrane Computing: An Introduction, 1st ed.; Huazhong University of Science and Technology Press: Wuhan, China, 2012; pp. 1–10.
4. Păun, G. Membrane computing and economics: A general view. Int. J. Comput. Commun. Control 2016, 11, 105–112.
5. Freund, R.; Păun, G.; Pérez-Jiménez, M.J. Tissue P systems with channel states. Theor. Comput. Sci. 2005, 330, 101–116.
6. Li, Y.Y.; Song, B.S.; Zeng, X.X. Rule synchronization for monodirectional tissue-like P systems with channel states. Inf. Comput. 2022, 285, 104895.
7. Păun, A.; Păun, G. The power of communication: P systems with symport/antiport. New Gener. Comput. 2002, 20, 295–305.
8. Jiang, Z.N.; Liu, X.Y.; Zang, W.K. A kernel-based intuitionistic weight fuzzy k-modes algorithm using coupled chained P system combines DNA genetic rules for categorical data. Neurocomputing 2023, 528, 84–96.
9. Song, B.; Hu, Y.; Adorna, H.N.; Xu, F. A quick survey of tissue-like P systems. Rom. J. Inf. Sci. Technol. 2018, 21, 310–321.
10. Kujur, S.S.; Sahana, S.K. Medical image registration utilizing tissue P systems. Front. Pharmacol. 2022, 13, 949872.
11. Orellana-Martín, D.; Valencia-Cabrera, L.; Pérez-Jiménez, M.J. The environment as a frontier of efficiency in tissue P systems with communication rules. Theor. Comput. Sci. 2023, 956, 113812.
12. Pan, L.Q.; Song, B.S.; Zandron, C. On the computational efficiency of tissue P systems with evolutional symport/antiport rules. Knowl. Based Syst. 2023, 262, 110266.
13. Luo, Y.; Guo, P.; Jiang, Y.; Zhang, Y. Timed homeostasis tissue-like P systems with evolutional symport/antiport rules. IEEE Access 2020, 8, 131414–131424.
14. Orellana-Martín, D.; Valencia-Cabrera, L.; Song, B.; Pan, L.; Pérez-Jiménez, M.J. Tissue P systems with evolutional communication rules with two objects in the left-hand side. Nat. Comput. 2023, 22, 119–132.
15. Song, B.S.; Pan, L.Q. The computational power of tissue-like P systems with promoters. Theor. Comput. Sci. 2016, 641, 43–52.
16. Song, B.; Zeng, X.; Jiang, M.; Pérez-Jiménez, M.J. Monodirectional tissue P systems with promoters. IEEE Trans. Cybern. 2021, 51, 438–450.
17. Song, B.S.; Zeng, X.X.; Paton, R.A. Monodirectional tissue P systems with channel states. Inf. Sci. 2021, 546, 206–219.
18. Song, B.S.; Li, K.L.; Zeng, X.X. Monodirectional evolutional symport tissue P systems with promoters and cell division. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 332–342.
19. Song, B.S.; Li, K.L.; Zeng, X.X. Monodirectional evolutional symport tissue P systems with channel states and cell division. Sci. China Inf. Sci. 2023, 66, 139104.
20. Zhang, G.; Pérez-Jiménez, M.J.; Gheorghe, M. Real-Life Applications with Membrane Computing, 1st ed.; Springer Press: Berlin/Heidelberg, Germany, 2017; pp. 11–12.
21. Tian, X.; Liu, X.Y. Improved hybrid heuristic algorithm inspired by tissue-like membrane system to solve job shop scheduling problem. Processes 2021, 20, 219.
22. Zhang, G.X.; Chen, J.X.; Gheorghe, M. A hybrid approach based on differential evolution and tissue membrane systems for solving constrained manufacturing parameter optimization problems. Appl. Soft Comput. 2013, 13, 1528–1542.
23. Peng, H.; Shi, P.; Wang, J.; Riscos-Núñez, A.; Pérez-Jiménez, M.J. Multiobjective fuzzy clustering approach based on tissue-like membrane systems. Knowl. Based Syst. 2017, 125, 74–82.
24. Lagos-Eulogio, P.; Seck-Tuoh-Mora, J.C.; Hernandez-Romero, N.; Medina-Marin, J. A new design method for adaptive IIR system identification using hybrid CPSO and DE. Nonlinear Dyn. 2017, 88, 2371–2389.
25. Wang, L.; Liu, X.; Qu, J.; Zhao, Y.; Jiang, Z.; Wang, N. An extended tissue-like P system based on membrane systems and quantum-behaved particle swarm optimization for image segmentation. Processes 2022, 10, 287.
26. Luo, Y.G.; Guo, P.; Zhang, M.Z. A framework of ant colony P system. IEEE Access 2019, 7, 157655–157666.
27. Peng, H.; Wang, J. A hybrid approach based on tissue P systems and artificial bee colony for IIR system identification. Neural Comput. Appl. 2017, 28, 2675–2685.
28. Chen, H.J.; Liu, X.Y. An improved multi-view spectral clustering based on tissue-like P systems. Sci. Rep. 2022, 12, 18616.
29. Sharif, E.A.; Agoyi, M. Using tissue-like P system to solve the nurse rostering problem at the Medical Centre of the National University of Malaysia. Appl. Nanosci. 2022, 2022, 3145.
30. Issac, T.; Silas, S.; Rajsingh, E.B. Investigative prototyping a tissue P system for solving distributed task assignment problem in heterogeneous wireless sensor network. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 3685–3702.
31. Chen, H.J.; Liu, X.Y. Reweighted multi-view clustering with tissue-like P system. PLoS ONE 2023, 18, e269878.
32. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
33. Kassoul, K.; Zufferey, N.; Cheikhrouhou, N.; Belhaouari, S.B. Exponential particle swarm optimization for global optimization. IEEE Access 2022, 10, 78320–78344.
34. Wang, D.S.; Tan, D.P.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2017, 22, 387–408.
35. Bi, J.X.; Zhao, M.O.; Chai, D.S. PSOSVRPos: WiFi indoor positioning using SVR optimized by PSO. Expert Syst. Appl. 2023, 222, 119778.
36. Anbarasi, M.P.; Kanthalakshmi, S. Power maximization in standalone photovoltaic system: An adaptive PSO approach. Soft Comput. 2023, 27, 8223–8232.
37. Peng, J.; Li, Y.; Kang, H.; Shen, Y.; Sun, X.; Chen, Q. Impact of population topology on particle swarm optimization and its variants: An information propagation perspective. Swarm Evol. Comput. 2022, 69, 100990.
38. Harrison, K.; Engelbrecht, A.; Berman, B. Self-adaptive particle swarm optimization: A review and analysis of convergence. Swarm Intell. 2018, 12, 187–226.
39. Huang, C.; Zhou, X.; Ran, X.; Liu, Y.; Deng, W.; Deng, W. Co-evolutionary competitive swarm optimizer with three-phase for large-scale complex optimization problem. Inf. Sci. 2023, 619, 2–18.
40. Chen, Z.; Wang, Y.; Chan, T.H.; Li, X.; Zhao, S. A particle swarm optimization algorithm with sigmoid increasing inertia weight for structural damage identification. Appl. Sci. 2022, 12, 3429.
41. Wang, J.; Wang, X.; Li, X.; Yi, J. A hybrid particle swarm optimization algorithm with dynamic adjustment of inertia weight based on a new feature selection method to optimize SVM parameters. Entropy 2023, 25, 531.
42. Duan, Y.; Chen, N.; Chang, L.; Ni, Y.; Kumar, S.V.N.S.; Zhang, P. CAPSO: Chaos adaptive particle swarm optimization algorithm. IEEE Access 2022, 10, 29393–29405.
43. Molaei, S.; Moazen, H.; Najjar-Ghabel, S.; Farzinvash, L. Particle swarm optimization with an enhanced learning strategy and crossover operator. Knowl. Based Syst. 2021, 215, 106768.
44. Pan, L.Q.; Zhao, Y.; Li, L.H. Neighborhood-based particle swarm optimization with discrete crossover for nonlinear equation systems. Swarm Evol. Comput. 2022, 69, 101019.
45. Das, P.K.; Jena, P.K. Multi-robot path planning using improved particle swarm optimization algorithm through novel evolutionary operators. Appl. Soft Comput. 2020, 92, 106312.
46. Pu, H.; Song, T.; Schonfeld, P.; Li, W.; Zhang, H.; Hu, J.; Peng, X.; Wang, J. Mountain railway alignment optimization using stepwise & hybrid particle swarm optimization incorporating genetic operators. Appl. Soft Comput. 2019, 78, 41–57.
47. Gu, Q.; Wang, Q.; Chen, L.; Li, X.; Li, X. A dynamic neighborhood balancing-based multi-objective particle swarm optimization for multi-modal problems. Expert Syst. Appl. 2022, 205, 117313.
48. Moazen, H.; Molaei, S.; Farzinvash, L.; Sabaei, M. PSO-ELPM: PSO with elite learning, enhanced parameter updating, and exponential mutation operator. Inf. Sci. 2023, 628, 70–91.
49. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24.
50. Cao, Y.L.; Zhang, H. Comprehensive learning particle swarm optimization algorithm with local search for multimodal functions. IEEE Trans. Evol. Comput. 2019, 23, 718–731.
51. Wang, F.; Zhang, H.; Li, K.; Lin, Z.; Yang, J.; Shen, X.-L. A hybrid particle swarm optimization algorithm using adaptive learning strategy. Inf. Sci. 2018, 436, 162–177.
52. Zhou, X.; Zhou, S.; Han, Y.; Zhu, S. Levy flight-based inverse adaptive comprehensive learning particle swarm optimization. Math. Biosci. Eng. 2022, 19, 5241–5268.
53. Ge, Q.; Guo, C.; Jiang, H.; Lu, Z.; Yao, G.; Zhang, J.; Hua, Q. Industrial power load forecasting method based on reinforcement learning and PSO-LSSVM. IEEE Trans. Cybern. 2022, 52, 1112–1124.
54. Li, W.; Liang, P.; Sun, B.; Sun, Y.; Huang, Y. Reinforcement learning-based particle swarm optimization with neighborhood differential mutation strategy. Swarm Evol. Comput. 2023, 78, 101274.
55. Lu, L.W.; Zhang, J.; Sheng, J.A. Enhanced multi-swarm cooperative particle swarm optimizer. Swarm Evol. Comput. 2022, 69, 100989.
56. Li, D.; Wang, L.; Guo, W.; Zhang, M.; Hu, B.; Wu, Q. A particle swarm optimizer with dynamic balance of convergence and diversity for large-scale optimization. Appl. Soft Comput. 2023, 32, 109852.
  57. Zhang, Y.; Gong, D.W.; Ding, Z.H. Handling multi-objective optimization problems with a multi-swarm cooperative particle swarm optimizer. Expert Syst. Appl. 2011, 38, 13933–13941. [Google Scholar] [CrossRef]
  58. Li, T.; Shi, J.; Deng, W.; Hu, Z. Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl. Soft Comput. 2022, 121, 108731. [Google Scholar] [CrossRef]
  59. Song, W.; Ma, W.; Qiao, G. Particle swarm optimization algorithm with environmental factors for clustering analysis. Soft Comput. 2017, 21, 283–293. [Google Scholar] [CrossRef]
  60. Nenavath, H.; Jatoth, E.K. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043. [Google Scholar] [CrossRef]
  61. Pozna, C.; Precup, R.-E.; Horvath, E.; Petriu, E.M. Hybrid particle filter-particle swarm optimization algorithm and application to fuzzy controlled servo systems. IEEE Trans. Fuzzy Syst. 2022, 30, 4286–4297. [Google Scholar] [CrossRef]
  62. Song, B.S.; Zhang, C.; Pan, L.Q. Tissue-like P systems with evolutional symport/antiport rules. Inf. Sci. 2017, 378, 177–193. [Google Scholar] [CrossRef]
  63. Pan, T.; Xu, J.; Jiang, S.; Xu, F. Cell-like spiking neural P systems with evolution rules. Soft Comput. 2019, 23, 5401–5409. [Google Scholar] [CrossRef]
  64. Artificial Datasets. Available online: https://www.isical.ac.in/content/research-data (accessed on 16 June 2020).
  65. UCI Repository of Machine Learning Databases. Available online: http://archive.ics.uci.edu/ml/datasets.php (accessed on 20 July 2022).
  66. Peng, H.; Wang, J.; Shi, P.; Pérez-Jiménez, M.J.; Riscos-Núñez, A. An extended membrane system with active membranes to solve automatic fuzzy clustering problems. Int. J. Neural Syst. 2016, 26, 1650004. [Google Scholar] [CrossRef]
  67. Malinen, M.I.; Franti, P. Clustering by analytic functions. Inf. Sci. 2012, 217, 31–38. [Google Scholar] [CrossRef]
  68. Tang, Y.; Huang, J.; Pedrycz, W.; Li, B.; Ren, F. A fuzzy cluster validity index induced by triple center relation. IEEE Trans. Cybern. 2023, 99, 1–13. [Google Scholar] [CrossRef] [PubMed]
  69. Ge, F.H.; Liu, X.Y. Density peaks clustering algorithm based on a divergence distance and tissue-like P system. Appl. Sci. Basel 2023, 13, 2293. [Google Scholar] [CrossRef]
  70. Kumari, B.; Kumar, S. Chaotic gradient artificial bee colony for text clustering. Soft Comput. 2016, 20, 1113–1126. [Google Scholar]
  71. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  72. The Berkeley Segmentation Dataset and Benchmark. Available online: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/ (accessed on 13 October 2021).
  73. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The membrane structure of the proposed ECPSO-MTP.
Figure 2. The communication relationship in the proposed ECPSO-MTP.
Figure 3. Comparison of convergence results from six test clustering approaches on eight datasets. (a) Data_9_2; (b) Square4; (c) Iris; (d) Newthyroid; (e) Seeds; (f) Yeast; (g) Glass; (h) Wine.
Figure 4. Eight test images. (a) Lawn; (b) Agaric; (c) Church; (d) Castle; (e) Elephants; (f) Lane; (g) Starfish; (h) Pyramid.
Figure 5. The labels of eight test images. (a) Lawn; (b) Agaric; (c) Church; (d) Castle; (e) Elephants; (f) Lane; (g) Starfish; (h) Pyramid.
Figure 6. Segmentation results obtained by SLIC for eight test images. (a) Lawn; (b) Agaric; (c) Church; (d) Castle; (e) Elephants; (f) Lane; (g) Starfish; (h) Pyramid.
Figure 7. Comparison of clustering results on the Lawn image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 8. Comparison of clustering results on the Agaric image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 9. Comparison of clustering results on the Church image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 10. Comparison of clustering results on the Castle image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 11. Comparison of clustering results on the Elephants image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 12. Comparison of clustering results on the Lane image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 13. Comparison of clustering results on the Starfish image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Figure 14. Comparison of clustering results on the Pyramid image. (a) K-means; (b) SC; (c) PSO; (d) ECPSO-MTP.
Table 1. Description of eight test datasets in the comparison experiment.
Datasets     Data Instances   Features   Clusters
Data_9_2     900              2          9
Square4      1000             2          4
Iris         150              4          3
Newthyroid   215              5          3
Seeds        210              7          3
Yeast        1484             8          10
Glass        214              9          6
Wine         178              13         3
Table 2. Preferences in the comparison experiments.
Parameters       PSO        GA     DE       EPSO      PSO-LC   ECPSO-MTP
N                200        200    200      200       200      200
t_max            200        200    200      200       200      200
c1, c2           2, 2       N      N        0.6, 3    (0,1)    2, 2
c3               N          N      N        (0,1)     N        N
r1, r2           (0,1)      N      N        (0,1)     N        (0,1)
r3               N          N      N        (0,1)     N        N
ω_min, ω_max     0.4, 1.2   N      N        0.4, 0.6  N        0.4, 1.2
θ_ω              N          N      N        N         0.99     N
P_c              N          0.6    0.6      N         N        N
P_m              N          0.02   N        N         N        N
θ_φ              N          N      N        N         0.994    0.994
θ_γ              N          N      N        N         0.995    0.995
F                N          N      (0.5,1)  N         N        N
m                N          N      N        N         N        10 [66]
Table 3. Performance of six clustering approaches on test datasets (MSE).
Datasets     Statistics   PSO          GA           DE           EPSO          PSO-LC        ECPSO-MTP
Data_9_2     Worst        0.7260       0.5597       0.6505       0.5240        0.6223        0.5248
             Best         0.6476       0.5240       0.5230       0.5215        0.5218        0.5212
             Mean         0.6854       0.5407       0.5361       0.5222        0.5343        0.5214
             S.D.         0.0184       0.0079       0.0275       0.0006        0.0252        0.0007
Square4      Worst        7.3164       7.0736       6.9807       6.9520        6.9525        6.9520
             Best         6.9780       6.9733       6.9520       6.9519        6.9519        6.9519
             Mean         7.1100       7.0195       6.9569       6.9519        6.9520        6.9519
             S.D.         0.0862       0.0274       0.0051       6.81 × 10−6   8.59 × 10−5   5.49 × 10−6
Iris         Worst        1.0158       0.5873       0.9578       0.6619        0.5729        0.5263
             Best         0.5268       0.5450       0.5266       0.5263        0.5287        0.5263
             Mean         0.5663       0.5621       0.5433       0.5357        0.5418        0.5263
             S.D.         0.1178       0.0099       0.0614       0.0236        0.0103        1.09 × 10−5
Newthyroid   Worst        152.0870     157.1877     175.1488     136.4225      138.1924      132.9317
             Best         135.9901     133.9973     133.1129     132.8665      135.0995      132.8379
             Mean         143.5047     139.2553     137.6728     133.9052      136.5018      132.8426
             S.D.         4.3043       4.6420       7.2724       1.2267        0.7346        0.0187
Seeds        Worst        4.8188       3.0923       3.0838       3.0387        3.2346        2.7968
             Best         2.8261       2.9253       2.8306       2.8119        2.7975        2.7968
             Mean         3.2956       3.0059       2.9215       2.8862        2.9042        2.7968
             S.D.         0.7528       0.0382       0.0467       0.0530        0.1070        2.03 × 10−6
Yeast        Worst        0.0822       0.0797       0.0661       0.0465        0.0668        0.0341
             Best         0.0521       0.0578       0.0586       0.0388        0.0494        0.0306
             Mean         0.0721       0.0674       0.0623       0.0427        0.0585        0.0315
             S.D.         0.0082       0.0076       0.0016       0.0019        0.0043        0.0008
Glass        Worst        5.0427       3.3902       2.7652       2.2345        3.8383        1.6684
             Best         2.7025       2.1930       2.4661       1.7229        2.3312        1.5705
             Mean         3.5114       2.6536       2.6326       2.0258        2.6359        1.5795
             S.D.         0.5915       0.3020       0.0713       0.1198        0.2870        0.0241
Wine         Worst        13,616.9408  13,415.0092  14,752.6728  13,362.1693   13,396.0147   13,318.5101
             Best         13,318.6675  13,327.0278  13,320.7178  13,325.9217   13,327.5156   13,318.4817
             Mean         13,407.7736  13,361.6433  13,355.5274  13,340.4952   13,353.1583   13,318.4858
             S.D.         61.9835      18.9552      201.6508     8.3583        18.6450       0.0057
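For readers who want to reproduce scores of this kind, the sketch below computes a clustering MSE under the common convention that MSE is the mean squared Euclidean distance from each sample to its nearest cluster center. This is a minimal illustrative baseline only; the exact objective and normalization used in ECPSO-MTP and the compared approaches may differ.

```python
import numpy as np

def clustering_mse(X, centers):
    """Mean squared Euclidean distance of each sample to its nearest cluster center."""
    # (n_samples, k) matrix of squared distances between every sample and every center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # Assign each sample to its closest center and average the squared distances.
    return d2.min(axis=1).mean()

# Toy example: two tight groups with centers placed on them give a small MSE.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.05, 0.0], [5.05, 5.0]])
print(clustering_mse(X, centers))  # ≈ 0.0025
```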
Table 4. Performance of six clustering approaches on eight datasets (Purity).
Datasets     Statistics   PSO      GA       DE       EPSO          PSO-LC        ECPSO-MTP
Data_9_2     Mean         0.8413   0.9128   0.9179   0.9212        0.9144        0.9214
             S.D.         0.0382   0.0078   0.0304   0.0022        0.0201        0.0015
Square4      Mean         0.9313   0.9333   0.9346   0.9350        0.9350        0.9350
             S.D.         0.0033   0.0027   0.0018   3.36 × 10−16  0.0001        3.36 × 10−16
Iris         Mean         0.8771   0.8923   0.8857   0.8907        0.8932        0.8933
             S.D.         0.0551   0.0106   0.0319   0.0056        0.0107        0.0026
Newthyroid   Mean         0.8007   0.8361   0.8536   0.8541        0.8502        0.8605
             S.D.         0.0448   0.0144   0.0273   0.0392        0.0298        7.85 × 10−16
Seeds        Mean         0.8510   0.8879   0.8929   0.8952        0.8947        0.8976
             S.D.         0.0906   0.0077   0.0065   0.0119        0.0084        7.85 × 10−16
Yeast        Mean         0.3494   0.3559   0.3925   0.4385        0.3785        0.5044
             S.D.         0.0260   0.0266   0.0287   0.0235        0.0179        0.0215
Glass        Mean         0.4911   0.4967   0.5185   0.5355        0.4965        0.5877
             S.D.         0.0450   0.0199   0.0154   0.0222        0.0152        0.0062
Wine         Mean         0.7013   0.7016   0.7018   0.7022        0.7022        0.7022
             S.D.         0.0058   0.0022   0.0041   2.24 × 10−16  2.24 × 10−16  2.24 × 10−16
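The purity values in Tables 4 and 6 can be reproduced with the standard definition of the metric: each predicted cluster is credited with its most frequent ground-truth label, and the matched samples are divided by the dataset size. The snippet below is a minimal sketch of that standard metric (assuming integer-encoded labels), not the authors' implementation.

```python
import numpy as np

def purity(labels_true, labels_pred):
    """Fraction of samples that carry the majority ground-truth label of their cluster."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    matched = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        matched += np.bincount(members).max()  # size of the dominant true label in cluster c
    return matched / labels_true.size

# Toy example: one of six samples falls in the "wrong" cluster -> purity = 5/6.
print(purity([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))  # ≈ 0.8333
```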
Table 5. Computation results obtained by Friedman test statistics (Mean).
Datasets       PSO   GA    DE      EPSO     PSO-LC   ECPSO-MTP
Data_9_2       6     5     4       2        3        1
Square4        6     5     4       1        3        1
Iris           6     5     4       2        3        1
Newthyroid     6     5     4       2        3        1
Seeds          6     5     4       2        3        1
Yeast          6     5     4       2        3        1
Glass          6     5     3       2        4        1
Wine           6     5     4       2        3        1
Total Rank     48    40    31      15       25       8
Average Rank   6     5     3.875   1.875    3.125    1
Deviation      2.5   1.5   0.375   −1.625   −0.375   −2.5
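The rank statistics in Table 5 follow directly from the Mean rows of Table 3: on each dataset the approach with the smallest mean MSE receives rank 1, tied values share the lower rank, the ranks are totalled and averaged per approach, and the Deviation row is the average rank minus the overall mean rank (m + 1)/2 = 3.5 for m = 6 approaches (Table 7 is built the same way with m = 4, giving a mean rank of 2.5). The following sketch reproduces Table 5 from the Table 3 means; the min_ranks helper is an illustrative reimplementation, not code from the paper.

```python
import numpy as np

approaches = ["PSO", "GA", "DE", "EPSO", "PSO-LC", "ECPSO-MTP"]

# Mean MSE per dataset, copied from Table 3 (columns follow the order above).
means = np.array([
    [0.6854,     0.5407,     0.5361,     0.5222,     0.5343,     0.5214],      # Data_9_2
    [7.1100,     7.0195,     6.9569,     6.9519,     6.9520,     6.9519],      # Square4
    [0.5663,     0.5621,     0.5433,     0.5357,     0.5418,     0.5263],      # Iris
    [143.5047,   139.2553,   137.6728,   133.9052,   136.5018,   132.8426],    # Newthyroid
    [3.2956,     3.0059,     2.9215,     2.8862,     2.9042,     2.7968],      # Seeds
    [0.0721,     0.0674,     0.0623,     0.0427,     0.0585,     0.0315],      # Yeast
    [3.5114,     2.6536,     2.6326,     2.0258,     2.6359,     1.5795],      # Glass
    [13407.7736, 13361.6433, 13355.5274, 13340.4952, 13353.1583, 13318.4858],  # Wine
])

def min_ranks(row):
    """Rank 1 = smallest value; tied values share the lower ('min') rank."""
    order = np.sort(row)
    return np.array([int(np.searchsorted(order, v, side="left")) + 1 for v in row])

ranks = np.vstack([min_ranks(r) for r in means])    # one row of ranks per dataset
total = ranks.sum(axis=0)                           # Total Rank row
average = total / means.shape[0]                    # Average Rank row
deviation = average - (len(approaches) + 1) / 2.0   # Deviation from the mean rank 3.5

for name, t, a, d in zip(approaches, total, average, deviation):
    print(f"{name:10s} total={t:3d} average={a:.3f} deviation={d:+.3f}")
```

Running this prints totals of 48, 40, 31, 15, 25, and 8 and the corresponding average ranks and deviations, matching Table 5.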
Table 6. Performance of four clustering approaches on eight test images (Purity).
Image        Statistics   K-Means       SC            PSO           ECPSO-MTP
Lawn         Worst        0.9933        0.9937        0.9934        0.9943
             Best         0.9933        0.9937        0.9934        0.9943
             Mean         0.9933        0.9937        0.9934        0.9943
             S.D.         2.28 × 10−16  2.34 × 10−16  5.61 × 10−16  3.42 × 10−16
Agaric       Worst        0.9477        0.9517        0.9485        0.9567
             Best         0.9477        0.9517        0.9485        0.9567
             Mean         0.9477        0.9517        0.9485        0.9567
             S.D.         1.24 × 10−16  2.65 × 10−16  4.49 × 10−16  3.16 × 10−16
Church       Worst        0.7892        0.8902        0.7876        0.7967
             Best         0.8646        0.8902        0.8903        0.8962
             Mean         0.8593        0.8902        0.8641        0.8903
             S.D.         0.0147        1.12 × 10−16  0.0445        0.0239
Castle       Worst        0.9302        0.9411        0.9327        0.9413
             Best         0.9377        0.9411        0.9377        0.9413
             Mean         0.9346        0.9411        0.9368        0.9413
             S.D.         0.0033        3.36 × 10−16  0.0019        4.49 × 10−16
Elephants    Worst        0.6930        0.8881        0.7669        0.7624
             Best         0.9248        0.8940        0.7820        0.9248
             Mean         0.7802        0.8882        0.7807        0.8978
             S.D.         0.0862        0.0008        0.0029        0.0558
Lane         Worst        0.8205        0.9668        0.8318        0.8241
             Best         0.9730        0.9730        0.9730        0.9795
             Mean         0.9124        0.9676        0.9537        0.9696
             S.D.         0.0668        0.0027        0.0482        0.0671
Starfish     Worst        0.4805        0.5232        0.4910        0.5911
             Best         0.5860        0.7259        0.6287        0.8435
             Mean         0.5597        0.6233        0.5829        0.6344
             S.D.         0.0406        0.0620        0.0183        0.0639
Pyramid      Worst        0.7200        0.7366        0.7497        0.7914
             Best         0.7661        0.8394        0.7594        0.8410
             Mean         0.7533        0.7648        0.7551        0.8354
             S.D.         0.0157        0.0421        0.0038        0.0073
Table 7. Computation results of the Friedman test statistics (Mean).
Images         K-Means   SC     PSO   ECPSO-MTP
Lawn           4         2      3     1
Agaric         4         2      3     1
Church         4         2      3     1
Castle         4         2      3     1
Elephants      4         2      3     1
Lane           4         2      3     1
Starfish       4         2      3     1
Pyramid        4         2      3     1
Total Rank     32        16     24    8
Average Rank   4         2      3     1
Deviation      1.5       −0.5   0.5   −1.5
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
