Article

Decentralized Microenvironment-Based Particle Swarm Optimization Algorithm for Insect-Intelligent Building Platform

by Zhenya Zhang 1, Shaojie Xu 2, Ping Wang 2, Hongmei Cheng 3,* and Shuguang Zhang 4
1 Anhui Province Key Laboratory of Intelligent Building & Building Energy Saving, Anhui Jianzhu University, Hefei 230022, China
2 School of Electronic and Information Engineering, Anhui Jianzhu University, Hefei 230022, China
3 School of Economics and Management, Anhui Jianzhu University, Hefei 230022, China
4 Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
* Author to whom correspondence should be addressed.
Buildings 2025, 15(11), 1778; https://doi.org/10.3390/buildings15111778
Submission received: 27 March 2025 / Revised: 11 May 2025 / Accepted: 20 May 2025 / Published: 23 May 2025
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract

Information processing and control tasks for each unit within a building are typically completed inside that unit, so information processing within a building naturally exhibits distributed characteristics. The insect-intelligent building platform is an intelligent building platform designed around these distributed processing requirements, giving the platform self-organizing and self-adaptive features. The microenvironment network is the fundamental network that supports the implementation of an insect-intelligent building platform. This paper presents update mechanisms for the velocity and position of particles for PSO in a microenvironment and, based on these mechanisms, proposes a Microenvironment-based Particle Swarm Optimization (MPSO) algorithm for quickly solving optimization problems over a microenvironment network. Experimental results show that, within microenvironment networks, both synchronous and asynchronous implementations of the MPSO algorithm can solve optimization problems faster than the PSO algorithm while maintaining a precision similar to that of the PSO algorithm.

1. Introduction

The term building covers both buildings and structures: a human-made environment designed and constructed to satisfy social life needs by applying known materials and technologies, specific scientific laws, Feng Shui concepts, and aesthetic principles [1]. Over decades of development, building automation systems (BAS) have matured with well-established fieldbus technologies, device ecosystems, and standardized engineering specifications that govern all phases from system design to commissioning. However, contemporary BAS implementations predominantly adopt a centralized-decentralized control architecture for functionally partitioned subsystems, which inherently restricts synergistic operations among subsystems. This structural limitation compromises connection reliability, real-time responsiveness, and system-level integration, critical metrics that remain suboptimal in practice [1,2,3]. Recent years have witnessed notable advancements in intelligent spatial unit control paradigms leveraging distributed architectures, offering promising solutions to these long-standing interoperability challenges [1,2,3,4,5,6,7,8,9,10,11].
In general, if the main body of a building is divided into several non-overlapping building spatial units, the building can be considered a composite of all building spatial units [1,2,3,4,5,6,7]. Similarly, if each component of a building facility is considered a building facility unit, the building facility can be regarded as a combination of several facility units [3,4,5,6,7,8,9]. If each building spatial unit and building facility unit in a building are treated as one building unit in the building, the function of the building is a combination of the functions of each building unit, and the maintenance of the building’s function relies on the maintenance of the functions of each building unit [1,4,5,6,7].
If all spatial units within a building are considered together with whether they are adjacent, the entire set of spatial units forms a spatial unit network. Similarly, the building facility can be regarded as a network of its facility units by considering whether those units are connected via pipelines, power lines, or other types of lines or pipes. Since each facility unit within a building is deployed within an appropriate spatial unit, using the criterion of whether a facility unit is deployed within a spatial unit, the network of facility units is coupled in space with the spatial unit network. If each building spatial unit and each building facility unit is treated as one building unit, a building can be viewed as a network of interconnected building units [1,3,4,5,6]. Given that building units naturally possess spatial distribution and local characteristics, if the maintenance of a building unit’s function is viewed as an internal information processing task, the building’s information processing process inherently exhibits distributed and localized characteristics [4,5,6,8,9,10,11,12,13,14]. The maintenance of building unit functions is either conducted independently within each unit or carried out within each unit with assistance from adjacent units, with all data processed within each building unit being collected and handled locally and autonomously.
Because the information processing required for the maintenance of building functionalities is distributed across the building units [4,5,6], the maintenance of building functionalities is a combination of the information processing processes within each building unit. In an insect-intelligent building platform [4,5,6,7,8], each building unit is equipped with a Building Information Processing Unit (BPU) to satisfy the information processing and control needs of that unit. All information processing and control tasks within a building unit are ultimately completed by the BPU deployed in that unit. When a BPU needs reference data from other building units to complete information processing and equipment control tasks within its local building unit, it can only obtain these data by querying the BPUs in adjacent building units. If each information processing unit deployed within a building unit is viewed as a microenvironment, and it is stipulated that only spatially adjacent microenvironments can exchange information with each other, then the information processing and control network that supports an insect-intelligent building platform is a network of microenvironments based on information exchange criteria. All control and information processing tasks within the building are completed within this microenvironment network. Emerging machine learning and artificial intelligence algorithms can help computer networks become smarter [15].
In building units, issues such as intelligent detection of sensor faults, area occupant counting, and tactic selection for environment control are frequently modeled as optimization problems [12,16,17,18,19,20,21,22]. Artificial neural networks, evolutionary computation, and other intelligent computational methods are frequently employed to solve these optimization problems [17,23,24,25,26]. The Particle Swarm Optimization (PSO) algorithm [27] has been widely applied to solve relevant optimization problems within buildings owing to its computational efficiency and simplicity [28,29,30,31,32,33,34,35,36].
Furthermore, for multi-objective optimization problems in building information processing [37,38,39,40,41], the Niche-based Particle Swarm Optimization (NPSO) algorithm [42,43] has been widely applied to solve multimodal multi-objective optimization problems [37,38,39,40,41]. In the NPSO algorithm, the search space is divided into several regions, and all particles are uniformly distributed across these regions to form subpopulations. The radius of a subpopulation is calculated based on the position of the global best particle and all other particles in the subpopulation. A diversity preservation mechanism is also employed to prevent particles from prematurely converging to local optima within the same region. Thus, the NPSO algorithm effectively balances exploration and exploitation, which are crucial for solving complex optimization problems with multiple objectives and potential solutions.
In an insect-intelligent building platform system, the network for information processing is structured as a microenvironment network based on information exchange criteria across all microenvironments within the building. The microenvironment network is a decentralized computing environment, and existing implementations of the PSO algorithm and its various improvements have not been adapted to the loosely coupled distributed computing architecture provided by this decentralized environment. When the PSO algorithm is employed to solve optimization problems in this microenvironment network, two challenges arise. First, when the computational power of the building’s information processing units is limited, a large number of particles may lead to slow convergence of the PSO algorithm, potentially preventing the timely completion of the computational tasks. Conversely, if the number of particles is too low, the PSO algorithm may converge quickly but cannot guarantee solution precision. Second, the PSO algorithm is inherently stochastic and requires multiple runs to obtain a satisfactory approximation of the optimal solution. Executing the PSO algorithm multiple times requires more computational power from the processing nodes and additional time for solution attainment [44].
Because the computational power of each BPU in an insect-intelligent building platform system is generally weak, the performance of solving an optimization problem with the PSO algorithm in a microenvironment network can be enhanced by evenly distributing all particles across the microenvironments: a PSO algorithm instance with a small number of particles runs in each microenvironment, and essential computational results are exchanged among microenvironments through the network for collaborative computation. This method effectively distributes the optimization workload, which would otherwise be handled by a PSO algorithm in a single microenvironment, to every microenvironment node, significantly reducing the computational demands on each microenvironment while still allowing each microenvironment to approximate the optimal solution to the optimization problem. Consequently, multiple approximate optimal solutions can be acquired rapidly, facilitating the efficient and repeated solving of optimization problems. This study investigates the updating mechanisms for particle velocity and position of the PSO algorithm in a microenvironment network and presents a Microenvironment-based Particle Swarm Optimization (MPSO) algorithm. The MPSO algorithm is presented in Section 2. Section 3 evaluates the performance of the MPSO algorithm through experimental testing, Section 4 discusses the MPSO algorithm, and Section 5 summarizes the research and outlines future work.

2. Microenvironment-Based Particle Swarm Optimization Algorithm

In an insect-intelligent building platform, a building is considered a combination of several spatially adjacent building units, each equipped with a BPU. Each BPU represents a microenvironment within the building, and all the microenvironments within the building form a microenvironment network. The microenvironment network is responsible for all building control and information processing tasks. A certain number of sensing and control devices are typically deployed within each building unit, and the BPU located in it is responsible for sensing information and controlling devices in its designated building unit. Because the information processing or control results within a building unit may directly impact adjacent units, BPUs in neighboring units may communicate. Communication is essential for data sharing and collaborative computing. Consequently, in the microenvironment network, microenvironments that are spatially adjacent and require communication are considered neighboring microenvironments. Figure 1 illustrates the topology of a microenvironment network. In Figure 1, a gray dot represents one microenvironment node, and an edge indicates adjacency between two microenvironments. Logically, the microenvironment nodes are arranged in a 4-row by 5-column grid, totaling 20 microenvironment nodes, as shown in Figure 1. In the microenvironment network illustrated in Figure 1, each node can only communicate and exchange data with its directly adjacent nodes; any two non-adjacent microenvironment nodes cannot communicate directly or exchange data.
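To make the grid structure concrete, the following short Python sketch builds the adjacency list of such a 4-row by 5-column microenvironment network, assuming nodes are numbered row by row (the node numbering is illustrative and not prescribed by the platform).

# Adjacency list of the 4 x 5 microenvironment network in Figure 1.
# Nodes are numbered row by row (0..19); only horizontally or vertically
# adjacent nodes can communicate, matching the grid topology described above.
ROWS, COLS = 4, 5

def grid_neighbors(rows: int, cols: int) -> dict[int, list[int]]:
    neighbors = {}
    for r in range(rows):
        for c in range(cols):
            node = r * cols + c
            adj = []
            if r > 0:
                adj.append((r - 1) * cols + c)   # node above
            if r < rows - 1:
                adj.append((r + 1) * cols + c)   # node below
            if c > 0:
                adj.append(r * cols + (c - 1))   # node to the left
            if c < cols - 1:
                adj.append(r * cols + (c + 1))   # node to the right
            neighbors[node] = adj
    return neighbors

if __name__ == "__main__":
    topology = grid_neighbors(ROWS, COLS)
    print(topology[0])   # corner node: 2 neighbors
    print(topology[6])   # interior node: 4 neighbors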
v(t + 1) = ω × v(t) + c1 × r1 × (pBest − x(t)) + c2 × r2 × (gBest − x(t))        (1)
x(t + 1) = x(t) + v(t + 1)        (2)
Generally, in the PSO algorithm, a particle represents a feasible solution to the corresponding optimization problem. In practical applications, the velocity and position update tactics for the particles of the PSO algorithm are generally specified by Equations (1) and (2). In Equations (1) and (2), v(t) represents the velocity of particles, and x(t) represents the position of particles at time t. ω is the inertia factor, c1 is the cognitive (individual) acceleration coefficient, and c2 is the social (global) acceleration coefficient. r1 and r2 are random numbers uniformly distributed in the range [0, 1]. pBest denotes the personal best position of the particles, and gBest denotes the global best position of all particles.
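For reference, a minimal Python sketch of the update rules in Equations (1) and (2) is given below; the variable names mirror the notation above, and the default parameter values are placeholders rather than recommended settings.

import numpy as np

def pso_update(x, v, p_best, g_best, w=0.8, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle, per Equations (1) and (2)."""
    r1, r2 = np.random.rand(), np.random.rand()   # uniform random numbers in [0, 1]
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new
    return x_new, v_new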
When the PSO algorithm is implemented in a microenvironment network, each particle must be assigned to a microenvironment, and each microenvironment may contain several particles (for simplicity, the number of particles in each microenvironment can be the same). The following assumptions are required to implement the PSO algorithm in a microenvironment network:
(1)
There are several particles in each microenvironment.
(2)
For any given microenvironment, gBestL denotes the historical best position of all particles in the microenvironment.
(3)
For a particular microenvironment with m neighboring microenvironments, gBestLi denotes the best position of all particles in the ith neighboring microenvironment, and gBestA is defined according to Equation (3). In the microenvironment-based PSO algorithm, for any particle in a given microenvironment with position x(t) and velocity v(t) at time t, the velocity v(t + 1) and position x(t + 1) at time t + 1 are determined using Equations (4) and (2), respectively. Equation (3) is known as the data exchange tactic in the microenvironment-based PSO algorithm, whereas Equations (4) and (2) describe the update tactic of the particle position within the algorithm.
gBestA = min(gBestL, gBestL1, gBestL2, …, gBestLm)        (3)
v(t + 1) = ω × v(t) + c1 × rand() × (pBest − x(t)) + c2 × rand() × (gBestA − x(t))        (4)
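The data exchange tactic of Equation (3) selects, among the local gBestL and the neighbors' gBestLi, the position with the smallest fitness value; the sketch below illustrates this selection and the velocity update of Equation (4), assuming the fitness function fun() is supplied by the caller.

import numpy as np

def select_gbest_a(gbest_l, neighbor_gbests, fun):
    """Equation (3): pick the position with the smallest fitness among the local best
    and the historical best positions queried from neighboring microenvironments."""
    candidates = [gbest_l] + list(neighbor_gbests)
    return min(candidates, key=fun)

def mpso_velocity(x, v, p_best, gbest_a, w=0.8, c1=2.0, c2=2.0):
    """Equation (4): the MPSO velocity update, with gBestA in place of gBest."""
    return (w * v
            + c1 * np.random.rand() * (p_best - x)
            + c2 * np.random.rand() * (gbest_a - x))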
A full description of the MPSO (Microenvironment-based Particle Swarm Optimization) algorithm is presented in Algorithm 1. In Algorithm 1, the fitness value of each particle is used to evaluate the quality of the particle’s position; the smaller the fitness value, the better the particle’s position. The fitness function fun() is used to calculate the fitness value of a particle, with the position of the particle serving as the independent variable of the function. It is important to note that in microenvironment networks, the fitness function fun() used to calculate the fitness of each particle is the same in every microenvironment. As shown in Algorithm 1, the following three constraints are imposed:
(1)
Within each microenvironment, a small number of particles serve as a population that can independently execute a PSO algorithm instance adapted to the local characteristics of the microenvironment’s information.
(2)
A microenvironment can exchange information with its neighboring microenvironments. Neighboring microenvironments can query each other for the positions of their historically best particles.
(3)
Non-adjacent microenvironment nodes cannot exchange information in the microenvironment network.
Algorithm 1: Microenvironment-based PSO algorithm (MPSO)
Input: 
n, number of particles in a microenvironment; d, the dimension of position for each particle; c1 and c2, learning factors; ω, inertial factor; maxN, maximum number of iterations; fun, fitness function; pNeighbor, the list of neighboring microenvironments
Output: 
gBestL, the historical optimal position of all local particles in the microenvironment.
(1)
Initialize the velocity and position of each particle in the microenvironment.
(2)
Set the initial position of each particle as the initial value of its historical optimal position.
(3)
Calculate gBestL, the local best position for all local particles in the microenvironment.
(4)
k = 1;
(5)
while k <= maxN
(6)
For each neighbor microenvironment in pNeighbor, query its historical best position gBestLi, where i = 1, 2, …, m and m is the number of neighboring microenvironment nodes.
(7)
Calculate gBestA according to Equation (3) for each local particle in the microenvironment
(8)
Update the velocity and position of each local particle according to Equations (4) and (2).
(9)
Calculate the fitness value of particles using the function fun () if their position is updated.
(10)
If the fitness value at the new position is smaller than the particle’s historical best fitness value, set the particle’s historical best position to the current position of the particle.
(11)
Update gBestL, the global best position for all local particles.
(12)
k = k + 1
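The steps above can be summarized in the following Python sketch of Algorithm 1 as it would run inside a single microenvironment; the helper query_neighbor_gbest() stands in for the network query of step 6 and is a placeholder whose realization (synchronous or asynchronous, as discussed below) depends on the communication layer.

import numpy as np

def mpso_microenvironment(fun, p_neighbor, query_neighbor_gbest,
                          n=20, d=2, c1=2.0, c2=2.0, w=0.8,
                          max_n=200, x_low=-100.0, x_high=100.0):
    """Sketch of Algorithm 1 (MPSO) for a single microenvironment.

    fun                  -- fitness function (smaller is better)
    p_neighbor           -- list of neighboring microenvironment identifiers
    query_neighbor_gbest -- callable(neighbor_id) -> that neighbor's gBestL
    """
    # Steps 1-2: initialize positions/velocities and the personal bests.
    x = np.random.uniform(x_low, x_high, size=(n, d))
    v = np.zeros((n, d))
    p_best = x.copy()
    p_best_fit = np.array([fun(p) for p in p_best])

    # Step 3: local best position of all particles in this microenvironment.
    g_best_l = p_best[np.argmin(p_best_fit)].copy()

    for _ in range(max_n):                                   # steps 4-5
        # Step 6: query each neighbor's historical best position.
        neighbor_bests = [query_neighbor_gbest(nb) for nb in p_neighbor]
        # Step 7: Equation (3), best position among local and neighbor bests.
        g_best_a = min([g_best_l] + neighbor_bests, key=fun)

        for i in range(n):
            # Step 8: Equations (4) and (2).
            r1, r2 = np.random.rand(), np.random.rand()
            v[i] = (w * v[i] + c1 * r1 * (p_best[i] - x[i])
                    + c2 * r2 * (g_best_a - x[i]))
            x[i] = x[i] + v[i]
            # Steps 9-10: update the personal best if the new position is better.
            fit = fun(x[i])
            if fit < p_best_fit[i]:
                p_best_fit[i] = fit
                p_best[i] = x[i].copy()

        # Step 11: update the local best of this microenvironment.
        g_best_l = p_best[np.argmin(p_best_fit)].copy()

    return g_best_l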
Let gBest be the best position of all particles within the microenvironment network at time k. For any microenvironment within the microenvironment network, fun(gBestL) ≥ fun(gBestA) ≥ fun(gBest). In the MPSO algorithm, the fitness values of the particle positions within a microenvironment can converge to fun(gBest) after a finite number of iterations because the velocity and position of each particle within any microenvironment are updated using Equations (4) and (2). Although the global best position gBest may not appear in every microenvironment, according to the connectivity of the microenvironment network and Equation (3), gBest can be transmitted to any microenvironment within a finite number of iterations, because any two microenvironments can exchange information within a finite number of steps in the network. In the extreme case, let all m microenvironments in the network be arranged in a line, with the first microenvironment labeled A and the last labeled B. If the optimal position gBest is the local optimal position within microenvironment A, then after at most m − 1 iterations the local optimal position in microenvironment B will be at least as good as the local optimal position that microenvironment A held m − 1 iterations earlier, that is, at least as good as the global optimal position from m − 1 iterations earlier. The data exchange strategy specified in Equation (3) can therefore propagate the global best position gBest to each microenvironment in the network. The fact that gBest can be transmitted to each microenvironment implies that the local best position gBestL of each microenvironment is within a finite number of steps of the global best position. This is manifested as only a slight difference in the fitness values of gBestL across microenvironments when Algorithm 1 finishes its operation.
In Algorithm 1, Step 5 determines whether Steps 6–12 continue to iterate by checking whether the iteration count k is less than or equal to the maximum iteration count maxN. If the condition “k ≤ maxN” in Step 5 is revised to “k ≤ maxN and fun(gBestL) > goalExpect”, Algorithm 1 controls whether Steps 6–12 continue to iterate based on both whether the iteration count k has exceeded maxN and whether the fitness of gBestL, the best position of all local particles, has reached the expected performance goalExpect. With this modified iteration control condition, if the maximum iteration count maxN is set to a very large value, the loop that updates gBestL is effectively controlled by whether the performance of gBestL has become better than the expected performance goalExpect.
For Algorithm 1, when gBestL, the best position of all local particles, is calculated at the k-th iteration, if Step 6 requires that the historical best position gBestLi of each neighboring microenvironment must correspond to the k-th iteration, Algorithm 1 is the synchronous form of the MPSO algorithm. On the other hand, if Step 6 only requires the historical best position gBestLi of each neighboring microenvironment from its most recent iteration, Algorithm 1 is the asynchronous form of the MPSO algorithm.
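The difference can be illustrated with a small sketch of the neighbor query in step 6; the latest_gbest() accessor, which is assumed to return both the neighbor's current iteration index and its gBestL, is hypothetical and only serves to contrast the two forms.

import time

def query_sync(neighbor, k, poll_interval=0.01):
    """Synchronous form: wait until the neighbor's gBestL corresponds to iteration k."""
    while True:
        iteration, g_best_l = neighbor.latest_gbest()   # hypothetical accessor
        if iteration >= k:
            return g_best_l
        time.sleep(poll_interval)

def query_async(neighbor, k):
    """Asynchronous form: take whatever gBestL the neighbor most recently published."""
    _, g_best_l = neighbor.latest_gbest()
    return g_best_l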

3. Experiment Results and Analysis

To verify the effectiveness of the MPSO algorithm, the six benchmark functions listed in Table 1 were used to evaluate the performance of Algorithm 1, namely its time consumption and precision, in our experiments. In addition, the influence of different communication methods and microenvironment network topologies on the performance of Algorithm 1 was tested. Figure 2 shows the shapes of these six benchmark functions when the number of independent variables d is 2. In the experiments, Algorithm 1 was executed in each microenvironment with n = 20, d = 2, c1 = 2, c2 = 2, and ω = 0.8, where n is the local particle number, d is the dimension of the particle position, c1 and c2 are learning factors, and ω is the inertia factor. The fitness function fun(), the maximum number of iterations maxN, and the neighboring microenvironment node list pNeighbor were determined based on the optimization problem, the iteration termination condition, and the topology of the microenvironment network.
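The six functions in Table 1 are standard benchmark functions; the NumPy definitions below are one possible implementation of the fitness function fun() for these experiments (the authors' exact implementations are not published with the paper).

import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def ackley(x):
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def rastrigin(x):
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def griewank(x):
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def schwefel(x):
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))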
The topology of the microenvironment network used in the experiment is shown in Figure 3, which corresponds to the distribution of spatial units of the Anhui Province Key Laboratory of Intelligent Building and Building Energy Saving [44]. When an optimization problem is solved using the PSO algorithm, the solution obtained by a single run of the PSO algorithm is typically a candidate solution of the optimization problem, since PSO is a stochastic search algorithm. To find the optimal solution to an optimization problem, the PSO algorithm must be applied multiple times, and the candidate solution with the best precision is chosen as the optimal solution. Because there are 12 microenvironments in the microenvironment network illustrated in Figure 3, the experiment first treated these 12 microenvironments as 12 independent computing environments in which the PSO algorithm independently solved the extremum of benchmark functions. When the PSO algorithm is used to independently solve the extremum of benchmark functions in each computing environment, n = 240, d = 2, c1 = 2, c2 = 2, and ω = 0.8.
The time consumed and the convergence precision of the PSO algorithm in the 12-node microenvironment network shown in Figure 3, with maxN = 200, when solving the extremum of each benchmark function are listed in Table 2. According to Table 2, when the PSO algorithm is repeatedly used to solve the extremum of the benchmark functions, the precision of the extremum candidates for all benchmark functions except the Schwefel function exceeds the desired precision of 1 × 10−5, while the best precision for the Schwefel function only reaches the 10−5 order of magnitude. Additionally, the significant differences between the values in the shaded “Maximum” and “Mean” precision columns and the corresponding “Minimum” precision values in Table 2 indicate the necessity of running the PSO algorithm multiple times when solving an optimization problem. Similarly, Table 3 lists the time, convergence precision, and number of iterations for independently solving the extremum of each benchmark function using the PSO algorithm in 12 microenvironments with maxN = 200 and a goal precision of 1 × 10−5. Table 4 lists the time and convergence precision for independently solving the extremum of each benchmark function using the PSO algorithm in 12 microenvironments with a goal precision of 1 × 10−5 and maxN = 20,000 to prevent excessively long computation times.
Given that the experimental data in Table 2 show that the performance of the PSO algorithm for the extremum of the Schwefel function solving is greater than 1 × 10−5, the goal performance for the minimum value of the Schwefel function in Table 3 is 1 × 10−4. For comparison, the goal performance for the minimum value of the Schwefel function in Table 4 is 1 × 10−5. Table 3 and Table 4 show that when the PSO algorithm is repeatedly used to solve the extremum of the benchmark functions, the performance of the minimum value of all benchmark functions, excluding the Schwefel function, surpasses 1 × 10−5, which is the goal performance. In contrast, the performance of the Schwefel function’s minimum value surpasses 1 × 10−4. According to the minimum precision in Table 2, Table 3 and Table 4, it can be concluded that the PSO algorithm can effectively obtain the extremum of all benchmark functions. Furthermore, the significant differences between the shaded “maximum” and “average” values of precision and the corresponding “minimum” values of precision in Table 2, Table 3 and Table 4 also highlight the necessity of running the PSO algorithm multiple times.
By comparing the data in the “Minimum” column of the precision part and the “Minimum”, “Maximum”, “Mean”, and “Total” columns of the time part of the Schwefel row in Table 2, Table 3 and Table 4, it can be observed that when the minimum of the Schwefel function is solved using the PSO algorithm, as shown in Table 2 and Table 3, the precision of the feasible solutions obtained by the PSO algorithm quickly surpasses 1 × 10−4. However, as the data in Table 4 show, even with continued iterations it remained difficult for the precision of the feasible solutions obtained by the PSO algorithm to reach 1 × 10−5 within 20,000 iterations.
With the microenvironment network of 12 microenvironment nodes illustrated in Figure 3, the experiment further tested the time and precision performance of the synchronous MPSO algorithm for the extremum search of each benchmark function. The experimental results for maxN = 200 are presented in Table 5. Table 6 presents the experimental results for maxN = 200 and 1 × 10−5 as the goal precision, where maxN is the maximum number of iterations. Table 7 lists the experimental results for 1 × 10−5 as the goal precision. In Table 6 and Table 7, the goal precision for the minimum Schwefel function is 1 × 10−4. In the experiment, the number of particles in the PSO algorithm was set to 240, with 20 particles in each microenvironment. Based on the minimum precision values listed in Table 5, Table 6 and Table 7, it can be concluded that the synchronous MPSO algorithm can effectively determine the extremum of all benchmark functions. Furthermore, by comparing the summary of the time part in Table 2, Table 3 and Table 4 with the maximum time part in Table 5, Table 6 and Table 7, it is evident that when an optimization problem must be solved multiple times under the same convergence conditions to obtain some candidate solutions, and the optimal solution for the desired optimization problem is selected from those multiple candidates, the synchronous MPSO algorithm consistently requires much less time than the PSO algorithm.
Using the 12-node microenvironment network illustrated in Figure 3, the experiment further tested the time and precision performance of the asynchronous MPSO algorithm for solving the extremum of each benchmark function. The experimental results for maxN = 200 are presented in Table 8. Table 9 presents the experimental results for maxN = 200 and a goal precision of 1 × 10−5, where maxN is the maximum number of iterations. Table 10 lists the experimental results for a goal precision of 1 × 10−5. In Table 8 and Table 9, the goal precision for the minimum of the Schwefel function is 1 × 10−4. In the experiment, the number of particles in the PSO algorithm was set to 240, with 20 particles in each microenvironment. Based on the minimum precision values listed in Table 8, Table 9 and Table 10, it can be concluded that the asynchronous MPSO algorithm can effectively determine the extremum of all benchmark functions. Furthermore, by comparing the time summaries in Table 2, Table 3 and Table 4 with the maximum time values in Table 8, Table 9 and Table 10, it is evident that when an optimization problem must be solved multiple times under the same convergence conditions to obtain several candidate solutions, from which the optimal solution is selected, the asynchronous MPSO algorithm consistently requires much less time than the PSO algorithm. By comparing the precision and time data in Table 5, Table 6 and Table 7 with those in Table 8, Table 9 and Table 10, it can be observed that the asynchronous MPSO algorithm generally achieves slightly lower precision than the synchronous MPSO algorithm in approximating the optimal solutions of all benchmark functions and also requires more time. The synchronous MPSO algorithm therefore performed better than the asynchronous MPSO algorithm in solving the optimal solutions of the benchmark functions.
The topology of the microenvironment network used for the MPSO performance data listed in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 is illustrated in Figure 3; it is an actual microenvironment network deployed in the Anhui Province Key Laboratory of Intelligent Building and Building Energy Saving and is consistent with the spatial unit distribution of the laboratory. Furthermore, to test the robustness of the MPSO algorithm, four different microenvironment network topologies, each with 12 microenvironment nodes, were used. The topologies of the four microenvironment networks are shown in Figure 4. These topologies were used to evaluate the performance of the synchronous MPSO algorithm in solving the extremum of all benchmark functions. The goal precision for the Schwefel function was set to 1 × 10−4, whereas that for the other benchmark functions was set to 1 × 10−5.
Four microenvironment networks are shown in Figure 4. Each network has 12 microenvironment nodes and a distinct topology: line, ring, fully connected, or random. In the line microenvironment network, except for the first and last nodes, each node has two neighbors, and each node can only communicate with its adjacent neighbors. In the ring microenvironment network, each node has two neighboring nodes, all nodes form a closed loop, and each node can communicate only with its neighbors. In the fully connected microenvironment network, every pair of microenvironment nodes can communicate directly. The random topology is a randomly generated connected graph with 12 nodes; when generating it, each node must have at least one neighbor and no more than six neighbors. This corresponds to the spatial structure of a building, where each spatial unit can have up to six neighbors located in the directions up, down, left, right, front, and back.
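For reference, the four 12-node topologies can be generated as adjacency lists roughly as follows; the random-topology generator shown here is only one possible construction that satisfies the stated constraints (connected, at least one and at most six neighbors per node) and is not necessarily the generator used in the experiments.

import random

N = 12  # number of microenvironment nodes

def line_topology(n=N):
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def ring_topology(n=N):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def full_topology(n=N):
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def random_topology(n=N, max_degree=6, seed=None):
    """Random connected graph: build a random spanning tree first (guaranteeing
    connectivity and at least one neighbor per node), then add extra edges while
    respecting the maximum degree of six."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    nodes = list(range(n))
    rng.shuffle(nodes)
    for idx in range(1, n):                       # random spanning tree
        candidates = [m for m in nodes[:idx] if len(adj[m]) < max_degree]
        a, b = nodes[idx], rng.choice(candidates)
        adj[a].add(b)
        adj[b].add(a)
    for _ in range(rng.randint(0, n)):            # a few additional edges
        a, b = rng.sample(range(n), 2)
        if b not in adj[a] and len(adj[a]) < max_degree and len(adj[b]) < max_degree:
            adj[a].add(b)
            adj[b].add(a)
    return {i: sorted(neigh) for i, neigh in adj.items()}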
To test the performance of the synchronous MPSO algorithm in solving the extremum of all benchmark functions across the four microenvironment network topologies, in further experiments the synchronous MPSO algorithm was executed in each microenvironment with n = 20, d = 2, c1 = 2, c2 = 2, and ω = 0.8, where n is the local particle number, d is the dimension of the particle position, c1 and c2 are the learning factors, and ω is the inertia factor. The goal precision for each benchmark function except the Schwefel function was set to 1 × 10−5, and the goal precision for the Schwefel function was 1 × 10−4. Meanwhile, maxN, the maximum number of iterations, was set to 20,000. The fitness function fun() was defined as each of the benchmark functions, and the list of neighboring microenvironment nodes pNeighbor was determined based on the microenvironment network topologies shown in Figure 4.
With the four microenvironment networks illustrated in Figure 4, the experiment further tested the time and precision performance of the synchronous MPSO algorithm in solving the extremum of each benchmark function. The time and precision data are listed in Table 11, Table 12, Table 13 and Table 14. The precision of the synchronous MPSO algorithm in solving the extremum of all benchmark functions reaches the goal precision in every microenvironment network topology. Furthermore, by comparing the “Total Time” data in Table 4 with the maximum time data in Table 11, Table 12, Table 13 and Table 14 for solving the extremum of each benchmark function, it is evident that when the best solution of an optimization problem needs to be selected from several candidate solutions, the time consumed by the synchronous MPSO algorithm is less than the time consumed by the PSO algorithm.
In current research and applications of building intelligence and building energy efficiency, common tasks such as fault diagnosis and optimization, fine-grained environmental control, energy efficiency optimization, occupant localization, and occupant behavior analysis rely on reconstructing the temperature field in a building spatial unit from temperature monitoring data collected at a few locations of interest; this reconstruction is itself a common optimization task. The PSO algorithm is frequently employed to optimize the temperature field reconstruction model.
In the following experiment, a feedforward neural network with one hidden layer was used as the temperature field reconstruction model for a building spatial unit. The model had an input layer with two input neurons, a hidden layer with two hidden neurons, and an output layer with one output neuron. The input layer was fully connected to the hidden layer, and the hidden layer was fully connected to the output layer. Each neuron in the hidden and output layers had a bias. The activation function for each hidden layer neuron is the hyperbolic tangent function, f(x) = (e^x − e^(−x)) / (e^x + e^(−x)), and the activation function for the output layer neuron is the linear function f(x) = x. During data collection, only five temperature monitoring points were arranged, at the northeast, southeast, southwest, and northwest corners and at the center of the building spatial unit. The volume of the collected data was minimal, whereas the spatial span was large. Therefore, it is not advisable to train the feedforward neural network directly with the BP algorithm. Hence, the model parameters (the weights of all neuron connections) were trained using the PSO algorithm. The performance of the MPSO algorithm in training the temperature field reconstruction model in the microenvironment topology illustrated in Figure 3 was tested in the following experiment. For comparison, the performance of the PSO algorithm in training the temperature field reconstruction model was also tested. The computation time, precision, and number of iterations of the PSO, synchronous MPSO, and asynchronous MPSO algorithms for training the temperature field reconstruction model are given in Table 15, and the temperature fields reconstructed by the models trained with the PSO, synchronous MPSO, and asynchronous MPSO algorithms are shown in Figure 5. In the experiment, the goal precision for the PSO, synchronous MPSO, and asynchronous MPSO algorithms was set to 1 × 10−8, and the maximum number of iterations (maxN) was set to 2000. The number of particles in the PSO algorithm was 600, whereas the number of particles in each microenvironment for both the synchronous and asynchronous MPSO algorithms was 50. To collect the data in Table 15, the PSO algorithm independently optimized the temperature field reconstruction model in 12 microenvironments, while the synchronous and asynchronous MPSO algorithms optimized the model collaboratively in the same 12 microenvironments.
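As an illustration of how such a model can be trained with PSO or MPSO, the sketch below packs the nine parameters of the 2-2-1 network (four input-to-hidden weights, two hidden biases, two hidden-to-output weights, and one output bias) into a single particle position vector and uses the mean squared error over the five monitoring points as the fitness; the packing order, the error measure, and the example temperature readings are assumptions for illustration and are not specified in the paper.

import numpy as np

def temperature_model(params, xy):
    """2-2-1 feedforward network: tanh hidden layer, linear output.
    params -- length-9 vector (a particle position): W1 (2x2), b1 (2), W2 (2), b2 (1)
    xy     -- array of shape (m, 2) with normalized (x, y) coordinates."""
    W1 = params[0:4].reshape(2, 2)
    b1 = params[4:6]
    W2 = params[6:8]
    b2 = params[8]
    hidden = np.tanh(xy @ W1 + b1)          # f(x) = (e^x - e^-x) / (e^x + e^-x)
    return hidden @ W2 + b2                 # linear output neuron

def reconstruction_fitness(params, xy_points, temperatures):
    """Mean squared error over the monitoring points; smaller is better."""
    predicted = temperature_model(params, xy_points)
    return float(np.mean((predicted - temperatures) ** 2))

# Example: five monitoring points (NE, SE, SW, NW, center) with made-up readings.
xy_points = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
temperatures = np.array([24.1, 24.8, 23.9, 23.6, 24.3])   # illustrative values only
fitness = lambda p: reconstruction_fitness(p, xy_points, temperatures)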
From the data in the “Min” column of the precision part in Table 15, it can be asserted that the PSO, synchronous MPSO, and asynchronous MPSO algorithms can all effectively optimize the temperature field reconstruction model. Moreover, by comparing the data in the “Min”, “Max”, and “Mean” columns of the precision part in Table 15, it is evident that when the PSO algorithm is used to optimize the temperature field reconstruction model, the precision of the model fluctuates significantly upon convergence. In contrast, when the synchronous and asynchronous MPSO algorithms were used to optimize the temperature field reconstruction model, the precision of the model remained stable. The similarity of the contour plots of the temperature fields reconstructed by the models trained with the three methods, shown in Figure 5, also indicates that these algorithms can optimize the temperature field reconstruction model. Furthermore, from the optimization times in Table 15, the maximum optimization time of the synchronous and asynchronous MPSO algorithms was significantly less than the total time taken by the PSO algorithm repeated 12 times (the total time being 12 times the average value). The maximum time required by the synchronous MPSO algorithm was also less than that of the asynchronous MPSO algorithm, so the synchronous MPSO algorithm is more suitable for optimizing the temperature field reconstruction model.
All experiments were completed on a microenvironment network platform with 12 PCs as microenvironment nodes. Each PC had one Intel Core i7 processor and 16 GB of DDR4 memory. The microenvironment network platform was constructed over a local area network. All PCs were purchased from the same vendor in the same batch to ensure consistent performance. The operating system of each microenvironment node was Linux Fedora fc38.x86_64. Each algorithm was implemented in Python 3.12.4. MariaDB 10.5.21 for Linux (x86_64) was deployed to store the local data in each microenvironment, and the UDP protocol was used for communication.
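As an illustration of the communication layer, the sketch below shows one way a microenvironment node could serve its gBestL to neighbors over UDP; the port number and the JSON message format are assumptions, since the paper only states that UDP was used.

import json
import socket

PORT = 50007  # assumed port; not specified in the paper

def serve_gbest(get_state):
    """Answer 'GBEST?' queries with this node's current iteration and gBestL.
    get_state is a callable returning (iteration, gBestL_as_list)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"GBEST?":
            iteration, g_best_l = get_state()
            reply = json.dumps({"iter": iteration, "gBestL": list(g_best_l)})
            sock.sendto(reply.encode(), addr)

def query_gbest(neighbor_ip, timeout=1.0):
    """Ask one neighboring microenvironment node for its gBestL."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(b"GBEST?", (neighbor_ip, PORT))
    reply, _ = sock.recvfrom(1024)
    msg = json.loads(reply.decode())
    return msg["iter"], msg["gBestL"]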

4. Discussion

The conceptual framework of microenvironment network construction is fundamentally rooted in the decentralized nature of information processing within building systems. The MPSO algorithm represents an innovative implementation of the PSO algorithm, specifically adapted for microenvironment networks. By aggregating computational resources across the entire network, the MPSO algorithm facilitates timely updates of localized prediction models, such as temperature field distributions, occupant fields, and other environmental parameters. Furthermore, it enables the dynamic refinement of fault detection frameworks and equipment health prognostics models. Traditionally, the updating of these local models has been confined to isolated computational units operating independently within their respective domains. In contrast, the MPSO algorithm introduces a paradigm-shifting capability, enabling the updating of local models through the strategic utilization of the entire building’s computational resources. This architectural innovation represents a fundamental departure from conventional, siloed approaches, establishing a distributed optimization framework that harnesses collective computational power while maintaining localized model specificity.
This paper investigates the application of microenvironment network modeling in decentralized insect-intelligent building platforms for information processing networks. However, during the adaptation and migration of the PSO algorithm to this information processing network, the scenario in which certain microenvironments experience failures or need to handle urgent tasks, thereby necessitating their withdrawal from collaborative computation, was not considered. This omission could have a critical impact on the accuracy of the optimal solutions obtained by the MPSO algorithm. In extreme cases, the withdrawal of key nodes of the microenvironment network from collaborative computation could lead to the collapse of the MPSO algorithm. Alternatively, even if the MPSO algorithm continues to execute normally, it may degrade into multiple independent runs within partitioned regions. This situation is analogous to multiple instances of the PSO algorithm solving for optimal solutions with fewer particles, which could significantly diminish the precision of the obtained optimal solutions.

5. Conclusions and Future Work

Considering the distribution of building spatial units, the diversity of facilities, and the spatial distribution of facility components within buildings, the information processing processes in buildings naturally possess distributed characteristics. The concept of an insect-intelligent building platform is built on these inherent distribution characteristics of building information processing. Distributed sensing and processing of building information are achieved by deploying building information processing units, equipped with sensing, control, and information processing capabilities, within building units. This insect-intelligent building platform technology offers a new perspective, with self-organizing and self-adaptive features, for intelligent building operation, maintenance, and building energy efficiency management. The microenvironment-based particle swarm optimization (MPSO) algorithm was designed by treating the foundational network of an insect-intelligent building platform as a microenvironment network and was tailored to the structure of that network. The MPSO algorithm integrates the computational resources distributed across the building information units, providing a rapid, evolutionary-computation-based approach for solving optimization problems within an insect-intelligent building platform. Because the MPSO algorithm requires the collaboration of microenvironment nodes within a microenvironment network to solve optimization problems, this study tested the computation time and precision of the synchronous and asynchronous forms of the MPSO algorithm in solving optimization problems and compared them with the computation time and precision of the PSO algorithm on the same problems. The experimental results show that both the synchronous and asynchronous MPSO algorithms can quickly solve the given optimization problems, with precision equivalent to that of the PSO algorithm.
To obtain multiple candidate solutions simultaneously, the MPSO algorithm iteratively updates the positions of particles through collaboration among adjacent microenvironments, and the best-performing candidate solution is selected as the optimal solution. This method for obtaining the optimal solution does not consider the possibility of microenvironments experiencing faults, or cases in which a microenvironment node has to exit the collaborative computation for an emergency task. The effect of specific microenvironments exiting the collaborative computation on the performance of the MPSO algorithm is an interesting issue. Furthermore, the influence of key nodes in the microenvironment network being forced to exit the collaborative computation on the performance of the MPSO algorithm is another critical issue that requires attention.
An optimization problem could also be solved with the PSO algorithm in a microenvironment network by assigning appropriate subtasks to each microenvironment node and aggregating the results of all subtasks at the node that issued the tasks to find the current best solution, iterating this process to obtain the overall optimal solution. With this approach, the solution to the optimization problem is only available in the microenvironment that started the solving process. The performance of the MPSO algorithm implemented in this manner is worth investigating. Furthermore, the identification of microenvironment network topologies that support this implementation method and the calculation of the microenvironment network diameter are topics that merit further research.

Author Contributions

Z.Z., H.C.: Methodology, Conceptualization, Validation. S.X., P.W.: Methodology, Software, Writing—original draft. H.C., S.Z.: Methodology, Visualization, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Discipline (Major) Top-notch Talent Academic Funding Project of Anhui Provincial University and College under Grant gxbjZD2021067 and gxyq2022030, the Innovative Leading Talents Project of Anhui Provincial Special Support Program under Grant [2022]21, the Key Project of Natural Science Research of Universities of Anhui Province under Grant 2024AH050246, the director foundation of the Anhui Province Key Laboratory of Intelligent Building & Building Energy Saving under Grant IBES2022ZR01 and IBES2024ZR02.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to express their deepest gratitude to Anhui Province Key Laboratory of Intelligent Building and Building Energy Saving at Anhui Jianzhu University for providing the necessary support to conduct this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Qi, S. Studies on Architecture of Decentralized System in Intelligent Building. Ph.D. Thesis, Tsinghua University, Beijing, China, 2015. [Google Scholar]
  2. Zhao, T.; Guan, X.; Chen, Y.; Hua, P. Study on Information Model of Building Spatial Unit for Distributed Architecture. Build. Sci. 2023, 39, 233–240. [Google Scholar] [CrossRef]
  3. Zhao, Q.; Xia, L.; Jiang, Z. Project report: New generation intelligent building platform techniques. Energy Inform. 2018, 1, 12–16. [Google Scholar] [CrossRef]
  4. Zhao, Q.; Jiang, Z. Insect intelligent building (I2B): A New Architecture of Building Control Systems Based on Internet of Things (IoT). Adv. Intell. Syst. Comput. 2019, 890, 457–466. [Google Scholar] [CrossRef]
  5. Xing, T.; Yan, H.; Wang, Y.; Wang, X.; Zhao, Q. Honeycomb: An open-source distributed system for smart buildings. Patterns 2022, 11, 3. [Google Scholar] [CrossRef]
  6. Zhang, Z.Y.; Fang, B.; Wang, P.; Cheng, H.M. A Local Area Network-Based Insect Intelligent Building Platform. Int. J. Pattern Recognit. Artif. Intell. 2023, 37, 2359004. [Google Scholar] [CrossRef]
  7. Jiang, Z.; Dai, Y.; Jiang, Y. Swarm intelligent building automation system. Heat. Vent. Air Cond. 2019, 11, 2–17. [Google Scholar]
  8. Jiang, Z.; Dai, Y. A decentralized, flat-structured building automation system. Energy Procedia 2017, 122, 68–73. [Google Scholar] [CrossRef]
  9. Zhang, Z.; Xu, S.; Wang, P.; Cheng, H.; Zhang, S. Simulation Experiment Study on Firefighting System under the Insect Intelligent Building Platform. Build. Sci. 2020, 36, 149–154. [Google Scholar] [CrossRef]
  10. Diao, P.H.; Shih, N.J. BIM-Based AR Maintenance System (BARMS) as an Intelligent Instruction Platform for Complex Plumbing Facilities. Appl. Sci. 2019, 9, 1592. [Google Scholar] [CrossRef]
  11. Laohaviraphap, N.; Waroonkun, T. Integrating Artificial Intelligence and the Internet of Things in Cultural Heritage Preservation: A Systematic Review of Risk Management and Environmental Monitoring Strategies. Buildings 2024, 14, 3979. [Google Scholar] [CrossRef]
  12. Hosamo, H.H.; Svennevig, P.R.; Svidt, K.; Han, D.; Nielsen, H.K. A Digital Twin predictive maintenance framework of air handling units based on automatic fault detection and diagnostics. Energy Build. 2022, 261, 111988. [Google Scholar] [CrossRef]
  13. Xu, J.; Li, D.; Gu, W.; Chen, Y. UAV-assisted task offloading for IoT in smart buildings and environment via deep reinforcement learning. Build. Environ. 2022, 222, 109218. [Google Scholar] [CrossRef]
  14. Li, Z.; Zhang, J.; Mu, S. Passenger spatiotemporal distribution prediction in airport terminals based on insect intelligent building architecture and its contribution to fresh air energy saving. Sustain. Cities Soc. 2023, 97, 104772. [Google Scholar] [CrossRef]
  15. Cerquitelli, T.; Meo, M.; Curado, M.; Skorin-Kapov, L.; Tsiropoulou, E.E. Machine Learning Empowered Computer Networks. Comput. Netw. 2023, 230, 109807. [Google Scholar] [CrossRef]
  16. Li, H.; Guo, Y.; Zhao, H.; Wang, Y.; Chow, D. Towards automated greenhouse: A state of the art review on greenhouse monitoring methods and technologies based on internet of things. Comput. Electron. Agric. 2021, 191, 106558. [Google Scholar] [CrossRef]
  17. Qaisar, I.; Sun, K.; Zhao, Q.; Xing, T.; Yan, H. Multi-Sensor-Based Occupancy Prediction in a Multi-Zone Office Building with Transformer. Buildings 2023, 13, 2002. [Google Scholar] [CrossRef]
  18. Li, C.; Lu, P.; Zhu, W.; Zhu, H.; Zhang, X. Intelligent Monitoring Platform and Application for Building Energy Using Information Based on Digital Twin. Energies 2023, 16, 6839. [Google Scholar] [CrossRef]
  19. Li, H.; Xu, J.; Zhao, Q.; Wang, S. Economic Model Predictive Control in Buildings Based on Piecewise Linear Approximation of Predicted Mean Vote Index. IEEE Trans. Autom. Sci. Eng. 2023, 21, 3384–3395. [Google Scholar] [CrossRef]
  20. Ahmad, N.; Egan, M.; Gorce, J.M.; Dibangoye, J.S.; Le Mouël, F. Codesigned Communication and Data Analytics for Condition-Based Maintenance in Smart Buildings. IEEE Internet Things J. 2023, 10, 15847–15856. [Google Scholar] [CrossRef]
  21. Piras, G.; Muzi, F.; Tiburcio, V.A. Digital Management Methodology for Building Production Optimization through Digital Twin and Artificial Intelligence Integration. Buildings 2024, 14, 2110. [Google Scholar] [CrossRef]
  22. Sun, K.; Zhao, Q.; Zou, J. A review of building occupancy measurement systems. Energy Build. 2020, 216, 109965. [Google Scholar] [CrossRef]
  23. Nguyen, T.P. AIoT-based indoor air quality prediction for building using enhanced metaheuristic algorithm and hybrid deep learning. J. Build. Eng. 2025, 105, 112448. [Google Scholar] [CrossRef]
  24. Koziel, S.; Pietrenko-Dabrowska, A.; Wójcikowski, M.; Pankiewicz, B. Nitrogen Dioxide Monitoring by Means of a Low-Cost Autonomous Platform and Sensor Calibration via Machine Learning with Global Data Correlation Enhancement. Sensors 2025, 25, 2352. [Google Scholar] [CrossRef]
  25. Xue, W.P.; Jia, N.; Zhao, M.T. Multi-agent deep reinforcement learning based HVAC control for multi-zone buildings considering zone-energy-allocation optimization. Energy Build. 2025, 329, 115241. [Google Scholar] [CrossRef]
  26. Jijun, S.; Daogang, P. Application research on improved genetic algorithm and active disturbance rejection control on quadcopters. Meas. Control. 2024, 57, 1347–1357. [Google Scholar] [CrossRef]
  27. Kennedy, J.; Eberhart, R. Optimization Particle Swarm. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  28. Malik, S.; Kim, D. Prediction-Learning Algorithm for Efficient Energy Consumption in Smart Buildings Based on Particle Regeneration and Velocity Boost in Particle Swarm Optimization Neural Networks. Energies 2018, 11, 1289. [Google Scholar] [CrossRef]
  29. Jing, W.; Yu, J.; Luo, W.; Li, C.; Liu, X. Energy-saving diagnosis model of central air-conditioning refrigeration system in large shopping mall. Energy Rep. 2021, 7, 4035–4046. [Google Scholar] [CrossRef]
  30. Fan, G.-F.; Zheng, Y.; Gao, W.-J.; Peng, L.-L.; Yeh, Y.-H.; Hong, W.-C. Forecasting residential electricity consumption using the novel hybrid model. Energy Build. 2023, 290, 113085. [Google Scholar] [CrossRef]
  31. Huang, Y.; Zhang, J.; Mo, Y.; Lu, S.; Ma, J. A Hybrid Optimization Approach for Residential Energy Management. IEEE Access 2020, 8, 225201–225209. [Google Scholar] [CrossRef]
  32. Li, L.; He, Y.; Zhang, H.; Fung, J.C.H.; Lau, A.K.H. Enhancing IAQ, thermal comfort, and energy efficiency through an adaptive multi-objective particle swarm optimizer-grey wolf optimization algorithm for smart environmental control. Build. Environ. 2023, 235, 110235. [Google Scholar] [CrossRef]
  33. Robyr, J.L.; Gonon, F.; Favre, L.; Niederhäuser, E.L. Intelligent multi-objective optimization for building energy and comfort management. J. King Saud Univ. Eng. Sci. 2018, 30, 195–204. [Google Scholar] [CrossRef]
  34. Malik, M.Z.; Shaikh, P.H.; Khatri, S.A.; Shaikh, M.S.; Baloch, M.H.; Shaikh, F. Analysis of multi-objective optimization: A technical proposal for energy and comfort management in buildings. Int. Trans. Electr. Energy Syst. 2021, 31, e12736. [Google Scholar] [CrossRef]
  35. Schito, E.; Conti, P.; Urbanucci, L.; Testi, D. Multi-objective optimization of HVAC control in museum environment for artwork preservation, visitors’ thermal comfort and energy efficiency. Build. Environ. 2020, 180, 107018. [Google Scholar] [CrossRef]
  36. Yu, M.G.; Pavlak, G.S. Extracting interpretable building control rules from multi-objective model predictive control data sets. Energy 2022, 240, 122691. [Google Scholar] [CrossRef]
  37. Zhuang, Y.; Huang, Y.; Liu, W. Integrating Sensor Ontologies with Niching Multi-Objective Particle Swarm Optimization Algorithm. Sensors 2023, 23, 5069. [Google Scholar] [CrossRef]
  38. Connolly, J.-F.; Granger, E.; Sabourin, R. Dynamic multi-objective evolution of classifier ensembles for video face recognition. Appl. Soft Comput. 2013, 13, 3149–3166. [Google Scholar] [CrossRef]
  39. Guo, W.; Zhang, B.; Chen, G.; Wang, X.; Xiong, N. A PSO-Optimized Minimum Spanning Tree-Based Topology Control Scheme for Wireless Sensor Networks. Int. J. Distrib. Sens. Netw. 2013, 9, 985410. [Google Scholar] [CrossRef]
  40. Zou, Y.; Yang, G.; Zheng, H.; Yi, J.; Hu, R. Dispatching for Integrated Energy System Based on Improved Niche PSO Algorithm. J. Electr. Power Syst. Autom. 2020, 32, 47–52+60. [Google Scholar] [CrossRef]
  41. Phoemphon, S.; Leelathakul, N.; So-In, C. An enhanced node segmentation and distance estimation scheme with a reduced search space boundary and improved PSO for obstacle-aware wireless sensor network localization. J. Netw. Comput. Appl. 2024, 221, 103783. [Google Scholar] [CrossRef]
  42. Brits, R.; Engelbrecht, A.P.; van den Bergh, F. A niching particle swarm optimizer. In Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning (SEAR 2002), Singapore, 22 April 2002; pp. 692–696. [Google Scholar]
  43. van den Bergh, F.; Engelbrecht, A.P. A new locally convergent particle swarm optimizer. In Proceedings of the 2002 IEEE International Conference on Systems, Man and Cybernetics, Yasmine Hammamet, Tunisia, 6–9 October 2002; pp. 94–99. [Google Scholar] [CrossRef]
  44. Chen, T.; Zhang, Z.; Wang, P.; Cheng, H. An Approach to Microenvironment-Based Particle Swarm Optimization Algorithm. In Lecture Notes in Electrical Engineering, Proceedings of the 2023 International Conference on Wireless Communications, Networking and Applications, Shenzhen, China, 29–31 December 2023; Springer: Singapore, 2025; Volume 1361, pp. 162–169. [Google Scholar] [CrossRef]
Figure 1. The topology of a microenvironment network.
Figure 2. 2D visualizations of benchmark functions.
Figure 3. The topology of a microenvironment network.
Figure 4. The topology of 4 microenvironment networks.
Figure 5. Comparison of temperature fields reconstructed by three PSO Algorithms.
Table 1. Benchmark functions.
Function | Formula | Range
Sphere | $f(x) = \sum_{i=1}^{d} x_i^2$ | [−100, 100]
Rosenbrock | $f(x) = \sum_{i=1}^{d-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$ | [−100, 100]
Ackley | $f(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{d} \sum_{i=1}^{d} x_i^2} \right) - \exp\left( \tfrac{1}{d} \sum_{i=1}^{d} \cos(2\pi x_i) \right) + 20 + e$ | [−100, 100]
Rastrigin | $f(x) = 10d + \sum_{i=1}^{d} \left[ x_i^2 - 10 \cos(2\pi x_i) \right]$ | [−100, 100]
Griewank | $f(x) = 1 + \tfrac{1}{4000} \sum_{i=1}^{d} x_i^2 - \prod_{i=1}^{d} \cos\left( \tfrac{x_i}{\sqrt{i}} \right)$ | [−100, 100]
Schwefel | $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left( \sqrt{|x_i|} \right)$ | [−500, 500]
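As a reading aid, the sketch below gives minimal NumPy implementations of the six benchmarks in Table 1. It is written from their standard textbook definitions rather than taken from the authors' test code; the function names and the convention that x is a one-dimensional array of length d are our own.

```python
import numpy as np

# Minimal sketch of the Table 1 benchmarks from their standard definitions
# (not the authors' code). Each function takes a 1-D NumPy array x of length d.

def sphere(x):
    # f(x) = sum_i x_i^2, global minimum 0 at x = 0
    return np.sum(x ** 2)

def rosenbrock(x):
    # f(x) = sum_{i=1..d-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2], minimum 0 at x = 1
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def ackley(x):
    # f(x) = -20 exp(-0.2 sqrt(mean(x^2))) - exp(mean(cos(2*pi*x))) + 20 + e, minimum 0 at x = 0
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def rastrigin(x):
    # f(x) = 10 d + sum_i [x_i^2 - 10 cos(2*pi*x_i)], minimum 0 at x = 0
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def griewank(x):
    # f(x) = 1 + sum_i x_i^2 / 4000 - prod_i cos(x_i / sqrt(i)), minimum 0 at x = 0
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def schwefel(x):
    # f(x) = 418.9829 d - sum_i x_i sin(sqrt(|x_i|)), minimum ~0 near x_i ≈ 420.9687
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))
```

All six functions have a known global minimum value of 0 (approximately 0 for Schwefel, given the 418.9829 offset), so the Precision columns in the tables that follow can presumably be read as the gap between the best objective value found and that optimum.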
Table 2. The performance of the PSO algorithm (maxN = 200).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Time Summary (s) | Precision Min | Precision Max | Precision Mean
Sphere | 7 | 9 | 8.0000 | 96 | 1.4279 × 10−11 | 2.6983 × 10−8 | 5.7411 × 10−9
Rosenbrock | 6 | 8 | 7.3333 | 88 | 1.6056 × 10−9 | 3.9061 × 10−5 | 6.5947 × 10−5
Ackley | 8 | 10 | 9.0000 | 108 | 2.9338 × 10−6 | 8.7710 × 10−4 | 1.3989 × 10−4
Rastrigin | 7 | 9 | 8.4167 | 101 | 1.2292 × 10−12 | 8.6732 × 10−6 | 1.4882 × 10−6
Griewank | 7 | 9 | 8.4167 | 101 | 1.1171 × 10−7 | 0.0074 | 0.0042
Schwefel | 7 | 9 | 7.9167 | 95 | 2.5455 × 10−5 | 118.4384 | 11.1567
Table 3. The performance of the PSO algorithm (maxN = 200, goal performance = 1 × 10−5).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Time Summary (s) | Precision Min | Precision Max | Precision Mean | Number of Iterations
Sphere | 2 | 4 | 3.0000 | 36 | 1.0524 × 10−6 | 9.9253 × 10−6 | 4.1243 × 10−6 | 78
Rosenbrock | 4 | 7 | 5.1667 | 62 | 1.0410 × 10−6 | 9.0092 × 10−5 | 1.1146 × 10−5 | 140
Ackley | 8 | 10 | 8.7500 | 105 | 6.7112 × 10−6 | 5.4444 × 10−4 | 1.3135 × 10−4 | 200
Rastrigin | 3 | 9 | 6.3333 | 76 | 4.0304 × 10−8 | 2.1599 × 10−5 | 7.2190 × 10−6 | 152
Griewank | 4 | 10 | 7.3333 | 88 | 9.6462 × 10−7 | 0.0074 | 0.0028 | 200
Schwefel | 1 | 8 | 5.9167 | 71 | 2.6262 × 10−5 | 25.2099 | 2.4680 | 155
Table 4. The performance of the PSO algorithm (maxN = 20,000, goal performance = 1 × 10−5).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Time Summary (s) | Precision Min | Precision Max | Precision Mean | Number of Iterations
Sphere | 1 | 3 | 2.4167 | 29 | 8.0604 × 10−7 | 6.5498 × 10−6 | 3.8992 × 10−6 | 64
Rosenbrock | 5 | 8 | 6.5000 | 78 | 7.9954 × 10−7 | 9.9958 × 10−6 | 5.4557 × 10−6 | 161
Ackley | 7 | 20 | 12.5000 | 150 | 2.7555 × 10−6 | 6.8468 × 10−5 | 1.1611 × 10−5 | 283
Rastrigin | 3 | 9 | 6.0833 | 73 | 1.4402 × 10−6 | 8.0403 × 10−6 | 4.8746 × 10−6 | 149
Griewank | 3 | 87 | 15.9167 | 191 | 1.0553 × 10−6 | 9.4158 × 10−6 | 4.5158 × 10−6 | 212
Schwefel | 716 | 1207 | 1100.4 | 13,205 | 2.5455 × 10−5 | 335.5827 | 181.3207 | 20,000
Table 5. The performance of the Synchronous MPSO algorithm (maxN = 200).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean
Sphere | 9 | 15 | 12.0000 | 1.9026 × 10−11 | 2.7730 × 10−10 | 1.9970 × 10−10
Rosenbrock | 10 | 16 | 12.8333 | 2.6863 × 10−6 | 2.6863 × 10−6 | 2.6863 × 10−6
Ackley | 10 | 16 | 13.0000 | 5.2168 × 10−6 | 5.2168 × 10−6 | 5.2168 × 10−6
Rastrigin | 9 | 16 | 12.5833 | 3.0659 × 10−8 | 3.0659 × 10−8 | 3.0659 × 10−8
Griewank | 9 | 16 | 13.0833 | 6.6870 × 10−7 | 1.4386 × 10−6 | 1.0537 × 10−6
Schwefel | 14 | 15 | 14.0833 | 2.7654 × 10−5 | 2.8024 × 10−5 | 2.7993 × 10−5
Table 6. The performance of the Synchronous MPSO algorithm (maxN = 200, goal performance = 1 × 10−5).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations
Sphere | 2 | 15 | 9.0833 | 6.9061 × 10−6 | 6.9061 × 10−6 | 6.9061 × 10−6 | 58
Rosenbrock | 6 | 11 | 8.4167 | 3.5176 × 10−6 | 3.5176 × 10−6 | 3.5176 × 10−6 | 135
Ackley | 10 | 15 | 12.6667 | 9.2855 × 10−6 | 9.2855 × 10−6 | 9.2855 × 10−6 | 183
Rastrigin | 7 | 12 | 9.3333 | 3.5689 × 10−6 | 9.9922 × 10−6 | 4.6394 × 10−6 | 145
Griewank | 4 | 11 | 7.7500 | 1.8658 × 10−6 | 1.8658 × 10−6 | 1.8658 × 10−6 | 80
Schwefel | 5 | 11 | 8.3333 | 5.0415 × 10−5 | 5.0415 × 10−5 | 5.0415 × 10−5 | 132
Table 7. The performance of the Synchronous MPSO algorithm (maxN = 20,000, goal performance = 1 × 10−5).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations
Sphere | 3 | 11 | 6.8333 | 3.3261 × 10−6 | 3.3261 × 10−6 | 3.3261 × 10−6 | 67
Rosenbrock | 9 | 14 | 11.4167 | 2.3513 × 10−6 | 2.3513 × 10−6 | 2.3513 × 10−6 | 173
Ackley | 12 | 18 | 15.0000 | 8.9925 × 10−6 | 9.8653 × 10−6 | 9.6471 × 10−6 | 243
Rastrigin | 10 | 15 | 12.4167 | 1.2856 × 10−6 | 1.2856 × 10−6 | 1.2856 × 10−6 | 169
Griewank | 9 | 15 | 12.0000 | 3.9471 × 10−6 | 6.3238 × 10−6 | 6.1257 × 10−6 | 171
Schwefel | 9 | 13 | 10.9167 | 7.3792 × 10−5 | 7.3792 × 10−5 | 7.3792 × 10−5 | 196
Table 8. The performance of the Asynchronous MPSO algorithm (maxN = 200).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean
Sphere | 8 | 11 | 9.0000 | 1.7644 × 10−10 | 5.2181 × 10−8 | 1.7286 × 10−8
Rosenbrock | 7 | 12 | 8.8333 | 2.3681 × 10−5 | 0.8435 | 0.0959
Ackley | 7 | 12 | 9.2500 | 1.4856 × 10−5 | 5.5352 × 10−4 | 1.2233 × 10−4
Rastrigin | 8 | 13 | 9.2500 | 1.3918 × 10−7 | 0.0171 | 0.0021
Griewank | 8 | 12 | 9.3333 | 3.1917 × 10−4 | 0.0224 | 0.0153
Schwefel | 9 | 14 | 11.7500 | 1.5578 × 10−4 | 4.2061 | 0.8016
Table 9. The performance of the Asynchronous MPSO algorithm (maxN = 200, goal performance = 1 × 10−5).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations
Sphere | 3 | 7 | 5.8333 | 1.2883 × 10−6 | 7.2896 × 10−6 | 4.4165 × 10−6 | 116
Rosenbrock | 6 | 11 | 8.1667 | 1.4505 × 10−6 | 0.0030 | 3.8550 × 10−4 | 200
Ackley | 7 | 13 | 8.9167 | 5.7938 × 10−6 | 1.7202 × 10−4 | 6.5167 × 10−5 | 200
Rastrigin | 7 | 10 | 8.2500 | 9.1335 × 10−7 | 3.0272 × 10−4 | 8.5521 × 10−5 | 200
Griewank | 8 | 11 | 9.0833 | 0.0074 | 0.0117 | 0.0079 | 200
Schwefel | 8 | 11 | 9.0833 | 6.7772 × 10−5 | 2.9875 | 0.5172 | 200
Table 10. The performance of the Asynchronous MPSO algorithm (maxN = 20,000, goal performance = 1 × 10−5).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations
Sphere | 3 | 9 | 5.8333 | 2.1601 × 10−7 | 8.7867 × 10−6 | 4.7173 × 10−6 | 127
Rosenbrock | 7 | 16 | 10.4167 | 1.1558 × 10−6 | 9.3172 × 10−6 | 4.8171 × 10−6 | 227
Ackley | 11 | 45 | 21.6667 | 1.5834 × 10−6 | 9.8807 × 10−6 | 6.7718 × 10−6 | 407
Rastrigin | 7 | 22 | 10.7500 | 8.9664 × 10−8 | 9.2383 × 10−6 | 4.9633 × 10−6 | 209
Griewank | 11 | 69 | 26.1667 | 1.4646 × 10−7 | 9.2350 × 10−6 | 4.1181 × 10−6 | 452
Schwefel | 15 | 1269 | 327.3333 | 2.8554 × 10−5 | 2.4522 × 10−4 | 1.0859 × 10−4 | 520
Table 11. The performance of the Synchronous MPSO algorithm (Line).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations Min | Iterations Max | Iterations Median
Sphere | 3 | 12 | 7.5000 | 4.2457 × 10−7 | 4.6760 × 10−7 | 4.4609 × 10−7 | 76 | 85 | 81
Rosenbrock | 7 | 11 | 8.4167 | 9.1477 × 10−6 | 9.1477 × 10−6 | 9.1477 × 10−6 | 146 | 155 | 150
Ackley | 13 | 19 | 16.2500 | 8.7556 × 10−6 | 8.7556 × 10−6 | 8.7556 × 10−6 | 262 | 272 | 267
Rastrigin | 7 | 10 | 8.3333 | 5.2039 × 10−6 | 5.2039 × 10−6 | 5.2039 × 10−6 | 128 | 137 | 132
Griewank | 38 | 42 | 40.0000 | 5.2645 × 10−6 | 5.2645 × 10−6 | 5.2645 × 10−6 | 691 | 700 | 695
Schwefel | 9 | 13 | 10.3333 | 6.6853 × 10−5 | 6.6853 × 10−5 | 6.6853 × 10−5 | 167 | 178 | 172.5
Table 12. The performance of the Synchronous MPSO algorithm (Ring).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations Min | Iterations Max | Iterations Median
Sphere | 3 | 12 | 8.5833 | 5.2046 × 10−6 | 5.2046 × 10−6 | 5.2046 × 10−6 | 69 | 75 | 72
Rosenbrock | 8 | 13 | 10.7500 | 6.1511 × 10−6 | 6.1511 × 10−6 | 6.1511 × 10−6 | 186 | 196 | 191
Ackley | 14 | 17 | 15.1667 | 6.3740 × 10−6 | 6.3740 × 10−6 | 6.3740 × 10−6 | 250 | 256 | 253
Rastrigin | 6 | 11 | 8.2500 | 1.3716 × 10−6 | 1.3716 × 10−6 | 1.3716 × 10−6 | 127 | 133 | 130
Griewank | 10 | 14 | 12.5000 | 9.6413 × 10−6 | 9.6413 × 10−6 | 9.6413 × 10−6 | 211 | 217 | 214
Schwefel | 9 | 14 | 11.5000 | 3.5431 × 10−5 | 3.5431 × 10−5 | 3.5431 × 10−5 | 181 | 187 | 184
Table 13. The performance of the Synchronous MPSO algorithm (Full).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations Min | Iterations Max | Iterations Median
Sphere | 5 | 11 | 8.0000 | 6.9881 × 10−6 | 6.9881 × 10−6 | 6.9881 × 10−6 | 65 | 67 | 66
Rosenbrock | 8 | 14 | 11.1667 | 9.9103 × 10−6 | 9.9103 × 10−6 | 9.9103 × 10−6 | 119 | 120 | 120
Ackley | 19 | 26 | 22.5000 | 9.9360 × 10−6 | 9.9360 × 10−6 | 9.9360 × 10−6 | 267 | 268 | 268
Rastrigin | 11 | 16 | 13.5833 | 4.9590 × 10−6 | 4.9590 × 10−6 | 4.9590 × 10−6 | 162 | 163 | 163
Griewank | 10 | 16 | 12.8333 | 1.0800 × 10−7 | 1.0800 × 10−7 | 1.0800 × 10−7 | 150 | 151 | 151
Schwefel | 9 | 14 | 11.1667 | 5.4167 × 10−5 | 5.4167 × 10−5 | 5.4167 × 10−5 | 125 | 127 | 126
Table 14. The performance of the Synchronous MPSO algorithm (Random).
Benchmark | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations Min | Iterations Max | Iterations Median
Sphere | 3 | 8 | 5.5000 | 1.0932 × 10−6 | 1.0932 × 10−6 | 1.0932 × 10−6 | 67 | 71 | 70
Rosenbrock | 6 | 10 | 8.2500 | 7.6528 × 10−6 | 7.6528 × 10−6 | 7.6528 × 10−6 | 134 | 139 | 137
Ackley | 9 | 15 | 11.8333 | 9.4930 × 10−6 | 9.4968 × 10−6 | 9.4958 × 10−6 | 182 | 186 | 185
Rastrigin | 7 | 11 | 8.9167 | 3.5646 × 10−6 | 3.5646 × 10−6 | 3.5646 × 10−6 | 125 | 129 | 127
Griewank | 9 | 14 | 11.0000 | 1.0142 × 10−6 | 1.0142 × 10−6 | 1.0142 × 10−6 | 180 | 183 | 182
Schwefel | 7 | 20 | 15.4167 | 9.5925 × 10−5 | 9.5925 × 10−5 | 9.5925 × 10−5 | 153 | 158 | 156
Table 15. Performance comparison of PSO and MPSO for temperature field reconstruction.
Algorithm | Time Min (s) | Time Max (s) | Time Mean (s) | Precision Min | Precision Max | Precision Mean | Iterations Min | Iterations Max | Iterations Median
PSO | 69 | 181 | 166.4167 | 3.0331 × 10−9 | 1.3406 | 0.3263 | 991 | 2000 | 2000
Synchronous MPSO | 83 | 84 | 83.9167 | 9.5743 × 10−9 | 9.5743 × 10−9 | 9.5743 × 10−9 | 787 | 792 | 790
Asynchronous MPSO | 45 | 151 | 74.1667 | 4.4517 × 10−9 | 9.9995 × 10−9 | 8.1274 × 10−9 | 545 | 1829 | 678