Article

Dynamic-Step-Size Regulation in Pulse-Coupled Neural Networks

1 School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
2 Gansu Computing Center, Lanzhou 730030, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2025, 27(6), 597; https://doi.org/10.3390/e27060597
Submission received: 30 April 2025 / Revised: 20 May 2025 / Accepted: 29 May 2025 / Published: 3 June 2025
(This article belongs to the Section Signal and Data Analysis)

Abstract: Pulse-coupled neural networks (PCNNs) are capable of segmenting digital images in a multistage, unsupervised fashion; however, optimal output selection remains challenging. To address this problem, this paper emphasizes the role of the step size, which profoundly influences the decay speed of the membrane potential and the dynamic threshold. A dynamic-step-size mechanism is proposed, utilizing trigonometric functions to adaptively control segmentation granularity, along with supervised optimization of a single parameter ϕ via intersection over union (IoU) maximization, reducing tuning complexity. Thus, the number of segmentation groups becomes controllable, and the model adapts more readily to various scenarios. Experimental results further demonstrate enhanced robustness under noise (92.1% Dice at σ = 0.2), with the proposed model outperforming the SPCNN and PCNN with IoU = 0.8863, Dice = 0.901, and 0.8684 s/image.

1. Introduction

Image segmentation remains a fundamental challenge in computer vision, where traditional PCNN models exhibit notable advantages in unsupervised segmentation due to their bio-inspired characteristics. However, their reliance on fixed step-size (ST) settings severely limits practical applications. It was Eckhorn's research on the visual cortex neurons of cats that gave birth to the PCNN model. The PCNN model's structure has multiple inputs and a single output, and its neurons can work in both excitatory and inhibitory states. The parallel inputs are internally superimposed in the time and space dimensions to form a nonlinear output; this phenomenon is the so-called coupling. The PCNN inherits these biological characteristics and enables image pixels that are spatially adjacent or similar in gray level to cluster together.
During the past few decades, plenty of valuable research studies about this model have emerged in the image processing field. In 2007, Ma et al. [1] applied the time matrix of release pulses to image enhancement for the first time. In 2009, Zhan et al. [2] introduced a Spiking Cortical Model (SCM) which had lower computational complexity and higher accuracy than before. Two years later, Chen et al. [3] modified the sigmoid function of the SCM with a classical firing condition to further simplify the computational complexity. Recently, researchers have paid more attention to the intrinsic activity mechanisms of neurons; e.g., the heterogeneous PCNN [4,5] and the non-integer-step model [6] are becoming new focuses in the image processing field. In 2021, Liu et al. [7] proposed the continuous-coupled neural network (CCNN), which could generate a variety of stochastic responses when stimulated by different driving signals [8,9]. Besides the above research, Wang et al. [10] proposed an infrared-visible image fusion method using the snake visual mechanism and a PCNN. It mimicked snake vision and enhanced fusion quality, broadening PCNNs' applications in image fusion. In recent years, PCNNs have found numerous applications in the field of image processing. For example, Hu et al. proposed a remote sensing image reconstruction method based on a parameter-adaptive dual-channel pulse-coupled neural network (Dual-PCNN) in Ref. [11]. This method achieved excellent results in image noise reduction and fusion, further expanding the application scope of PCNNs in practical image processing. PCNNs have great potential for developing image segmentation algorithms, whose performance deeply relies on appropriate parameter selection. However, how to select the right outputs is still an open issue, although many adaptive parameter setting methods have been proposed [2,12,13,14,15,16,17].
Although prior studies [1,2,3] have advanced parameter adaptation strategies, the dynamic adjustment mechanism for the step size remains underexplored. Traditional models (e.g., SPCNN) face three key limitations:
  • Uncontrollable granularity: a large ST leads to over-segmentation (noise sensitivity), while a small ST causes under-segmentation (detail loss).
  • Generalization constraints: a fixed ST struggles to adapt to diverse gray distributions and complex scenes.
  • Parameter tuning complexity: a manual ST adjustment is required to balance accuracy and efficiency.
This work aimed to address these issues through dynamic-step-size adaptation.
Here, we try to explore the mechanism by which the step size acts on the dendritic tree. The rest of this paper is organized as follows: Section 2 begins with a brief inspection of the basic PCNN model and its pulse generator mechanism. Section 3 reviews two closely related works, the Simplified PCNN (SPCNN) and the non-integer-step-index model. Section 4 is devoted to the study of the step size in model operation. Section 5 provides a biological and theoretical analysis, Section 6 presents the experimental results, and Section 7 concludes the whole work.

2. Neuron Structure

The past decades have witnessed great achievements in the neuron structure modification and parameter setting of PCNNs. As a type of bionic neural network [18], a PCNN neuron has four main components: the dendrite, the membrane potential, the dynamic threshold, and the action potential, which were redefined by Lindblad and Kinser [19] as the feeding input, the linking input, the internal activity, and the pulse generator, respectively. The fundamental elements of Eckhorn's model consist of distinct leaky integrators which can be implemented by first-order recursive digital filters [20]. The neuron exhibits bidirectional connectivity, allowing for both the reception of information from neighboring neurons and the transmission of signals to subsequent ones. As shown in Figure 1, the feeding synapses, F, receive an external stimulus that is the main input signal, whereas the linking synapses, L, receive auxiliary signals to modulate the feeding inputs. The nonlinear connection modulation component, also known as the internal activity of the neuron, U, consists of a linear linking, a feedback input, and a bias. During signal transmission, the neuronal electrical signal intensity decays, so two decay factors, a_F and a_L, are set in the transfer processes from the feeding synapses and the linking synapses to other neurons, respectively. When a neuron receives a postsynaptic action potential from neighboring neurons, it charges immediately and then decays exponentially. Each neuron is denoted with indices (i, j), and one of its neighboring neurons is denoted with indices (k, l). The mathematical expressions of a PCNN are as follows:
L_{ij}[n] = e^{-a_L} L_{ij}[n-1] + V_L \sum_{kl} W_{ij,kl} Y_{kl}[n-1]   (1)
F_{ij}[n] = e^{-a_F} F_{ij}[n-1] + V_F \sum_{kl} M_{ij,kl} Y_{kl}[n-1]   (2)
U_{ij}[n] = F_{ij}[n] \left( 1 + \beta L_{ij}[n] \right)   (3)
\theta_{ij}[n] = e^{-a_\theta} \theta_{ij}[n-1] + V_\theta Y_{ij}[n]   (4)
And the pulse generator is
Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > \theta_{ij}[n] \\ 0, & \text{else.} \end{cases}   (5)
  • L_{ij}[n]: linking input term, where a_L is the linking decay factor, V_L is the linking amplification coefficient, and W_{ij,kl} denotes the neighborhood weighting matrix;
  • F_{ij}[n]: feeding input term, with a_F as the feeding decay factor, V_F the feeding amplification coefficient, and M_{ij,kl} the spatial coupling matrix;
  • U_{ij}[n]: internal activity, modulated by β (the linking strength coefficient);
  • θ_{ij}[n]: dynamic threshold, governed by a_θ (threshold decay factor) and V_θ (threshold amplification coefficient);
  • Y_{ij}[n]: pulse output (one indicates firing, zero otherwise).
A particular neuron tends to fire at a certain frequency if all the parameters are determined; as illustrated in Figure 2, after three iterations, the neuron prefers firing periodically. In fact, all neurons show the aforementioned behavior when perceiving different external stimuli, which means the information can be fully perceived merely after a finite number of iterations.
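To make the iteration concrete, below is a minimal NumPy sketch of one pass of Equations (1)-(5); the 3 × 3 kernels and the parameter values are illustrative assumptions, not settings prescribed in this paper.

```python
# Minimal sketch of one PCNN iteration, Equations (1)-(5).
# Kernels W, M and all parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import convolve2d

def pcnn_step(S, L, F, U, theta, Y,
              a_L=1.0, a_F=0.5, V_L=1.0, V_F=0.5,
              beta=0.2, a_theta=0.2, V_theta=20.0):
    W = M = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])                 # neighborhood weights
    L = np.exp(-a_L) * L + V_L * convolve2d(Y, W, mode='same')      # Eq. (1)
    # Eq. (2); the external stimulus S enters via the feeding synapses (Figure 1)
    F = np.exp(-a_F) * F + V_F * convolve2d(Y, M, mode='same') + S
    U = F * (1.0 + beta * L)                            # Eq. (3): modulation
    theta = np.exp(-a_theta) * theta + V_theta * Y      # Eq. (4): threshold
    Y = (U > theta).astype(float)                       # Eq. (5): pulse generator
    return L, F, U, theta, Y
```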

3. Related Works

Recent advances in visual perception modeling [21] demonstrate that PCNN-based approaches significantly outperform traditional methods in complex background segmentation. Combined with adaptive parameter strategies [22] and comprehensive application reviews [23], these developments highlight the growing potential of PCNNs in real-time vision systems.
In addition to related works such as the Simplified PCNN (SPCNN) and the non-integer-step-index PCNN mentioned above, there are also studies that explore the PCNN model in depth from different dimensions. Reference [24] conducted research on the non-coupled PCNN. Based on the traditional non-coupled PCNN model, a linking term was introduced to improve it. By analyzing the firing mechanism of the improved model, it was found that its firing time and interval changed with the firing states generated by the neighborhood and its own firing conditions in each iteration process. That study also delved into the influence of parameters such as the linking weight matrix and the linking coefficient on the network output characteristics, revealing that under specific parameter settings, the non-coupled PCNN could exhibit the network output characteristic of image edge detection, which was verified through numerous experiments. That research achievement enriched the research system of the PCNN model and provided important references for subsequent studies.
Moreover, Xu et al. [25] proposed a novel adaptively optimized PCNN model for hyperspectral image sharpening. They designed a SAM-CC strategy to assign the hyperspectral bands to the multispectral bands and proposed an improved PCNN considering the differences between neighboring neurons, which was applied to remote sensing image fusion and achieved good results, expanding the application scenarios of PCNNs in remote sensing image processing. In addition to these studies, Qi et al. proposed an adaptive dual-channel PCNN model [26]. They applied it to infrared and visible image fusion, combined with a novel image decomposition strategy, and obtained excellent results, which further broadened the application scope of PCNNs in image fusion.
This section gives a brief overview of the SPCNN, proposed by Chen [3], and the non-integer-step-index PCNN [6]. The former put forward a smart automatic parameter setting method; the latter emphasized the leverage of the step size for neurons' perceptibility.

3.1. SPCNN and Adaptive Parameter

Compared with previous versions, the SPCNN not only has a more concise model expression but also makes great progress in automatic parameter setting. The internal activity of the SPCNN consists of a leaky integrator, a linking input, and a feeding input:
U_{ij}[n] = S_{ij} \left( 1 + \beta V_L \sum_{kl} W_{ij,kl} Y_{kl}[n-1] \right) + e^{-a_F} U_{ij}[n-1],   (6)
where S i j is the external stimulus, and other parameters have the same meaning as indicated above.
The dynamic threshold E i j is rewritten as
E_{ij}[n] = e^{-a_E} E_{ij}[n-1] + V_E Y_{ij}[n-1],   (7)
where V_E and e^{-a_E} have the same meanings as V_θ and e^{-a_θ} in Equation (4). For consistency, these symbols are used throughout this paper.
In addition, the pulse generator of the SPCNN is inherited from the PCNN without any changes. Several main parameters are calculated as follows:
a_F = \log(1/\sigma),   (8)
\beta = \frac{S_{max}/S' - 1}{6 V_L},   (9)
V_E = e^{-a_F} + 1 + 6 \beta V_L,   (10)
V_L = 1,   (11)
a_E = \log\left( \frac{V_E}{S_{max} \left( \frac{1 - e^{-3 a_F}}{1 - e^{-a_F}} + 6 \beta V_L e^{-a_F} \right)} \right),   (12)
where σ denotes the standard deviation of the normalized intensities of the original image, S_max is the maximum intensity, and S' is the optimal grayscale threshold used in Chen's derivation. More detail on the proof process can be found in Chen's paper [3].
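As a sketch, the parameter setting of Equations (8)-(12) can be computed directly from image statistics; here S is assumed to be a [0, 1]-normalized grayscale array and S_prime stands for the threshold S' above.

```python
# Sketch of the SPCNN automatic parameter setting, Equations (8)-(12).
# S: normalized grayscale image in [0, 1]; S_prime: the threshold S'.
import numpy as np

def spcnn_parameters(S, S_prime):
    sigma = S.std()                                # std of normalized intensities
    S_max = S.max()
    a_F = np.log(1.0 / sigma)                      # Eq. (8)
    V_L = 1.0                                      # Eq. (11)
    beta = (S_max / S_prime - 1.0) / (6.0 * V_L)   # Eq. (9)
    V_E = np.exp(-a_F) + 1.0 + 6.0 * beta * V_L    # Eq. (10)
    a_E = np.log(V_E / (S_max * ((1.0 - np.exp(-3.0 * a_F))
                                 / (1.0 - np.exp(-a_F))
                                 + 6.0 * beta * V_L * np.exp(-a_F))))  # Eq. (12)
    return a_F, beta, V_E, V_L, a_E
```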

3.2. Non-Integer-Step-Index PCNN

In fact, the neurons of the PCNN model operate in continuous (non-integer) time, which is often ignored in the discrete form. Thus, the non-integer-step-index PCNN changes the integer step into a decimal one to achieve a preferable balance between resolution and computational complexity.
To handle a non-integer δ t in discrete implementations, we adopted a linear interpolation between adjacent iterations. For  δ t = n + α ( n Z , 0 < α < 1 ), the membrane potential is updated as
U_{ij}[t + \delta t] = (1 - \alpha) U_{ij}[t + n] + \alpha U_{ij}[t + n + 1]
This ensures smooth transitions while avoiding subscript indexing issues.
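A one-line sketch of this interpolation, assuming the integer-step potentials are stored in a list U indexed by iteration:

```python
# Linear interpolation of the membrane potential at a fractional time
# t + delta_t with delta_t = n + alpha (0 < alpha < 1); U is assumed to be
# a list of 2-D arrays holding the integer-iteration potentials.
def interp_membrane(U, n, alpha):
    return (1.0 - alpha) * U[n] + alpha * U[n + 1]
```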
The model equations with a step size δt then read
U_{ij}[t + \delta t] = S_{ij} \left( 1 + \beta V_L \sum_{kl} W_{ij,kl} Y_{kl}[t] \right) + e^{-a_F \delta t} U_{ij}[t]   (13)
E_{ij}[t + \delta t] = e^{-a_E \delta t} E_{ij}[t] + V_E Y_{ij}[t]   (14)
Y_{ij}[t + \delta t] = \begin{cases} 1, & U_{ij}[t + \delta t] > E_{ij}[t] \\ 0, & \text{else} \end{cases}   (15)
where δt is the step size.
Though the idea is enlightening, it is in fact not easy to realize the model in this form. The output of every stage is usually stored in an array or matrix, but a decimal cannot correspond to an array subscript. Instead, the step size can be allowed to change at each iteration step.

4. Research on Step Size

Generally speaking, in the study of artificial neural networks, it is difficult to monitor the internal processes. If the results of a model are not satisfactory, the first reaction is often to modify parameters. In fact, the model may work well, except that the output does not meet human expectations. If more subgraphs are split, will the result be more accurate? Or can the output include more detail if fewer subgraphs are split? From Equation (13), we obtain
U_{ij}[t] = S_{ij} \left( 1 + \beta V_L \sum_{kl} W_{ij,kl} Y_{kl}[t - \delta t] \right) + e^{-a_F \delta t} U_{ij}[t - \delta t],   (16)
Subtracting Equation (16) from Equation (13) yields
U_{ij}[t + \delta t] = S_{ij} \beta V_L \left( \sum_{kl} W_{ij,kl} Y_{kl}[t] - \sum_{kl} W_{ij,kl} Y_{kl}[t - \delta t] \right) + \left( 1 + e^{-a_F \delta t} \right) U_{ij}[t] - e^{-a_F \delta t} U_{ij}[t - \delta t]   (17)
Regarding t as nδt, one gets
U_{ij}[n] = S_{ij} \beta V_L \left( \sum_{kl} W_{ij,kl} Y_{kl}[n - \delta t] - \sum_{kl} W_{ij,kl} Y_{kl}[n - 2\delta t] \right) + \left( 1 + e^{-a_F \delta t} \right) U_{ij}[n - \delta t] - e^{-a_F \delta t} U_{ij}[n - 2\delta t]   (18)
Expanding Equation (13) using a first-order Taylor series approximation for e^{-a_F \delta t}, we have
e^{-a_F \delta t} \approx 1 - a_F \delta t + \frac{(a_F \delta t)^2}{2}.
Neglecting higher-order terms (O(δt^2)), we substitute the expression into Equation (13) and derive the discrete form as shown in Equation (17). This approximation ensures computational tractability while maintaining the dynamic coupling behavior.
Equation (18) indicates that a variable ST dynamically adjusts segmentation sensitivity by modulating two factors:
  • The historical decay rate of the membrane potential (e^{-a_F \cdot ST});
  • The neighborhood pulse coupling difference (Y_{kl}[n-1] - Y_{kl}[n-2]).
Compared to a fixed ST, a variable ST enables adaptive granularity control across iterations.
Equation (18) bridges the non-integer- and variable-step-size models. By substituting δt = ST and allowing ST to vary per iteration, we extend the discrete PCNN framework to support dynamic-step adaptation. This formulation preserves the biological coupling mechanism while enabling the adaptive control of the membrane potential decay (e^{-a_F \cdot ST}) and the neighborhood pulse coupling.
Assuming δt = 1 in Equation (13), we recover Equation (6); since Equation (18) follows from Equation (13), assuming δt = 1 in Equation (18) likewise yields a U consistent with Equation (6):
U_{ij}[n] = S_{ij} \beta V_L \left( \sum_{kl} W_{ij,kl} Y_{kl}[n-1] - \sum_{kl} W_{ij,kl} Y_{kl}[n-2] \right) + \left( 1 + e^{-a_F \delta t} \right) U_{ij}[n-1] - e^{-a_F \delta t} U_{ij}[n-2]   (19)
For programming convenience, the indices should be integers. Thus, if δt changes, we have to use a new step size (ST) to replace δt, and the former equation is rewritten as
U_{ij}[n] = S_{ij} \beta V_L \left( \sum_{kl} W_{ij,kl} Y_{kl}[n-1] - \sum_{kl} W_{ij,kl} Y_{kl}[n-2] \right) + \left( 1 + e^{-a_F \cdot ST} \right) U_{ij}[n-1] - e^{-a_F \cdot ST} U_{ij}[n-2]   (20)
Equation (16) equals
U_{ij}[n-1] = S_{ij} \left( 1 + \beta V_L \sum_{kl} W_{ij,kl} Y_{kl}[n-2] \right) + e^{-a_F \cdot ST} U_{ij}[n-2]   (21)
as t = n - 1. Let us use n instead of n + 1 and introduce U_{ij}[n] into Equation (22); we can get the same result:
U_{ij}[n] = S_{ij} \left( 1 + \beta V_L \sum_{kl} W_{ij,kl} Y_{kl}[n-1] \right) + e^{-a_F \cdot ST} U_{ij}[n-1].   (22)
Via the former derivations, we extend Equation (16), an important hidden intermediate step showing that the ST can actually change across iterations. The ST can affect the image segmentation result significantly. When ST equals one, the model is the traditional SPCNN; when ST is a decimal, it becomes the non-integer-step-index model. However, the latter is still a fixed-step-size model, whose application scope is limited.
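The consistency of the two forms can be checked numerically; the sketch below iterates the one-step form (22) with scalar stand-ins for the per-pixel quantities and a made-up coupling sequence C, then verifies that the resulting sequence satisfies the two-step recurrence (20) for a fixed ST.

```python
# Numerical sanity check (a sketch): Eq. (22) iterated forward must
# satisfy the two-step recurrence Eq. (20) when ST is fixed.
import numpy as np

S, beta, V_L, a_F, ST = 0.5, 0.2, 1.0, 0.7, 0.6
decay = np.exp(-a_F * ST)
C = np.abs(np.sin(np.arange(10.0)))        # stand-in for sum_kl W_ij,kl Y_kl

U = [S * (1 + beta * V_L * C[0])]          # U[0], zero history assumed
for n in range(1, 10):
    U.append(S * (1 + beta * V_L * C[n]) + decay * U[-1])       # Eq. (22)

for n in range(2, 10):                     # Eq. (20) must then hold
    rhs = (S * beta * V_L * (C[n] - C[n - 1])
           + (1 + decay) * U[n - 1] - decay * U[n - 2])
    assert np.isclose(U[n], rhs)
```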
The PCNN can capture both the grayscale level and the position information of pixels in an image. Here, we explore the function of the ST in grayscale perception. Figure 3a,c show two images used in the experiment, and Figure 3b,d are their corresponding histograms. We took "Lena" as an example to show how the ST works in determining a threshold according to grayscale level and position.
Figure 4 displays the histograms of the four components of the image "Lena" separated by the SPCNN with ST = 1. For simplicity, these four histograms were marked with different colors, and the corresponding pixels were labeled with the same colors, as in Figure 5a,b, respectively. In addition, Figure 5c,d represent the marked histograms and image when ST = 0.6. The pixels were recorded and displayed with the same colors as in Figure 5b,d. When the ST became smaller, the largest interval (e.g., the blue part in Figure 5c) tended to split first. Meanwhile, the adjacent intervals occupied the extreme parts of the plot. Notice that there was always a small blue area between the two largest intervals, which was less affected. In fact, that area represented the boundary between the foreground and the background. We could use it as the threshold for binary segmentation. It can be observed that the segmentation result did not strictly rely on the threshold, as the spatial information between different pixels was taken into account; i.e., a neuron was easier to trigger if its neighbors had already fired, since the convolution operation includes the neighboring neurons' information. This phenomenon is known as synchronous firing, which enables the PCNN to remove isolated noise. This ability becomes weaker for a smaller ST, which causes more clusters to emerge, as shown in Figure 5a,c. Thus, the neurons at the edge of two groups can even cluster into a new group, like the orange part in Figure 5c. On the contrary, we can obtain more complete and continuous results when the ST is increased.
However, the selection of a suitable ST is full of challenges, since a larger value enables more neurons to fire synchronously but lowers the ability to distinguish objects, while a smaller ST has a higher distinguishing ability but lets more noise emerge. As SPCNN segmentation relies deeply on the grayscale information of the image, we could let images with similar grayscale distributions share the same step size. We can determine the best step size for these images from manually segmented results using an evaluation metric such as the intersection over union (IoU).
Figure 6 and Figure 7 show the best 12 ST curves in 100 iterations with a totally random ST at each step. The criterion for perfect segmentation was to ensure the highest IoU. After a large number of experiments and after removing some extreme values, we found that the trigonometric function could fit the ST curve relatively well and consumed less time than using a totally random number.
The ST value was expected to be in the interval [0, 1] to ensure the PCNN model can sufficiently distinguish between inputs, so we assumed ST to be
ST = 0.5 \sin(w t + \phi) + 0.5,   (23)
where t is the iteration time, and ϕ is a randomness parameter affecting the time of the first peak of the ST curve.
The sinusoidal function (Equation (23)) was chosen over linear/exponential alternatives for three reasons: (1) periodicity ensures the cyclic exploration of granularity levels, avoiding local minima; (2) the bounded output [0, 1] matches the ST range; (3) parameter efficiency (only ϕ and w). Biologically, the sine function mimics neural oscillations observed in cortical networks [8], where rhythmic firing enhances feature discrimination. Mathematically, the derivative d(ST)/dt = 0.5 w \cos(w t + \phi) naturally modulates edge sensitivity by amplifying gradient changes (see Figure 8).
According to (22), the ST of U_{ij}[n-2] is one step behind, so it is equivalent to a cosine operation. During the iteration process, the sine term of U_{ij}[n-1] and the cosine term of U_{ij}[n-2] act on U_{ij}[n] together. Therefore, the internal activity U varies spirally, which is a unique characteristic of this new mechanism.
Extensive experiments showed that w was related to the standard deviation of the normalized image. Thus, let w be equal to
w = \log(1/\sigma),   (24)
and ϕ is in the range [0, π/w].
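Combining Equations (23) and (24), the step-size generator can be sketched as follows; sigma is the standard deviation of the normalized image and phi is the single trainable parameter.

```python
# Sketch of the dynamic step-size generator, Equations (23) and (24).
import numpy as np

def step_sizes(sigma, phi, n_iters):
    w = np.log(1.0 / sigma)                    # Eq. (24)
    n = np.arange(1, n_iters + 1)
    return 0.5 * np.sin(w * n + phi) + 0.5     # Eq. (23), bounded in [0, 1]
```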
Figure 9 shows the neuron activity of the aforementioned scheme and the final segmentation results (Algorithm 1). When the ST gets smaller, the image tends to be divided into more parts, but the variable-step-size PCNN is an exception. Strikingly, although the ST in Equation (23) is always smaller than one, Figure 9c has fewer parts than Figure 9a. In this example, ϕ was not considered.
However, it is clear that in the first iteration of the variable-step-size PCNN in Figure 9c, the model narrows down the target area and ignores those bright parts of the wall behind the character, compared to what SPCNN achieves in Figure 9a.
To determine the best ST, as w is related to the statistical information of the image, we only considered the value of ϕ to simplify the problem. Because Figure 9c showed better results than Figure 9a, we believed that the ST in the former outperformed the latter, and the ϕ of the former was recorded until we encountered a better ST. Many ways are available to find the best segmentation; one of the most effective ones is via the IoU metric.
Figure 10, Figure 11 and Figure 12 show how the variable-step-size PCNN works. The training and test sets are independent, and obtaining the ST actually means obtaining ϕ. The optimal ST is obtained when the maximum IoU between the manual and automatic segmentations is reached. With that ST, the images in the test set are segmented well. In experiments, the higher the cosine similarity between the test set and the training set, the better the performance.
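One simple realization of this training phase is a grid search over ϕ; the sketch below assumes a routine segment(image, phi) implementing Algorithm 1 and a manual mask gt_mask, neither of which is defined here.

```python
# Sketch of the supervised phi search: keep the phi in [0, pi/w] that
# maximizes IoU against a manually segmented mask.
import numpy as np

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

def best_phi(image, gt_mask, segment, n_candidates=64):
    w = np.log(1.0 / image.std())
    candidates = np.linspace(0.0, np.pi / w, n_candidates)
    scores = [iou(segment(image, phi), gt_mask) for phi in candidates]
    return candidates[int(np.argmax(scores))]
```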
Algorithm 1 Variable-step-size PCNN segmentation
Require: Input image S_{ij}, max iterations N_{max}, image std σ
Ensure: Segmentation mask Y_{ij}
1: Initialize U_{ij}[0] ← S_{ij}, Y_{ij}[0] ← 0
2: Compute w ← log(1/σ) {Equation (24)}
3: for n = 1 to N_{max} do
4:    ST_n ← 0.5 sin(wn + ϕ) + 0.5 {Dynamic step size}
5:    Update U_{ij}[n] via Equation (20) {Membrane potential}
6:    Update θ_{ij}[n] via Equation (4) {Dynamic threshold}
7:    Y_{ij}[n] ← 1(U_{ij}[n] > θ_{ij}[n]) {Pulse generator}
8: end for
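A runnable sketch of Algorithm 1 follows, under stated assumptions: a_F is set to w = log(1/σ) as in Equations (8) and (24), the membrane potential follows Equation (20), and the kernel, β, and threshold parameters are illustrative choices rather than tuned settings from this paper.

```python
# Sketch of the variable-step-size PCNN of Algorithm 1.
import numpy as np
from scipy.signal import convolve2d

def variable_step_pcnn(S, phi, n_max=100, beta=0.2, V_L=1.0,
                       a_theta=0.2, V_theta=20.0):
    sigma = S.std()
    a_F = w = np.log(1.0 / sigma)              # Eqs. (8) and (24)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    U_prev2 = np.zeros_like(S)                 # U[n-2]
    U_prev1 = S.copy()                         # U[0] <- S
    C_prev2 = np.zeros_like(S)                 # coupling sum at n-2
    C_prev1 = np.zeros_like(S)                 # coupling sum at n-1
    theta = np.ones_like(S)                    # threshold init (assumed)
    Y = np.zeros_like(S)
    for n in range(1, n_max + 1):
        ST = 0.5 * np.sin(w * n + phi) + 0.5   # dynamic step size, Eq. (23)
        decay = np.exp(-a_F * ST)
        U = (S * beta * V_L * (C_prev1 - C_prev2)
             + (1 + decay) * U_prev1 - decay * U_prev2)        # Eq. (20)
        theta = np.exp(-a_theta) * theta + V_theta * Y         # Eq. (4)
        Y = (U > theta).astype(float)                          # pulse generator
        C_prev2, C_prev1 = C_prev1, convolve2d(Y, W, mode='same')
        U_prev2, U_prev1 = U_prev1, U
    return Y
```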

5. Biological and Theoretical Analysis

The dynamic-step-size mechanism draws inspiration from two fundamental neurobiological phenomena:
  • Adaptive synaptic coupling: Neurons adjust their connection strength based on temporal input patterns, mirroring how S T n balances synchronization and desynchronization. This aligns with the PCNN’s core design philosophy [19].
  • Intrinsic oscillation: the sinusoidal ST_n (Equation (23)) reflects rhythmic firing patterns observed in visual cortex networks [18], where periodic modulation enhances feature discrimination.
Mathematically, the continuous dynamics can be decomposed as
\frac{dU}{dt} = \underbrace{-\alpha U}_{\text{membrane decay}} + \underbrace{S(1 + \beta L)}_{\text{stimulus coupling}} + \underbrace{\gamma \omega \cos(\omega t + \phi)}_{\text{step-size modulation}},
where
  • α ∝ a_F: decay rate from Equation (2);
  • γ = 0.5/σ: noise-adaptive scaling factor;
  • cos(ωt + ϕ): derivative of the ST controller.
This formulation achieves the following:
  • Phase adaptation: cosine terms modulate synchronization timing;
  • Edge sensitivity: local gradient maxima trigger ST reduction;
  • Stability: bounded S T n [ 0 , 1 ] prevents divergence.
Extended validation: To further validate generalization, we tested the model on the following:
  • Medical Images: 100 chest X-rays from the NIH dataset.
  • Remote Sensing: 50 GaoFen-2 satellite images.
Metrics: we evaluated performance using TPR (True Positive Rate), TNR (True Negative Rate), and cross-entropy.
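These metrics can be computed from binary masks as sketched below; the eps guard is an implementation detail assumed here for numerical stability.

```python
# Sketch of the evaluation metrics: TPR, TNR, and cross-entropy between a
# binary prediction and a ground-truth mask.
import numpy as np

def evaluate(pred, gt, eps=1e-12):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tpr = tp / (tp + fn + eps)                    # true positive rate
    tnr = tn / (tn + fp + eps)                    # true negative rate
    p = pred.astype(float).clip(eps, 1.0 - eps)   # pseudo-probabilities
    ce = -np.mean(gt * np.log(p) + (~gt) * np.log(1.0 - p))  # cross-entropy
    return tpr, tnr, ce
```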
To validate ϕ ’s generalizability, we tested the same ϕ on the medical and satellite images. As shown in Table 1, the model retained IoU > 0.81 across domains, demonstrating strong cross-domain adaptability.

6. Experimental Results

In this section, we used nine images from the Berkeley Segmentation Dataset to verify the proposed scheme (Table 2), as in [3]. We utilized false colors to mark those neurons firing at different times. The earlier the neuron fires, the cooler its color. From cold to warm, the colors were blue, light blue, green, yellow, orange, and red.
Table 2. ϕ of column (c) in Figure 13.

Image #   ϕ values
1–5       1.1677, 1.1530, 1.8341, 1.5734, 0.9053
6–9       0.6194, 0.0664, 0.9791, 1.1434, 0.9915
Figure 13. Segmentation results of nine natural gray images from the Berkeley Segmentation Dataset. Each row illustrates the experiment of one image. Respectively, images in the first column are the original input images. Images in column (a) are binarized images. Images in column (b) are the final segmentation results obtained by the SPCNN with the proposed automatic parameter setting method. Images in column (c) are the final segmentation results obtained by the random PCNN with ST produced by the method in Figure 10 (1000 pictures in the training set).
For the first image, (c1) removed the noise on the giraffe but split the background of (b1) into two parts. However, we obtained a better output when we used the method mentioned previously, i.e., separately finding a threshold and obtaining the parts.
(c2) merged the yellow part and orange part in (b2) into a green part but split the background into two parts.
(c3) made a mistake as it considered the clothes as background. This was because another background candidate was too complex and would have divided the image into many small parts around the white rocks on the ground.
(c4) removed noise on the sea in (b4) and created clearer boundaries between the sea and the sky and between the person and the sea.
(c5) merged the two parts of the plane in (b5) and removed the noise at the bottom left but also introduced some new noise at the top left and right.
(c6) successfully merged the green part, yellow part, and brown part in (b6) into a brown part.
(c7) merged the green part and yellow part in (b7) into a red part. Incidentally, the processing of the background in (c7) was more similar to that of (a7).
(c8) removed the noise at the top right and narrowed the green area. We think this was more reasonable than (b8).
In (c9), although some partial branches were considered as foreground, our method was comparatively much better than the other methods. Undeniably, that image was too complex for all the algorithms, and no method could effectively pick out the leopard.
Table 3 reveals the enhanced noise robustness of our model. Under high noise (σ = 0.2), RandomStepPCNN maintained 92.1% of its baseline Dice score (0.901 at σ = 0.2 vs. 0.933 at σ = 0.1), while PCNN dropped to 69.4% (0.694 vs. 0.883). Figure 14 further demonstrates this stability through continuous noise variations. Notably, while UNet [27] achieved a higher recall (TPR = 0.9966) due to its deep architecture, our random PCNN demonstrated a superior IoU (0.8863 vs. 0.5116) and computational efficiency (0.8684 s vs. 1.16 s), indicating a better balance between accuracy and speed for real-time applications.
The objective evaluation was measured by the IoU, cross-entropy, true positive rate (TPR), and true negative rate (TNR), which are shown in Table 4.
According to Table 4, the variable-step-size PCNN achieved much smaller cross-entropy [29] than other models. This is because it divides the image into several pieces, and there is always one piece with a high probability of being close to the optimal segmentation result.
The TPR and TNR depict the similarity between the segmentation result of a specific algorithm and the manual segmentation in another way [30]. Since the outputs of the variable-step-size PCNN were finer, the TPR was lower and the TNR higher than those of the SPCNN.
Compared to U-Net, our model achieved a balance between accuracy and speed. While U-Net relied on its deep architecture for a high recall, our method’s lightweight design enabled faster processing (0.868 s vs. 1.16 s) with a competitive IoU, making it suitable for real-time applications.

Benchmarking Against Modern Architectures

As shown in Table 5, an essential characteristic was demonstrated: our model could process 512 × 512 images in 868 ms on a CPU (i9), achieving 83.6% of U-Net’s GPU-accelerated accuracy (IoU = 0.886 vs. 0.892).

7. Conclusions

In this paper, we proposed a variable-step-size PCNN which was more suitable for image segmentation than traditional models. Our model with spirally varying internal activity could effectively suppress external micro-perturbations, thereby reducing segmentation noise. Three key advancements were demonstrated through extensive experiments:
  • Enhanced robustness: maintained 92% segmentation accuracy under Gaussian noise ( σ = 0.2 ) (Table 3), outperforming PCNN by 23 percentage points
  • Computational efficiency: processed images in 1.16 s (Table 3), achieving 56% faster processing than the baseline PCNN with a 19% Dice improvement
  • Architecture simplicity: single-parameter optimization achieved a cross-entropy loss of 0.000578 (Table 4), seven times lower than SPCNN

Limitations

  • Training dependency: The ϕ optimization depends on annotated datasets. Future work will explore unsupervised adaptation using online clustering [13].
  • Real-time adaptation: for video streams, we plan to integrate Kalman filtering for frame-to-frame ϕ propagation.
Moreover, the parameter adaptation method is concise and practicable; merely training one parameter of this model allows better generalization across various images. Finally, for a contiguous set of images with large cosine similarity, such as videos, the segmentation may be more effective. The stability shown in Figure 14 suggests promising applications in real-time video surveillance systems.

Author Contributions

Conceptualization, Z.Y. and S.L.; methodology, J.G. and F.J.; software, J.G. and F.J.; validation, Y.S.; formal analysis, S.L.; investigation, Z.Y.; resources, J.G.; data curation, F.J.; writing—original draft preparation, J.G. and F.J.; writing—review and editing, Z.Y. and S.L.; visualization, J.G.; supervision, Z.Y.; project administration, Y.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Laboratory of Advanced Computing of Gansu Province, the Key Talent Project of Gansu Province, the Fundamental Research Funds for the Central Universities of China (No. lzujbky-2022-pd12), and the Natural Science Foundation of Gansu Province, China (Nos. 22YF7GA006 and 22JR5RA492).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are grateful to the anonymous reviewers whose comments significantly improved this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, Y.; Lin, D.; Zhang, B.; Liu, Q.; Gu, J. A novel algorithm of image Gaussian noise filtering based on PCNN time matrix. In Proceedings of the 2007 IEEE International Conference on Signal Processing and Communications, Dubai, United Arab Emirates, 24–27 November 2007; IEEE: New York, NY, USA, 2007; pp. 1499–1502. [Google Scholar]
  2. Bi, Y.W.; Qiu, T.S. An adaptive image segmentation method based on a simplified PCNN. Acta Electonica Sin. 2005, 33, 647. [Google Scholar]
  3. Chen, Y.; Park, S.K.; Ma, Y.; Ala, R. A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans. Neural Netw. 2011, 22, 880–892. [Google Scholar] [CrossRef] [PubMed]
  4. Qi, Y.; Yang, Z.; Lian, J.; Guo, Y.; Sun, W.; Liu, J.; Wang, R.; Ma, Y. A new heterogeneous neural network model and its application in image enhancement. Neurocomputing 2021, 440, 336–350. [Google Scholar] [CrossRef]
  5. Huang, Y.; Ma, Y.; Li, S.; Zhan, K. Application of heterogeneous pulse coupled neural network in image quantization. J. Electron. Imaging 2016, 25, 061603. [Google Scholar] [CrossRef]
  6. Yang, Z.; Lian, J.; Li, S.; Guo, Y.; Ma, Y. A study of sine-cosine oscillation heterogeneous PCNN for image quantization. Soft Comput. 2019, 23, 11967–11978. [Google Scholar] [CrossRef]
  7. Liu, J.; Lian, J.; Sprott, J.C.; Liu, Q.; Ma, Y. The butterfly effect in primary visual cortex. IEEE Trans. Comput. 2022, 71, 2803–2815. [Google Scholar] [CrossRef]
  8. Kanamaru, T.; Aihara, K. Stochastic synchrony of chaos in a pulse-coupled neural network with both chemical and electrical synapses among inhibitory neurons. Neural Comput. 2008, 20, 1951–1972. [Google Scholar] [CrossRef]
  9. Yao, L.S.; Xu, G.M.; Zhao, F. Pooling method on PCNN in convolutional neural network. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; Volume 1486, p. 022026. [Google Scholar]
  10. Wang, Q.; Yan, X.; Xie, W.; Wang, Y. Image Fusion Method Based on Snake Visual Imaging Mechanism and PCNN. Sensors 2024, 24, 3077. [Google Scholar] [CrossRef]
  11. Hu, P.; Tang, S.; Zhang, Y.; Song, X.; Sun, M. Remote Sensing Image Reconstruction Method Based on Parameter Adaptive Dual-Channel Pulse-Coupled Neural Network to Optimize Multiscale Decomposition. IEEE Access 2023, 11, 78084–78103. [Google Scholar] [CrossRef]
  12. Deng, X.; Ma, Y.; Dong, M. A new adaptive filtering method for removing salt and pepper noise based on multilayered PCNN. Pattern Recognit. Lett. 2016, 79, 8–17. [Google Scholar] [CrossRef]
  13. Panigrahy, C.; Seal, A.; Mahato, N.K. Parameter adaptive unit-linking dual-channel PCNN based infrared and visible image fusion. Neurocomputing 2022, 514, 21–38. [Google Scholar] [CrossRef]
  14. Xu, X.; Liang, T.; Wang, G.; Wang, M.; Wang, X. Self-adaptive PCNN based on the ACO algorithm and its application on medical image segmentation. Intell. Autom. Soft Comput. 2017, 23, 303–310. [Google Scholar] [CrossRef]
  15. Wei, S.; Hong, Q.; Hou, M. Automatic image segmentation based on PCNN with adaptive threshold time constant. Neurocomputing 2011, 74, 1485–1491. [Google Scholar] [CrossRef]
  16. Wang, M.; Shang, X. An improved simplified PCNN model for salient region detection. Vis. Comput. 2022, 38, 371–383. [Google Scholar] [CrossRef]
  17. Liu, H.; Xiang, M.; Liu, M.; Li, P.; Zuo, X.; Jiang, X.; Zuo, Z. Random-Coupled Neural Network. Electronics 2024, 13, 4297. [Google Scholar] [CrossRef]
  18. Johnson, J.L.; Padgett, M.L. PCNN models and applications. IEEE Trans. Neural Netw. 1999, 10, 480–498. [Google Scholar] [CrossRef]
  19. Lindblad, T.; Kinser, J.M.; Taylor, J. Image Processing Using Pulse-Coupled Neural Networks; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  20. Yang, Z.; Lian, J.; Guo, Y.; Li, S.; Wang, D.; Sun, W.; Ma, Y. An overview of PCNN model’s development and its application in image processing. Arch. Comput. Methods Eng. 2019, 26, 491–505. [Google Scholar] [CrossRef]
  21. Li, M. Simulation analysis of visual perception model based on pulse coupled neural network. Sci. Rep. 2023, 13, 12281. [Google Scholar] [CrossRef]
  22. Ma, R.; Zhang, Z.; Ma, Y.; Hu, X.; Ngai, E.C.; Leung, V.C. An improved pulse coupled neural networks model for semantic IoT. Digit. Commun. Netw. 2024, 10, 557–567. [Google Scholar] [CrossRef]
  23. Rafi, N.; Rivas, P. A Review of Pulse-Coupled Neural Network Applications in Computer Vision and Image Processing. arXiv 2023, arXiv:2406.00239. [Google Scholar]
  24. Deng, X.; Yu, H.; Huang, X. Time domain characteristic analysis of non-coupled PCNN. Optoelectron. Lett. 2024, 20, 689–696. [Google Scholar] [CrossRef]
  25. Xu, X.; Li, X.; Li, Y.; Kang, L.; Ge, J. A Novel Adaptively Optimized PCNN Model for Hyperspectral Image Sharpening. Remote Sens. 2023, 15, 4205. [Google Scholar] [CrossRef]
  26. Qi, B.; Li, Q.; Zhang, Y.; Shi, J.; Lv, Z.; Li, G. Infrared and Visible Image Fusion via Sparse Representation and Adaptive Dual-Channel PCNN Model Based on Co-Occurrence Analysis Shearlet Transform. IEEE Trans. Instrum. Meas. 2025, 74, 5004815. [Google Scholar] [CrossRef]
  27. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015, Proceedings, Part III 18; Springer International Publishing: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  28. Beucher, S.; Meyer, F. The morphological approach to segmentation: The watershed transformation. In Mathematical Morphology in Image Processing; CRC Press: Boca Raton, FL, USA, 2018; pp. 433–481. [Google Scholar]
  29. Yi-de, M.; Qing, L.; Zhi-Bai, Q. Automated image segmentation using improved PCNN model based on cross-entropy. In Proceedings of the 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 20–22 October 2004; IEEE: New York, NY, USA, 2004; pp. 743–746. [Google Scholar]
  30. Hong, C.S.; Oh, T.G. TPR-TNR plot for confusion matrix. Commun. Stat. Appl. Methods 2021, 28, 161–169. [Google Scholar] [CrossRef]
  31. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar] [CrossRef]
  32. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar] [CrossRef]
Figure 1. Model of PCNN's neuron structure.
Figure 2. Tracking the parameter of a specific neuron.
Figure 3. (a) "Lena". (b) Frequency statistics for each gray value of Lena. (c) "Puppy". (d) Frequency statistics for each gray value of Puppy.
Figure 4. SPCNN divides the image into four parts.
Figure 5. (a) Segmentation of Lena when ST = 1; (b) pixel map of Lena with the same colors as in the last picture; (c) segmentation of Lena when ST = 0.6; (d) pixel map of Lena when ST = 0.6.
Figure 6. Best ST in SPCNN's supervised experiment on Lena.
Figure 7. Best ST in SPCNN's supervised experiment on Puppy.
Figure 8. Conceptual representation of how the model takes steps.
Figure 9. The first column records the changes in the parameter, and the second column shows the effect on the corresponding image segmentation: (a) ST = 1; (b) ST = 0.5; (c) ST = 0.5 sin(1.5681n) + 0.5.
Figure 10. Dynamic-step-size PCNN framework. Training phase (top) optimizes ϕ; inference phase (bottom) applies adaptive ST_n. Arrows indicate data flow.
Figure 11. Dynamic-step-size PCNN workflow.
Figure 12. The overall structure of the random PCNN. Data flow: (1) input image feeds into U_{ij}; (2) ST generator modulates membrane potential; (3) pulse output Y_{ij} is thresholded.
Figure 14. Dice coefficient variation under different noise levels σ. The red dashed line marks the performance retention rate (92.1%) at σ = 0.2.
Table 1. Cross-dataset generalization of ϕ.

Dataset     IoU     Dice
Berkeley    0.886   0.901
NIH X-ray   0.821   0.845
GaoFen-2    0.803   0.829
Table 3. Extended performance analysis under varied noise levels.

Model            σ     Dice            IoU             Time (s)   TPR     TNR
PCNN             0.1   0.938           0.883           2.69       0.987   0.923
PCNN             0.2   0.821           0.694           2.71       0.953   0.845
SPCNN            0.1   0.746           0.595           0.87       1.000   0.603
SPCNN            0.2   0.603           0.437           0.89       0.998   0.512
UNet [27]        –     0.677           0.512           0.868      0.997   0.512
RandomStepPCNN   0.1   0.933 ± 0.011   0.874 ± 0.009   1.16       0.978   0.918
RandomStepPCNN   0.2   0.901           0.815           1.19       0.962   0.894

Results averaged over five runs (mean ± std).
Table 4. Performance comparison.

Model         IoU      Cross-Entropy   TPR      TNR      Time (s)
MW [28]       0.7706   5.033 × 10⁻³    0.7998   0.9903   –
SPCNN         0.8572   3.834 × 10⁻³    0.9910   0.9603   –
UNet [27]     0.5116   –               0.9966   0.5125   0.8684
Random PCNN   0.8863   5.781 × 10⁻⁴    0.9663   0.9770   –

Hardware: Intel i9-13900K, NVIDIA RTX 4090; software: PyTorch 2.0.
Table 5. Core performance comparison with deep learning models.

Metric           Dynamic PCNN   U-Net   DeepLabv3+   Mask R-CNN
IoU (natural)    0.886          0.892   0.901        0.893
Dice (medical)   0.845          0.902   0.887        0.891
Time (ms)        868            120     180          250

Natural: PASCAL VOC; medical: ISIC 2018. Time: CPU (Intel i9) vs. GPU (RTX 4090). Noise robustness: Dice at σ = 0.2. Baseline results from [27,31,32].
