Fast Segmentation Algorithm for Cystoid Macular Edema Based on Omnidirectional Wave Operator

Abstract: Optical coherence tomography (OCT) is widely used in the field of ophthalmic imaging. The existing technology cannot automatically extract the contour of cystoid macular edema (CME) in OCT images and can only evaluate the degree of lesions by detecting the thickness of the retina. To solve this problem, this paper proposes an automatic segmentation algorithm that can segment the CME in OCT images of the fundus quickly and accurately. This method first constructs the working environment by denoising and contrast stretching, then extracts the region of interest (ROI) containing the CME according to the average gray distribution of the image, and then uses the omnidirectional wave operator to perform multidirectional automatic segmentation. Finally, the fused segmentation results are screened by a gray threshold and a position feature, and the contour extraction of the CME is realized. The segmentation results of the proposed method on data set images are compared with those obtained by the manual marking of experts. The precision, recall, Dice index, and F1-score are 88.8%, 75.0%, 81.1%, and 81.3%, respectively, with an average processing time of 1.2 s. This algorithm is suitable for general CME image segmentation and has high robustness and segmentation accuracy.


Introduction
Optical coherence tomography (OCT) [1] is an imaging technology based on low-coherence interference. It can obtain tomograms of biological tissues by measuring the interference signals of the reflected light and the backscattered light; the reflected light comes from the reference arm and the backscattered light comes from different depths inside the biological tissue [2]. Compared with retinal imaging methods such as fundus cameras, ultrasound detection, and fundus angiography, OCT is safe, fast, high-precision, and non-invasive, and it has been widely used in the field of ophthalmology [3][4][5]. OCT technology can obtain two-dimensional tomographic imaging of the fundus, making it an important tool for the evaluation and diagnosis of ophthalmic diseases.
The macula is located in the optical center of the human eye, and the health of the macula is closely related to human vision. Macular edema is caused by the destruction of a retinal barrier: with an increase in vascular permeability, vascular fluid or protein deposits infiltrate between the retinal layers. This leads to retinal swelling, resulting in decreased vision and even irreversible blindness [6,7]. Macular edema with an obvious cystic structure is called cystoid macular edema (CME). Early detection and treatment of macular edema are very important for preventing permanent visual impairment. However, clinical commercial instruments can only assess the degree of lesions by detecting the thickness of the retina and thus can neither fully extract the CME nor obtain accurate information about the lesion. In addition, the manual measurement of the retinal cyst area in the OCT image by doctors is very time-consuming, and lengthy training is needed to accumulate experience [8]. Therefore, automatic segmentation technology can realize the segmentation of CME and help promote the development of ophthalmology and clinical medicine.

Appl. Sci. 2021, 11, x FOR PEER REVIEW

Methods
OCT is an imaging technology using the principle of low-coherence interference. OCT technology measures the interference signals of reflected light from the reference arm and backscattered light from the sample arm in the Michelson interference light path to obtain the tomographic image of the sample. For retinal OCT images degraded by speckle noise, the wave algorithm has outstanding layer segmentation ability, so it can also extract CME boundaries from images containing CME. However, due to its unidirectional characteristics, the wave algorithm can only extract local contours. To solve this problem, this paper describes an omnidirectional wave operator, which endows the wave algorithm with multidirectional capability through a direction adjustment function, so that it can achieve complete segmentation of objects with closed curves.

Algorithm Overview
When the OCT system acquires fundus images, scattered light waves with random phases are coherently superimposed to produce speckle noise. Speckle noise appears as an irregular speckle pattern, which is randomly distributed in the image [18]. When directly processing the pixels in the image, speckle greatly affects the segmentation accuracy; for contour extraction algorithms that use gradient information, speckle will cause wrong segmentation. To reduce the influence of speckle noise, the first step of the omnidirectional wave operator in this paper is to construct a working environment: through denoising and contrast stretching, the speckle noise is weakened and the contrast between the target and the background area is enhanced. The second step of the algorithm is image segmentation. As CME is located at a specific level of the retina, determining the ROI avoids segmentation results from other areas and greatly increases the calculation speed; in this step, the algorithm first performs ROI division and then uses the omnidirectional wave operator to segment the CME. The third step of the algorithm is contour integration. After the results of the four directions are aggregated, the connected domains are processed to obtain the final CME contour. The algorithm flow chart is shown in Figure 1.



Construct the Working Environment
The basic principle of the OCT imaging system is low-coherence interference, and coherent measurement will inevitably bring coherent noise. The optical characteristics and movement of the object to be measured, the coherence of the light source, the multiple scattering and distortion of the transmitted beam, and the size of the detector aperture all affect speckle noise. Therefore, speckle noise is caused by interference and is difficult to remove [19]. Speckle noise is randomly distributed in the image, which affects the imaging quality of the OCT system, so denoising is necessary in the processing and application of OCT images [20,21]. The wave algorithm uses the gray-level information of the retinal layers in OCT images as prior knowledge, namely that the gray value at the boundary of the target area changes gradually. Therefore, the algorithm in this paper applies Gaussian blur, which constructs this working condition.
Due to the weak backscattered light of the fundus structure and low optical power, the contrast between tissue area and background area is not high, which affects the accuracy of image segmentation results. Using gamma transformation can stretch the contrast between the tissue and the background. The contrast stretch should be moderate and not make boundaries excessively sharp.
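As a concrete illustration, the environment-construction step (Gaussian blur followed by gamma stretching) can be sketched in a few lines of NumPy. The kernel radius, sigma, and gamma values below are illustrative assumptions; the paper does not report its exact parameters:

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def construct_environment(img, sigma=1.5, gamma=0.8):
    """Denoise with a separable Gaussian blur, then gamma-stretch contrast.

    sigma and gamma are illustrative; the paper does not state its values.
    """
    img = np.asarray(img, dtype=float)
    radius = 3
    k = gaussian_kernel(sigma, radius)
    # Replicate edges so the blur stays valid at the image borders.
    padded = np.pad(img, radius, mode="edge")
    # Separable Gaussian: filter rows, then columns.
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, rows, k, mode="same")
    blurred = blurred[radius:-radius, radius:-radius]
    # Gamma transform re-maps intensities monotonically; a moderate gamma
    # stretches tissue/background contrast without over-sharpening boundaries.
    return 255.0 * np.clip(blurred / 255.0, 0.0, 1.0) ** gamma
```

Because the transform is monotonic, boundary ordering is preserved; only the contrast between tissue and background changes.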


ROI Extraction
The OCT imaging results of the retina include the retina and choroid. The tomographic images of the capillary layer, middle vascular layer, and large vascular layer in the choroid will cause interference in CME segmentation. Therefore, before segmentation, the ROI is extracted to avoid the influence of choroidal imaging. CME is distributed in most layers of the retina; once the inner and outer boundaries of the retina are determined, the region to be segmented is fixed, which reduces the calculation time and improves the calculation speed. The boundary of the ROI is formed by the inner limiting membrane (ILM) and the outer Bruch membrane (OBM), as shown in Figure 2. The ROI is determined by calculating the row-wise gray mean distribution with Formula (1).
Given an image of size M × N pixels, the gray value of each pixel is I(i,j), where i is the row coordinate of the pixel and j is the column coordinate of the pixel. The average row gray value is calculated by using Formula (1).
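Formula (1) itself is not reproduced in this excerpt; given the definitions above, it is presumably the row-wise mean:

```latex
\bar{I}(i) \;=\; \frac{1}{N}\sum_{j=1}^{N} I(i,j), \qquad i = 1,\dots,M
```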
For the convenience of observation, the white curve on the left side of Figure 3 indicates the change in the average gray value of each row, and the blue straight line indicates the mean gray value of the whole image. The intersection points A and B of the two curves divide the image into three parts; from top to bottom, these are the posterior vitreous, retina, and sclera. The segmentation range of this algorithm is the retina, which is the region between points A and B. Using the wave algorithm, the image is segmented from top to bottom to obtain the ILM layer and the OBM layer, that is, the ROI.
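A minimal sketch of the row-mean test for locating the retina band, assuming the grayscale image is a 2D NumPy array. The function name and the exact boundary rule (first and last rows whose mean exceeds the global mean) are illustrative; in the paper, points A and B are then refined with the wave algorithm to obtain the ILM and OBM:

```python
import numpy as np

def retina_band(img: np.ndarray) -> tuple:
    """Locate the retina band (rows between points A and B).

    Row means are compared with the global image mean; A is the first row
    whose mean rises above the global mean and B is the last such row.
    Sketch of the described ROI step, not the paper's exact code.
    """
    row_mean = img.mean(axis=1)   # Formula (1): average gray value per row
    global_mean = img.mean()      # blue line in Figure 3
    above = np.flatnonzero(row_mean > global_mean)
    if above.size == 0:
        raise ValueError("no rows brighter than the global mean")
    return int(above[0]), int(above[-1])   # rows A and B bound the retina
```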


Contour Extraction by Omnidirectional Wave Operator
The basic idea of the wave algorithm is to transform the gray value of pixels into the potential energy of fluid mechanics; by modifying the potential energy equation of fluid mechanics, the algorithm gains good applicability to images. In the fluid potential energy equation (Formula (2)), g is the gravitational acceleration, h is the fluid height, gh is the gravitational potential energy (ϕ_g), p is the fluid pressure, ρ is the fluid density, p/ρ represents the pressure potential energy, v is the fluid velocity, and v²/2 represents the kinetic energy.
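Formula (2) itself is not reproduced in this excerpt; from the variable list above, it is presumably the per-unit-mass fluid energy in Bernoulli form:

```latex
E \;=\; \underbrace{gh}_{\phi_g} \;+\; \frac{p}{\rho} \;+\; \frac{v^{2}}{2}
```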
In the image, pixels are treated as fluid particles. Since the pixel is square, the density ρ = 1, and when the z-axis of the image in three-dimensional space is set as the gray value, the pressure p = 0 at height H, so the fluid pressure energy p/ρ = 0. The traditional fluid potential energy equation thus evolves into the wave potential energy equation (Formula (3)) and the wave correction equation (Formula (4)) for image processing. The fluid velocity is replaced by normalized gray difference values (v and v_q): v is the fluid velocity in the 3 × 3 template, and v_q is the fluid velocity in the 3 × 2 template located in front of the central pixel. σ is the speed regulation factor. A set of points containing the boundary lines is obtained by the wave potential energy equation, and the wave correction equation then extracts the boundary lines from this set of points.
When extracting the contour of the target area, the wave algorithm can only extract curves whose tangent direction is vertical to the operation direction. In the segmentation of the ten-layer structure of the retina, the boundary line of the retinal inter-layer structure is approximately vertical to the operation direction, so the wave algorithm can effectively extract the retinal layer structure. However, for a vertical biconvex oval or transverse spindle-shaped structure such as CME, the wave algorithm can only extract the curve vertical to the operation direction, so it cannot obtain the complete contour. Aiming at the unidirectional problem of the wave algorithm, this paper puts forward an omnidirectional wave operator and adds a direction adjustment function, which makes the wave algorithm have the ability to extract closed curves.
An omnidirectional wave operator can operate on the target in four directions. Specifically, the parameters in the equations are transformed by the direction adjustment function (Formula (5)), where (i, j) is the pixel coordinate in the image and (i′, j′) is the pixel coordinate after direction adjustment. θ is the rotation angle; a positive angle is counterclockwise and a negative angle is clockwise. θ is set to 0, π/2, π, and 3π/2 in turn.
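Formula (5) is likewise not reproduced in this excerpt; from the description, it is presumably the standard counterclockwise rotation matrix (the exact sign convention depends on the image coordinate system, so this is a reconstruction):

```latex
\begin{pmatrix} i' \\ j' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} i \\ j \end{pmatrix},
\qquad \theta \in \left\{ 0,\ \tfrac{\pi}{2},\ \pi,\ \tfrac{3\pi}{2} \right\}
```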
To give an example for a better understanding of the direction adjustment function, point A rotates by θ counterclockwise to A′ (Figure 4). The mathematical relationship between the coordinates of point A, the coordinates of point A′, and the rotation angle θ can be obtained by operating in polar coordinates. This relationship can be written as a matrix, as shown in Formula (5).

After direction adjustment, the parameters (Formulas (6) to (10)) in the wave potential energy equation and the wave correction equation have multidirectional properties. ϕ_g represents gravitational potential energy, where Z is the 3 × 3 template in which the central pixel is located, I(i′, j′) is the gray value of the central pixel of the template, g(i′, j′) is the Gaussian weighting function, and MAX is the maximum gray value of the whole image. ϕ_v represents kinetic energy, where v is the central velocity, v_q is the wavefront velocity, and σ is a regulation factor. The function of the wavefront velocity v_q is to eliminate speckle noise near the boundary line, which would otherwise cause a sharp increase in kinetic energy; the Q template is a 3 × 2 template in front of the center pixel, and v_q is the normalized difference value of adjacent pixels in the Q template. The function of the regulation factor σ is to prevent large, high-gray-value areas in the image from producing too much kinetic energy and over-segmenting boundary areas; σ controls the influence of the gray value of the 3 × 3 template on the kinetic energy, and its value is between 0 and 1. In the wave correction equation, K is a geometric shape determination function, which describes the geometric distribution of gray values near the point to be measured, and C is the gray difference judgment function, which describes the numerical difference in gray values near the point to be measured.
After processing with the wave potential energy equation and the wave correction equation, the contour of the target can be obtained. In this paper, an image containing CME is selected to verify the extraction results of the omnidirectional wave operator in each of the four directions separately, as well as the fused result of the four directions, as shown in Figure 5. Considering that the image is a two-dimensional plane, a two-dimensional coordinate system is established in the image plane. The rotation angle θ is the angle formed at the rotation center between any point A and its rotated point A′. When the operator turns to π/2, π, and 3π/2, the operating direction is parallel to the positive direction of the y-axis, the negative direction of the x-axis, and the negative direction of the y-axis, respectively. It can be seen from Figure 5 that the contour lines extracted in each of the four directions by the omnidirectional wave operator all have approximately the same tangent direction, and the results in any single direction are non-closed and incomplete, so the unidirectional wave algorithm cannot extract complete closed curves. As CME is elliptical, the wave algorithm alone cannot realize the complete segmentation of the cyst.
The algorithm proposed in this paper adds an orientation characteristic to the wave algorithm: curves in different directions are obtained by independent extraction in four directions and are then fused into a complete result, realizing the accurate and complete segmentation of CME.
The omnidirectional wave operator retains the advantages of the wave algorithm. It is insensitive to randomly distributed speckle noise because it uses gradual gray values to identify boundaries, so it avoids false segmentation caused by noise or local gray-level mutations. In addition, compared with segmentation methods such as the active contour model and the level set, which iterate many times to seek energy minimization, this pixel-based method has outstanding advantages in computing speed. Furthermore, the omnidirectional wave operator has directional characteristics and can extract arbitrary targets with closed curves. Among segmentation algorithms with the ability to distinguish directions, the omnidirectional wave operator still has advantages; for example, the segmentation method based on the directional graph [22] can only extract targets with a large number of curves in the same direction and depends on the reliability of the points' directions, so its extraction of single closed contours such as CME is poor.

Contour Integration
After the boundary of the target region is obtained by the omnidirectional wave algorithm, the results from the four directions need to be integrated. In contour integration, it is necessary to consider: (a) classifying boundary points and non-boundary points uniformly; and (b) eliminating other imaging interference, such as hard exudation [23]. Before contour integration, the segmentation results from the four directions are binarized, and the boundary points and non-boundary points are marked, respectively. Then, the initial direction is fixed, and the extraction results obtained by the operators in the other directions are superimposed. As long as a pixel is identified as a boundary in any direction, it becomes a point on the final omnidirectional boundary line. This method makes up for the shortcomings of the unidirectional wave algorithm and of other algorithms that lack the ability of omnidirectional target segmentation, and it yields a more complete target contour.
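The superposition rule described above amounts to a per-pixel logical OR of the four binarized direction results. A minimal sketch (function name assumed):

```python
import numpy as np

def fuse_directions(masks) -> np.ndarray:
    """Fuse binarized boundary maps from the four directions.

    A pixel becomes a point on the omnidirectional boundary as soon as
    any single direction marked it (logical OR), as described above.
    """
    fused = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        fused |= m.astype(bool)   # boundary in any direction -> boundary
    return fused.astype(np.uint8)
```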
Since the fundus may have hard exudation, which also has the contour of a closed curve, the algorithm in this paper filters out the segmentation results of hard exudation by judging the gray mean value of each connected domain. Since the CME capsule cavity is filled with liquid components such as plasma and vitreous humor, its imaging presents a low-reflectivity signal, and the internal gray value of the CME is low. Meanwhile, hard exudation presents as high-reflectivity particles, and its internal gray value is high. The gray-scale screening method uses the average gray value of the whole image as the threshold: for all connected areas, only those whose average gray value is lower than the threshold are retained, and those whose average gray value is higher than the threshold are filtered out; that is, the interference of hard exudation symptoms is eliminated.
In addition, since the cyst appears between the layers of the retina, the thickness of the cyst is within a certain range. According to the distribution of cysts in the data set, we choose 1.5 times the average thickness of the retina as the upper limit of the cyst size and 0.5 times the average thickness of the retina as the lower limit. By filtering the position of the fusion result, the contour information can be more accurate.
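Both screening steps (the gray threshold against the global image mean, and the 0.5x to 1.5x retinal-thickness bounds) can be sketched with a simple 4-connected flood fill. This is an illustrative reconstruction, not the paper's code, and it assumes the thickness bounds are applied to each component's height:

```python
import numpy as np
from collections import deque

def filter_components(mask, img, retina_thickness):
    """Keep only connected regions that look like cysts.

    A region survives if (a) its mean gray value in `img` is below the
    global image mean (cyst lumens are dark, hard exudates are bright) and
    (b) its height lies between 0.5x and 1.5x the average retinal thickness.
    """
    mask = np.asarray(mask, dtype=bool)
    img = np.asarray(img, dtype=float)
    h, w = mask.shape
    gray_thresh = img.mean()            # global mean gray as threshold
    lo, hi = 0.5 * retina_thickness, 1.5 * retina_thickness
    visited = np.zeros((h, w), dtype=bool)
    keep = np.zeros((h, w), dtype=np.uint8)
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not visited[si, sj]:
                # 4-connected flood fill collecting one component
                comp = []
                queue = deque([(si, sj)])
                visited[si, sj] = True
                while queue:
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not visited[ni, nj]:
                            visited[ni, nj] = True
                            queue.append((ni, nj))
                rows = [p[0] for p in comp]
                height = max(rows) - min(rows) + 1
                mean_gray = float(np.mean([img[p] for p in comp]))
                # Dark (below global mean) and plausibly sized -> keep.
                if mean_gray < gray_thresh and lo <= height <= hi:
                    for p in comp:
                        keep[p] = 1
    return keep
```

In practice a library labeling routine (e.g. `scipy.ndimage.label`) would replace the hand-written flood fill; the explicit version is shown only to keep the sketch self-contained.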

Evaluation Method
According to the correctness of the predicted result, the predicted result can be divided into four categories: true positive, false positive, true negative, and false negative. A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class. A false positive is an outcome where the model incorrectly predicts the positive class. Additionally, a false negative is an outcome where the model incorrectly predicts the negative class.
To quantitatively evaluate the correctness and feasibility of this algorithm, four indexes are used to evaluate the experimental results: the Dice index, precision, recall, and F1-score. The Dice index is used to measure the similarity between two sets; in image segmentation, it measures the overlap between the predicted region and the ground-truth region. The value of the Dice index is between 0 and 1; the larger the Dice index, the more accurate the prediction result. It is generally believed that a Dice coefficient higher than 0.70 indicates excellent consistency [24].
Precision, also known as the positive predictive value, indicates the proportion of predicted results that are correct, that is, the ratio of true positives to all positive predictions. Precision is in the range [0,1]; the larger the value, the more accurate the segmentation results. It reflects the anti-error detection ability of the segmentation algorithm.
Recall is also called sensitivity. Recall indicates the proportion of the real (ground-truth) region that is correctly predicted. Recall is in the range [0,1]; the larger the value, the more accurate the segmentation results. It reflects the anti-missing detection ability of the segmentation algorithm.
Precision and recall are often summarized as a single quantity, the F1-score, which is the harmonic mean of both measures and is in the range [0,1].
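Under the standard definitions above, the four indexes can be computed from binary masks as follows (a sketch; note that for a single pair of binary masks the Dice index and the F1-score coincide analytically, so the slightly different aggregate values reported in the Results may come from averaging per-image scores):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, precision, recall, and F1 for a pair of binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # correctly predicted positives
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # misses
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"dice": dice, "precision": precision, "recall": recall, "f1": f1}
```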

Results
The data used in this paper are from an SD-OCT public data set collected by Farsiu et al. from the Vision and Image Processing Laboratory of Duke University [14]. This data set was acquired with the standard Spectralis (Heidelberg Engineering, Heidelberg, Germany) 61-line volume scan protocol. The volume scans were 61 B-scans × 768 A-scans, with an axial resolution of 3.87 µm/pixel, lateral resolutions ranging from 10.94 to 11.98 µm/pixel, and azimuthal resolutions ranging from 118 to 128 µm/pixel. This data set contains 110 OCT images of the fundus collected from 10 patients, including normal fundus images and fundus images with CME. The public data set was manually marked by two experts from the Duke Eye Center, both fellowship-trained medical retina specialists on the faculty there. They have over 5 years of clinical experience working with diabetic subjects and many years of experience in studying OCT images.
The boundary obtained by this algorithm is the predicted value, and the cyst manually marked by two experts is the true value (Figure 6a). Among the existing algorithms, two algorithms with accurate segmentation results and strong robustness were selected for comparative experiments. DRLSE [11] and the Snake active contour model [25] both need an initial contour to be selected, and their segmentation of target areas with complex contours is not good (Figure 6d,e). Figure 6f is the segmentation result of the proposed algorithm, and it shows a good segmentation result. Table 1 shows the objective evaluation of the segmentation results obtained by DRLSE, the Snake active contour model, and the proposed algorithm. The Dice index, precision, recall, and F1-score of this algorithm are 81.1%, 88.8%, 75.0%, and 81.3%, respectively. Among the four indexes, the Dice index measures the similarity between the predicted result and the real result, and the F1-score reflects the combined performance of precision and recall; the result of the proposed algorithm is the best. The precision of all three algorithms is very high; that is, each algorithm has a strong ability to segment the real target area, and pixels in non-target areas are not judged as the target area, which indicates that the algorithms do not misjudge.
Recall measures the recognition rate of real targets and reflects the anti-missing detection ability of a segmentation algorithm. Among the three algorithms, the recall of the proposed algorithm is the highest. It should also be noted that manual annotation by experts is subjective; the annotations of the two experts differed slightly, so we used their average as the reference when evaluating the algorithm.
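The four indexes above can be computed pixel-wise from a predicted binary mask and a ground-truth mask. A minimal sketch follows, under the assumption that the paper's "accuracy" denotes precision (the fraction of predicted cyst pixels that are truly cyst, consistent with the "does not misjudge non-target pixels" reading):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise evaluation of a binary segmentation against ground truth."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()      # correctly detected cyst pixels
    fp = np.logical_and(pred, ~truth).sum()     # false detections
    fn = np.logical_and(~pred, truth).sum()     # missed cyst pixels
    precision = tp / (tp + fp) if tp + fp else 0.0  # the paper's "accuracy" (assumed)
    recall = tp / (tp + fn) if tp + fn else 0.0     # anti-missing detection ability
    denom = pred.sum() + truth.sum()
    dice = 2 * tp / denom if denom else 0.0         # overlap with the true region
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # balance of precision and recall
    return {"precision": precision, "recall": recall, "dice": dice, "f1": f1}
```

For binary masks, Dice and F1 coincide; the paper reports them separately because the per-image values are averaged independently over the data set.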
We compared the processing speed of the proposed algorithm with Snake and DRLSE. The average times of Snake and DRLSE are 49 s and 18.43 s, respectively, while the proposed algorithm takes only 1.2 s, reducing the operating time by an order of magnitude.
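Per-image runtime figures like those above can be obtained with a simple wall-clock harness; this is a generic sketch, where `segment` stands in for any of the three algorithms applied to one image:

```python
import time

def average_runtime(segment, images, repeats=1):
    """Average wall-clock time per image of a segmentation routine."""
    start = time.perf_counter()
    for _ in range(repeats):
        for img in images:
            segment(img)  # discard the result; only timing matters here
    return (time.perf_counter() - start) / (repeats * len(images))
```

Using `time.perf_counter` (a monotonic high-resolution clock) avoids the drift and coarse resolution of wall-clock time stamps when timing sub-second routines.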
According to the above evaluation indexes, the algorithm in this paper is not only superior to the other two algorithms in segmentation accuracy but also has a large advantage in operating speed: it is a fast, fully automatic algorithm with high segmentation accuracy.
This paper proposes an automatic segmentation method with multiple operating directions. The number of directions was chosen through experiments and theoretical analysis. When designing the algorithm, we initially expected, based on experience, that an arbitrarily shaped target with a closed contour would need to be segmented in eight directions, but in actual experiments we found that segmentation in four directions is sufficient to determine any closed curve contour. To verify this, we considered four, six, and eight directions and calculated the evaluation indexes of the segmentation results for each. Figure 7 shows the segmentation results of the three cases, and Table 2 lists the four evaluation indexes. It can be seen from Figure 7 that the four-direction and eight-direction operators segment better than the six-direction operator, which cannot fully identify the cyst. From the evaluation indexes, the differences in Dice index and F1-score between the four- and eight-direction operators are very small, so their segmentation performance is similar. We chose the four-direction operator over the eight-direction operator for the following reasons: (a) Recall is more important in disease detection applications. In disease screening and testing, we pay more attention to avoiding missed detections: even if the precision is lower, false detections can be removed by secondary screening or in-depth inspection, whereas missed detections cannot be tolerated. The four-direction operator has a higher recall (a lower miss rate) than the eight-direction operator.
(b) The authenticity of medical images. For six and eight directions, rotating the operator changes the image pixel distribution to some extent and affects the segmentation result; the four-direction operator does not. Even though this effect is small, the authenticity of the medical image should be maintained during processing.
(c) Computation speed. We compared the processing speed of operators with different numbers of operating directions. In the CME detection experiment, the four-direction operator takes 1.2 s, the six-direction operator 1.273 s, and the eight-direction operator 1.286 s; the fewer the directions, the shorter the computation time.
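The omnidirectional wave operator itself is not reproduced here, but the claim that four sweep directions suffice to determine a closed contour can be illustrated with a toy fusion scheme on a hypothetical binary edge map: sweep inward from each of the four image borders, mark everything encountered before the first edge pixel as outside, and take the complement as the segmented interior. This is a sketch under the assumption of a single closed, convex contour, not the paper's actual operator:

```python
import numpy as np

def directional_sweep(edge, axis, reverse):
    """Mark pixels encountered before the first edge pixel along one direction."""
    e = np.flip(edge, axis=axis) if reverse else edge
    # cumulative "an edge has been crossed" flag along the sweep axis
    seen = np.cumsum(e, axis=axis) > 0
    outside = ~seen
    return np.flip(outside, axis=axis) if reverse else outside

def interior_from_four_directions(edge):
    """Fuse four directional sweeps (left, right, top, bottom): anything
    reachable from a border without crossing the contour is outside; the
    rest, minus the edge itself, is the segmented interior."""
    edge = np.asarray(edge)
    outside = np.zeros(edge.shape, dtype=bool)
    for axis in (0, 1):           # 0: vertical sweeps, 1: horizontal sweeps
        for reverse in (False, True):
            outside |= directional_sweep(edge, axis, reverse)
    return ~outside & ~edge.astype(bool)
```

Each additional direction beyond these four only re-confirms pixels already classified by an axis-aligned sweep, which mirrors the experimental finding that six or eight directions bring no accuracy gain while costing extra computation.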

Conclusions
This paper proposes a method of automatically segmenting CME in fundus OCT images, which achieves rapid segmentation while maintaining good segmentation accuracy, applicability, and robustness. The algorithm treats image pixels as analogous to fluid particles: starting from the fluid potential energy equation in fluid mechanics, we propose a potential energy equation suitable for images and design an orientation adjustment function, based on the omnidirectional wave operator, to realize the segmentation of CME. By tracking the gradual gray-value characteristics of the region boundary, the algorithm is insensitive to randomly distributed speckle noise. Through ROI extraction, the interference of sub-choroidal vascular shadows can be filtered out while the calculation is accelerated. Compared with the manual segmentation results of experts, the Dice index, accuracy, recall, and F1-score of this algorithm are 81.1%, 88.8%, 75.0%, and 81.3%, respectively, showing good consistency.
The average operation time is 1.2 s, so the algorithm can provide an important basis for automatic detection in clinical ophthalmic diagnosis and treatment instruments.