Author Contributions
Conceptualization, X.F. and J.L.; methodology, J.L., X.F., H.C., L.M. and F.L.; software, J.L., X.F., H.C., L.M. and F.L.; validation, J.L., X.F., H.C., L.M. and F.L.; formal analysis, J.L., X.F., H.C., L.M. and F.L.; investigation, J.L., X.F., H.C., L.M. and F.L.; resources, J.L., X.F., H.C., L.M. and F.L.; data curation, J.L., X.F., H.C., L.M. and F.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L., X.F., H.C., L.M. and F.L. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Details of the constructed local gradient model.
Figure 2.
Flow chart of Hessian matrix background suppression model with local gradient significance.
Figure 3.
Overall flow chart of detection model.
Figure 4.
Original images of the sequence scenes and their corresponding 3D grayscale maps.
Figure 5.
(a–j): Difference maps and corresponding 3D maps of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method in scene A, respectively.
Figure 6.
(a–j): Difference maps and corresponding 3D maps of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method in scene B, respectively.
Figure 7.
(a–j): Difference maps and corresponding 3D maps of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method in scene C, respectively.
Figure 8.
(a–j): Difference maps and corresponding 3D maps of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method in scene D, respectively.
Figure 9.
(a–j): Difference maps and corresponding 3D maps of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method in scene E, respectively.
Figure 10.
(a–j): Difference maps and corresponding 3D maps of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method in scene F, respectively.
Figure 11.
Comparison of the six scenes before and after energy enhancement by the local multi-scale gradient maximum energy-enhancement model.
Figure 12.
Panels (a1–j1) show the detection results of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method on sequence A.
Figure 13.
Panels (a1–j1) show the detection results of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method on sequence B.
Figure 14.
Panels (a1–j1) show the detection results of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method on sequence C.
Figure 15.
Panels (a1–j1) show the detection results of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method on sequence D.
Figure 16.
Panels (a1–j1) show the detection results of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method on sequence E.
Figure 17.
Panels (a1–j1) show the detection results of Top hat, ANI, PSTNN, ASTTV, GST, NTFRA, NRAM, HB-MLCM, ADMD, and the proposed method on sequence F.
Figure 18.
Detection results of the FNPC detection model.
Figure 19.
ROC curves of the 10 detection models in the 6 sequence scenarios.
Table 1.
Pseudocode for target signal enhancement.
Step 1. Input the image P output by the improved Hessian matrix.
Step 2. Use multi-scale gradients to enhance the energy of image P and output the enhancement result in each direction.
Step 3. Initialize the parameters: the threshold, the statistical parameter, the candidate-target storage matrix, the constant for judging segmentation, and the segmented image.
Step 4. Compare the results in the four directions with the set threshold to update the statistical parameter.
Step 5. Use the judgment condition and the count number to decide whether each pixel qualifies as a candidate target, and update num.
Step 6. Using num, segment the candidate targets and output the segmented image.
Step 7. End.
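The steps above can be sketched in code. The sketch below is illustrative only: the function name, the scale set, the threshold, and the required direction count are assumptions, since the paper's symbols and update equations are not reproduced here. The directional gradient is taken as the minimum of the forward and backward differences, so that only pixels brighter than both neighbors along a direction respond.

```python
import numpy as np

def enhance_and_segment(P, scales=(1, 2, 3), threshold=0.5, count_required=3):
    """Sketch of the Table 1 procedure (parameter names are illustrative).

    P: 2-D float array output by the improved Hessian-matrix model.
    Returns a binary map of candidate targets.
    """
    # Step 2: for each of 4 directions (0, 45, 90, 135 degrees), keep the
    # maximum gradient response over the scales (multi-scale gradient maximum).
    shifts = [(0, 1), (1, 1), (1, 0), (1, -1)]
    direction_maps = []
    for dy, dx in shifts:
        best = np.zeros_like(P)
        for s in scales:
            fwd = np.roll(P, (s * dy, s * dx), axis=(0, 1))
            bwd = np.roll(P, (-s * dy, -s * dx), axis=(0, 1))
            # A target pixel is brighter than its surroundings on both sides.
            best = np.maximum(best, np.minimum(P - fwd, P - bwd))
        direction_maps.append(best)

    # Steps 4-5: count in how many directions each pixel exceeds the threshold.
    num = sum((d > threshold).astype(int) for d in direction_maps)

    # Step 6: segment pixels whose count meets the requirement.
    return (num >= count_required).astype(np.uint8)
```

A point-like bright spot on a flat background passes the test in all four directions, while one-sided edges respond in fewer directions and are suppressed.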
Table 2.
Pseudocode of the overall flow of the algorithm.
Step 1. Input the sequence images.
Step 2. Determine the parameters M and X in Formula (2), and initialize the parameter Z in Formula (1) to 5.
Step 3. Use Formulas (1) and (2) to calculate the local significance of the target; then combine Formulas (3)–(8) to complete background modeling and output the difference image P of Formula (8).
Step 4. Complete the energy enhancement with Formulas (9) and (10) and the pseudocode in Table 1, and output the energy-enhanced image.
Step 5. Apply Formulas (11) and (12) to enhance the difference between the target region and the background region, and input the target coordinates for Formula (13) according to the enhanced scene.
Step 6. Use Formula (13) to calculate the distance between each candidate target and the real target coordinates from Step 5, and the similarity between the candidate target region and the real target region.
Step 7. Combine with Formula (14) to complete sequence detection and output the target's trajectory.
Step 8. End.
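The overall flow can be expressed as a per-frame pipeline. In the skeleton below, each stage function is a trivial stand-in for the corresponding formulas (which are not restated in this section), so only the structure of the pipeline, not the actual model, is shown; all function names are assumptions.

```python
import numpy as np

# Placeholder stages standing in for the paper's formulas; each body is a
# deliberately simple stand-in, not the actual model.
def background_suppress(frame):          # stands in for Formulas (1)-(8)
    return frame - np.median(frame)

def energy_enhance(diff):                # stands in for Formulas (9)-(10) + Table 1
    return np.clip(diff, 0, None) ** 2

def brightest_pixel(enhanced):           # stands in for candidate selection
    return np.unravel_index(np.argmax(enhanced), enhanced.shape)

def detect_sequence(frames):
    """Sketch of the Table 2 flow: suppress background, enhance target
    energy, pick a candidate per frame, and chain them into a trajectory."""
    trajectory = []
    for frame in frames:
        diff = background_suppress(frame)
        enhanced = energy_enhance(diff)
        trajectory.append(brightest_pixel(enhanced))
    return trajectory
```

In the actual algorithm, the per-frame candidate would additionally be validated by the distance and similarity checks of Formulas (13) and (14) before being appended to the trajectory.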
Table 3.
Sequence scene-related information.
| Sequence | Sequence Size | Target Size | Image Size | Target Details |
|---|---|---|---|---|
| Sequence A | 296 frames | 2 × 2 | 621 × 501 | UAV in complex clouds. |
| Sequence B | 100 frames | 2 × 2 | 256 × 152 | UAV movement in air and ground background. |
| Sequence C | 302 frames | 3 × 3 | 481 × 251 | UAV motion in dark-bright layered background. |
| Sequence D | 876 frames | 2 × 2 | 481 × 251 | UAV motion in dark-bright layered background. |
| Sequence E | 300 frames | 3 × 3 | 640 × 512 | UAV movement in air and ground background. |
| Sequence F | 300 frames | 3 × 3 | 640 × 512 | UAV in complex clouds. |
Table 4.
Relevant computational parameters for each model.
| Method | Parameters |
|---|---|
| Top hat [55] | Structure shape: structure size 3 × 3, db = [0 1 1 1 0, 1 0 0 0 1, 1 0 0 0 1, 1 0 0 0 1, 0 1 1 1 0]; b = [0 1 1 0, 1 1 1 1, 1 1 1 1, 0 1 1 0] |
| ANI [56] | |
| PSTNN [23] | Patch size, slide step |
| ASTTV [26] | |
| GST [58] | Boundary width, filter size |
| NTFRA [57] | Patch size, slide step, step |
| NRAM [59] | Patch size, slide step |
| HBMLCM [19] | |
| ADMD [60] | |
| Proposed | |
Table 5.
Indicator values for each model in scenario A.
| Index | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| SSIM | 0.9941 | 0.9939 | 0.9985 | 1.0000 | 1.0000 | 0.9994 | 1.0000 | 0.9946 | 0.9893 | 1.0000 |
| SNR | 3.8100 | 2.8800 | 15.6600 | 61.5200 | 18.8900 | 10.0700 | 47.3400 | 39.7800 | 21.4000 | 61.8600 |
| IC | 16.1528 | 13.7604 | 20.5339 | 22.2776 | 18.9766 | 20.5339 | 20.5339 | 19.0612 | 20.4495 | 20.5339 |
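For reference, a local signal-to-noise ratio of the kind commonly reported for small-target detection can be computed as below. This is one common definition, (target mean − background mean) / background standard deviation, and may differ from the exact formulas used for the SNR and IC values in these tables, which are not restated here.

```python
import numpy as np

def local_snr(image, target_mask):
    """One common small-target SNR: the contrast of the target region
    against the background, normalized by background standard deviation.

    image: 2-D float array; target_mask: boolean array marking target pixels.
    """
    target = image[target_mask]
    background = image[~target_mask]
    # Small epsilon guards against a perfectly flat background.
    return (target.mean() - background.mean()) / (background.std() + 1e-12)
```

Under this definition, higher values indicate that the target stands out more strongly against residual background clutter after suppression.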
Table 6.
Indicator values for each model in scenario B.
| Index | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| SSIM | 0.9937 | 0.9968 | 0.9638 | 1.0000 | 1.0000 | 0.9987 | 0.9630 | 0.9708 | 0.9768 | 1.0000 |
| SNR | 3.3600 | 4.5400 | 17.8100 | 21.9100 | 24.5100 | 10.1400 | 20.5500 | 12.3500 | 22.0600 | 21.8000 |
| IC | 9.9217 | 10.4831 | 15.9631 | 12.4000 | 15.6710 | 16.1449 | 16.702 | 15.0016 | 16.5134 | 16.7027 |
Table 7.
Indicator values for each model in scenario C.
| Index | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| SSIM | 0.9994 | 0.9994 | 0.9721 | 1.0000 | 1.0000 | 0.9990 | 0.9769 | 0.9995 | 0.9992 | 0.9999 |
| SNR | 26.8900 | 20.7600 | 39.4000 | 51.1700 | 61.3500 | 13.5900 | 35.3100 | 44.2800 | 44.5300 | 16.4200 |
| IC | 3.8400 | 4.6761 | 4.3805 | 3.0965 | 2.0393 | 4.3658 | 4.5744 | 5.8280 | 6.4572 | 9.5680 |
Table 8.
Indicator values for each model in scenario D.
| Index | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| SSIM | 0.9993 | 0.9990 | 0.9914 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9995 | 0.9988 | 1.0000 |
| SNR | 17.4400 | 16.1000 | 69.5300 | 0.0200 | 54.2700 | 38.6000 | 50.4700 | 60.7200 | 35.9000 | 34.6400 |
| IC | 1.0000 | 1.0000 | 1.0393 | 1.1818 | 1.0495 | 1.0754 | 1.0754 | 0.8697 | 0.9655 | 1.1394 |
Table 9.
Indicator values for each model in scenario E.
| Index | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| SSIM | 0.9968 | 0.9993 | 0.9845 | 0.9921 | 1.0000 | 0.9947 | 0.9907 | 0.9988 | 0.9987 | 1.0000 |
| SNR | 8.0200 | 5.1500 | 25.8300 | 11.1000 | 68.4700 | 5.6600 | 56.8500 | 61.8400 | 44.5600 | 50.2900 |
| IC | 11.7408 | 10.8942 | 15.8657 | 16.9196 | 14.9649 | 13.7152 | 15.8657 | 13.2639 | 14.4949 | 16.0730 |
Table 10.
Indicator values for each model in scenario F.
| Index | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| SSIM | 0.9960 | 0.9976 | 0.9836 | 0.9998 | 1.0000 | 0.9996 | 0.9835 | 0.9996 | 0.9983 | 1.0000 |
| SNR | 6.6800 | 3.7200 | 28.7300 | 16.6500 | 68.9500 | 22.9000 | 60.4100 | 110.5600 | 53.6400 | 57.1600 |
| IC | 5.9050 | 4.5500 | 8.2615 | 7.8955 | 7.7495 | 7.6748 | 8.2615 | 6.8528 | 7.0710 | 8.3680 |
Table 11.
Background modeling time of each model in different scenarios (unit: seconds per frame).
| Scene | Top Hat [55] | ANI [56] | PSTNN [23] | ASTTV [26] | GST [58] | NTFRA [57] | NRAM [59] | HBMLCM [19] | ADMD [60] | UCTransnet [61] | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scene A | 0.1555 | 1.1242 | 1.7371 | 59.2074 | 0.1092 | 7.9126 | 38.6607 | 0.1705 | 0.1756 | 440.7594 | 4.3726 |
| Scene B | 0.0631 | 0.1420 | 0.2326 | 5.9643 | 0.0230 | 1.4706 | 1.6768 | 0.0699 | 0.0920 | 151.5486 | 0.5156 |
| Scene C | 0.0578 | 0.4358 | 0.3205 | 19.5844 | 0.0516 | 5.8255 | 10.5272 | 0.1373 | 0.1787 | 439.0609 | 1.5982 |
| Scene D | 0.1104 | 0.8034 | 0.6351 | 19.2231 | 0.0556 | 5.3351 | 11.0282 | 0.1076 | 0.1572 | 1080.6195 | 2.9033 |
| Scene E | 0.1098 | 2.0828 | 3.2974 | 58.1772 | 0.0946 | 10.8307 | 44.9906 | 0.1675 | 0.2001 | 195.0051 | 8.0268 |
| Scene F | 0.1209 | 2.0691 | 2.6908 | 56.4643 | 0.0717 | 10.8489 | 38.7814 | 0.1723 | 0.2245 | 195.0051 | 7.7177 |
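Per-frame timings of this kind can be measured with a simple wall-clock harness. The sketch below is an assumption about the measurement scheme (average wall-clock time over a sequence), not a description of how the paper's numbers were produced; the function name is illustrative.

```python
import time
import numpy as np

def time_per_frame(model_fn, frames):
    """Average wall-clock seconds per frame for a background-modeling
    function applied to a list of frames."""
    start = time.perf_counter()
    for frame in frames:
        model_fn(frame)
    return (time.perf_counter() - start) / len(frames)

# Example: timing a trivial median-subtraction stand-in on random frames.
frames = [np.random.rand(256, 152) for _ in range(10)]
t = time_per_frame(lambda f: f - np.median(f), frames)
```

Averaging over many frames smooths out per-frame jitter from the OS scheduler and cache effects, which matters when comparing fast filtering methods against slower optimization-based models.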