Figure 1.
The imaging geometry of the optical-ISAR joint observation system.
Figure 2.
Definition of LOS.
Figure 3.
Geometry of motion vectors for space targets.
Figure 4.
Joint observation imaging projection geometry for space targets.
Figure 5.
Flowchart of the proposed method.
Figure 6.
Network structure of OpticalSegNet.
Figure 7.
Structure diagram of FFM.
Figure 8.
Structure diagram of ACAM.
Figure 9.
Structure of ISARSegNet.
Figure 10.
Flowchart of the octree-space carving-based 3D reconstruction method.
Figure 11.
Flowchart of octree construction.
Figure 12.
Flowchart of the region offset correction method based on projection optimization.
Figure 13.
Simulation process of experimental data: (a) optical imaging; (b) ISAR imaging.
Figure 14.
Geometric relationships between the space targets and observation stations.
Figure 15.
Three-dimensional models of the space targets used: (a) Aqua, (b) CALIPSO, (c) CloudSat, (d) COROT, (e) Jason-1, (f) Meteor, (g) Sentinel, and (h) TG-1.
Figure 16.
Partial simulated optical images: (a) Aqua, (b) CALIPSO, (c) CloudSat, (d) COROT, (e) Jason-1, (f) Meteor, (g) Sentinel, and (h) TG-1.
Figure 17.
Partial simulated ISAR images: (a) Aqua, (b) CALIPSO, (c) CloudSat, (d) COROT, (e) Jason-1, (f) Meteor, (g) Sentinel, and (h) TG-1.
Figure 18.
LOS and equivalent LOS variations of the observation stations: (a–c) LOS variations in ARC1, ARC3 and ARC5; (d–f) equivalent LOS variations in ARC1, ARC3 and ARC5.
Figure 19.
Observation images and ground truth semantic segmentation masks: (a–c) optical images of Aqua, Meteor and TG-1; (d–f) ISAR images of Aqua, Meteor and TG-1; (g–i) semantic segmentation masks for (a–c); and (j–l) semantic segmentation masks for (d–f).
Figure 20.
Semantic segmentation results of Aqua’s optical images (a–h) and ISAR images (i–p) using different methods: (a,i) Deeplabv3+, (b,j) FCN, (c,k) CL-NL-Unet, (d,l) Pix2pixGan, (e,m) Segmenter, (f,n) Segformer, (g,o) Proposed, and (h,p) Threshold.
Figure 21.
Semantic segmentation results of Meteor’s optical images (a–h) and ISAR images (i–p) using different methods: (a,i) Deeplabv3+, (b,j) FCN, (c,k) CL-NL-Unet, (d,l) Pix2pixGan, (e,m) Segmenter, (f,n) Segformer, (g,o) Proposed, and (h,p) Threshold.
Figure 22.
Semantic segmentation results of TG-1’s optical images (a–h) and ISAR images (i–p) using different methods: (a,i) Deeplabv3+, (b,j) FCN, (c,k) CL-NL-Unet, (d,l) Pix2pixGan, (e,m) Segmenter, (f,n) Segformer, (g,o) Proposed, and (h,p) Threshold.
Figure 23.
Comparison of 3D reconstruction results: (a–c) reconstruction results of the CPR method for Aqua, Meteor and TG-1; (d–f) reconstruction results of the proposed method for Aqua, Meteor and TG-1; (g–i) ideal 3D structures of Aqua, Meteor, and TG-1. The colors represent the elevation of the target voxels.
Figure 24.
Results of region offset correction: (a–c) optimization iteration curves of Aqua, Meteor and TG-1; (d–f) offset estimation error distributions of Aqua, Meteor and TG-1.
Figure 25.
Comparison of the real target region, reconstructed reprojection region obtained by CPR, and reconstructed reprojection region of the proposed method for: (a–c) optical image of Aqua, (d–f) ISAR image of Aqua, (g–i) optical image of Meteor, (j–l) ISAR image of Meteor, (m–o) optical image of TG-1, and (p–r) ISAR image of TG-1.
Figure 26.
Three-dimensional reconstruction results of (a,d,g,j) Aqua, (b,e,h,k) Meteor and (c,f,i,l) TG-1 under Config1, Config2, Config3 and Config4. The colors represent the elevation of the target voxels.
Figure 27.
Three-dimensional reconstruction results of (a,d,g) Aqua, (b,e,h) Meteor and (c,f,i) TG-1 using ISEA, ST-ISEA and OFM, respectively. The colors represent the elevation of the target voxels.
Figure 28.
Variation of 3D reconstruction quality with proportion of images used for reconstruction and image interval for (a,d) Aqua, (b,e) Meteor and (c,f) TG-1.
Figure 29.
Variation of 3D reconstruction quality with proportion of offset images and target region offset for (a) Aqua, (b) Meteor and (c) TG-1.
Figure 30.
The processing flow of the measured data.
Figure 31.
Measured images: (a–c) optical images (Frames 1–3); (d–f) ISAR images (Frames 1–3).
Figure 32.
Comparative analysis of ground truth masks and semantic segmentation results: (a–c) ground truth masks of optical images (Frames 1–3); (d–f) ground truth masks of ISAR images (Frames 1–3); (g–i) segmentation results corresponding to (a–c); (j–l) segmentation results corresponding to (d–f).
Figure 33.
Three-dimensional imaging results: (a–c) 3D reconstruction results of optical images (Frames 1–3); (d–f) results of observing (a–c) along the normal direction; (g–i) 3D reconstruction results of ISAR images (Frames 1–3); (j–l) results of observing (g–i) along the normal direction. The colors represent the elevation of the target voxels.
Table 1.
Parameters of the used TLE data.
| TLE Name | Parameters |
|---|---|
| TLE1 | 1 39150U 13018A 23092.27047355 .00001617 00000-0 23877-3 0 9995 2 39150 98.0352 163.0146 0022971 76.1631 284.2129 14.76472717535358 |
| TLE2 | 1 44310U 19032A 23092.17633197 .00006723 00000-0 52032-3 0 9993 2 44310 44.9832 194.7597 0009778 146.1448 288.1752 15.02106084209691 |
| TLE3 | 1 51102U 22004A 23092.58401987 .00000276 00000-0 11131-3 0 9990 2 51102 98.5957 152.8512 0500395 303.1456 52.2629 13.83927097 62353 |
| TLE4 | 1 25544U 98067A 23091.10374725 .00020749 00000-0 37896-3 0 9992 2 25544 51.6419 350.4695 0007223 140.3984 6.4707 15.49341873389833 |
| TLE5 | 1 48274U 21035A 23091.59408903 .00036645 00000-0 37919-3 0 9995 2 48274 41.4735 262.9651 0005222 273.2786 165.3212 15.63969261109859 |
| TLE6 | 1 37820U 11053A 16266.35688463 .00025497 00000-0 24137-3 0 9991 2 37820 42.7662 24.7762 0015742 351.0529 104.2087 15.66280400285808 |
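The Parameters column packs each TLE's two lines into one string. As a minimal sketch (not part of the paper's method), the orbital elements can be read from line 2 assuming the standard TLE field order with whitespace-separated fields; note that the eccentricity field carries an implied leading decimal point, and that mean motion is fused with the revolution count in the final field.

```python
def parse_tle_line2(line2: str) -> dict:
    """Extract orbital elements from the second line of a TLE.

    Fields are taken by whitespace splitting, so the line must keep the
    standard TLE field order. Mean motion occupies the first 11
    characters of the last field (the rest is the revolution count).
    """
    f = line2.split()
    return {
        "inclination_deg": float(f[2]),
        "raan_deg": float(f[3]),
        "eccentricity": float("0." + f[4]),       # implied decimal point
        "arg_perigee_deg": float(f[5]),
        "mean_anomaly_deg": float(f[6]),
        "mean_motion_rev_per_day": float(f[7][:11]),
    }

# Line 2 of TLE1 from Table 1:
tle1_line2 = "2 39150 98.0352 163.0146 0022971 76.1631 284.2129 14.76472717535358"
elements = parse_tle_line2(tle1_line2)
print(elements["inclination_deg"])  # 98.0352
```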
Table 2.
Locations of the observation stations.
| Observation Station | Position |
|---|---|
| ISAR1 | Shanghai (31.10N, 121.36E, 17.70 m) |
| Optical1 | Shanghai (31.10N, 121.36E, 17.70 m) |
| ISAR2 | Beijing (39.91N, 116.39E, 35.00 m) |
Table 3.
Rendering parameters used for optical imaging simulation.
| Parameter | Value |
|---|---|
| Image size (cells × cells) | 512 × 512 |
| Defocus blur | |
| Motion blur (°) | |
| Gaussian noise | |
| Salt-and-pepper noise (%) | |
Table 4.
Imaging parameters of ISAR device.
| Parameter | Value |
|---|---|
| Data size (cells × cells) | 512 × 512 |
| Signal frequency (GHz) | 10, 12 and 14 |
| Bandwidth (GHz) | 2, 3 and 4 |
| Pulse repetition frequency (Hz) | 30 |
Table 5.
Comparison of MIOU and IOU for target region extraction results.
| Method | MIOU | IOU |
|---|---|---|
| Deeplabv3+ | 0.8632/0.8573 | 0.9078/0.9012 |
| FCN | 0.8561/0.8598 | 0.9037/0.9024 |
| CL-NL-Unet | 0.8921/0.8843 | 0.9256/0.9183 |
| Pix2pixGan | 0.8797/0.8764 | 0.9005/0.8936 |
| Segmenter | 0.8894/0.8792 | 0.9219/0.9015 |
| Segformer | 0.8821/0.8729 | 0.9158/0.9083 |
| Proposed | 0.9063/0.8957 | 0.9340/0.9202 |
| Threshold | - | 0.8391/0.8176 |
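The IOU and MIOU scores above follow the standard intersection-over-union definition. The following is a minimal sketch for the two-class (target/background) case; the flat 0/1 mask encoding is an illustrative assumption, not the paper's implementation.

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks given as flat 0/1 lists."""
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def miou(pred, gt, num_classes=2):
    """Mean IoU: average the per-class IoU over all classes."""
    per_class = [
        iou([int(p == c) for p in pred], [int(g == c) for g in gt])
        for c in range(num_classes)
    ]
    return sum(per_class) / num_classes

# Toy 4-pixel example with one false positive in the target class.
print(iou([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```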
Table 6.
Impact of different network architectures on semantic segmentation performance.
| Method | MIOU | IOU |
|---|---|---|
| Unet | 0.8603/0.8566 | 0.8908/0.8797 |
| Unet + IGPB | 0.8840 (+0.0237)/0.8790 (+0.0224) | 0.9091 (+0.0183)/0.8964 (+0.0167) |
| Unet + BCFO | 0.8856 (+0.0253)/0.8814 (+0.0248) | 0.9080 (+0.0172)/0.8962 (+0.0165) |
| Unet + BSB | 0.8727 (+0.0124)/0.8659 (+0.0093) | 0.9162 (+0.0254)/0.9014 (+0.0217) |
| Unet + IGPB + BCFO | 0.9005 (+0.0402)/0.8961 (+0.0395) | 0.9220 (+0.0312)/0.9095 (+0.0298) |
| Unet + IGPB + BSB | 0.8975 (+0.0372)/0.8931 (+0.0365) | 0.9259 (+0.0351)/0.9134 (+0.0337) |
| Unet + BCFO + BSB | 0.8986 (+0.0383)/0.8933 (+0.0367) | 0.9280 (+0.0372)/0.9162 (+0.0365) |
| Unet + IGPB + BCFO + BSB | 0.9032 (+0.0429)/0.8957 (+0.0391) | 0.9340 (+0.0432)/0.9202 (+0.0405) |
Table 7.
Comparison of RA, RI and VIOU between the two methods.
| Method | Aqua | Meteor | TG-1 |
|---|---|---|---|
| CPR | 93.57/55.19/53.17 | 92.87/42.39/41.05 | 91.15/55.02/52.23 |
| Proposed | 93.53/94.57/88.76 | 93.95/90.86/85.84 | 94.37/85.28/81.15 |
Table 8.
Comparison of RR-IOU across different methods.
| Method | Aqua | Meteor | TG-1 |
|---|---|---|---|
| CPR | 0.5317 | 0.4105 | 0.5223 |
| Proposed | 0.8876 | 0.8584 | 0.8115 |
Table 9.
Comparison of running time of different methods under different reconstruction region settings.
| Target | Reconstruction Region Setting | Time Consumption of the CPR Method (s) | Time Consumption of the Proposed Method (s) |
|---|---|---|---|
| Aqua | 150 × 150 × 150 | 392.6931 | 5.1620 |
| Aqua | 200 × 200 × 200 | 949.3754 | 12.6386 |
| Aqua | 300 × 300 × 300 | 3192.3571 | 40.7313 |
| Meteor | 150 × 150 × 150 | 419.7296 | 6.2699 |
| Meteor | 200 × 200 × 200 | 1005.1388 | 14.5124 |
| Meteor | 300 × 300 × 300 | 3401.8926 | 46.9426 |
| TG-1 | 150 × 150 × 150 | 489.7593 | 7.5549 |
| TG-1 | 200 × 200 × 200 | 1161.9075 | 15.1930 |
| TG-1 | 300 × 300 × 300 | 3915.6283 | 51.8342 |
Table 10.
Configuration of different setups for the proposed method.
| Configuration | Optical Device | ISAR Device | Target Region Extraction | Attitude Estimate | Offset Correction |
|---|---|---|---|---|---|
| Config1 | × | √ | OpticalSegNet and ISARSegNet | √ | √ |
| Config2 | √ | × | OpticalSegNet and ISARSegNet | - | √ |
| Config3 | √ | √ | OpticalSegNet and ISARSegNet | × | √ |
| Config4 | √ | √ | Threshold | √ | √ |
Table 11.
Comparison of RA, RI and VIOU under different configurations.
| Target | Config1 | Config2 | Config3 | Config4 | Proposed |
|---|---|---|---|---|---|
| Aqua | 85.53/68.97/61.76 | 84.73/61.83/55.63 | 55.71/36.53/28.31 | 90.79/79.05/73.18 | 93.53/94.57/88.76 |
| Meteor | 86.92/69.39/62.83 | 84.85/53.41/48.76 | 54.63/39.51/29.75 | 89.37/81.92/74.65 | 93.95/90.86/85.84 |
| TG-1 | 53.07/90.11/50.15 | 87.59/60.52/55.74 | 53.26/49.17/34.35 | 88.25/87.62/78.47 | 94.37/85.28/81.15 |
Table 12.
Comparison of running time of different methods under different configurations.
| Configuration | Reconstruction Time for Aqua (s) | Reconstruction Time for Meteor (s) | Reconstruction Time for TG-1 (s) | Average Runtime (s) |
|---|---|---|---|---|
| Config1 | 19.7328 | 24.2067 | 23.4719 | 22.4705 |
| Config2 | 20.4195 | 22.8146 | 26.3211 | 23.1851 |
| Config3 | 38.5421 | 47.2957 | 48.3629 | 44.7336 |
| Config4 | 41.4925 | 43.8162 | 48.9715 | 44.7601 |
Table 13.
Comparison of RA, RI and VIOU for different methods.
| Target | ISEA | ST-ISEA | Proposed |
|---|---|---|---|
| Aqua | 54.39/34.45/26.73 | 89.53/70.68/65.28 | 93.53/94.57/88.76 |
| Meteor | 53.26/37.95/28.47 | 88.94/73.28/67.16 | 93.95/90.86/85.84 |
| TG-1 | 51.98/44.93/31.75 | 89.39/71.10/65.57 | 94.37/85.28/81.15 |
Table 14.
Comparison of time consumption among different methods.
| Method | Reconstruction Time for Aqua (s) | Reconstruction Time for Meteor (s) | Reconstruction Time for TG-1 (s) | Average Runtime (s) |
|---|---|---|---|---|
| ISEA | 2353.29 | 2676.84 | 2937.41 | 2655.85 |
| ST-ISEA | 4373.62 | 4538.47 | 4271.95 | 4394.68 |
| OFM | 12.56 | 14.87 | 16.42 | 14.62 |
| Proposed | 5485.29 | 5538.56 | 6195.73 | 5739.86 |
Table 15.
Statistics of MIOU and IOU for semantic segmentation on measured data.
| Data Type | MIOU | IOU |
|---|---|---|
| Optical images | 0.8935 | 0.9265 |
| ISAR images | 0.8870 | 0.9153 |